average_bitcoiner
d3d74124ddfb5bdc61b8f18d17c3335bbb4f8c71182a35ee27314a49a4eb7b1d
sum divided by count
sp1qqvj5uth3ruspj8marnkr2eyvufsfu2ssggwyl4ma3ad3cxta86y6wqje6h6astqvrx5m9qpc34hahfavq90hwcdl3umnls9e9mef6nyaj5dcz0ev

Three raw eggs. Two seltzer tablets. And an ibuprofen. Chase it all with a Bloody Mary.

Replying to ⚡️🌱🌙

It’s a lot of hacking about and reading at the moment, as it’s quite an ambitious project for me.

It’s very reliant on the GPT-4 model at the moment, but I’m hoping to substitute in more powerful LLMs asap.

One major aspect for @jack to consider is that GitHub’s repo monopoly is the bedrock of GPT-4’s monopoly on AI coding ability.

To have an open source LLM that can surpass GPT-4’s coding abilities, we really require a large open source code repo that can be easily structured as training data.

Without that, OpenAI’s LLMs are going to be eclipsed on chat, yes, but will remain stubbornly superior for coding.

Of course, we now have a massive code generation engine in GPT-4, so it’s likely they sow their own downfall; we just need to direct the output of that engine into a suitable open source location with an inherent training data schema.

Everything else is open source.

I think it’s possible to have a USB with less than 80kb on it that can iPXE boot (ipxe.org) from the web into a RAM-only Linux instance (slax.org).
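For a sense of how small that bootstrap really is, an iPXE script of the kind implied is only a few lines. This is a sketch only: the URLs, kernel/initrd paths, and boot parameters below are hypothetical placeholders, not real Slax locations; check slax.org and ipxe.org for the actual values.

```
#!ipxe
# Get a DHCP lease, then chainload a RAM-only Linux over HTTP.
dhcp
# Hypothetical paths -- substitute the real Slax kernel/initrd URLs
# and the cmdline option that keeps the system entirely in RAM.
kernel http://boot.example.org/slax/vmlinuz
initrd http://boot.example.org/slax/initrfs.img
boot
```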

From there it can download self-assembling code and begin downloading and installing db libraries and asking for API keys. Then it maps the environment, completes the stand-up process, and asks for a mission.
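The stand-up sequence above could be sketched roughly like this. Every function is a hypothetical stub I’ve named for illustration; a real agent would be fetching and executing code over the network, which is exactly why the RAM-only environment matters.

```python
# Sketch of the stand-up sequence: install libraries, collect API keys,
# map the environment, then ask for a mission. All stubs are hypothetical.
import getpass


def install_libraries() -> None:
    # Placeholder: a real agent might shell out to pip here, e.g.
    # subprocess.run(["pip", "install", "psycopg2-binary"])
    pass


def collect_api_keys() -> dict:
    # getpass keeps secrets out of the terminal scrollback
    return {"llm": getpass.getpass("LLM API key: ")}


def map_environment(api_keys: dict) -> dict:
    # Placeholder for probing the host (disk, network, installed tools)
    return {"keys": api_keys, "ram_only": True}


def ask_for_mission(env: dict) -> str:
    return input("Mission: ")


def stand_up() -> str:
    install_libraries()
    env = map_environment(collect_api_keys())
    return ask_for_mission(env)
```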

It’s ephemeral for security reasons, because it has full permissions and can write whatever it sees fit.

It will break a prompt down into token-limit workscopes, develop each scope, and break it down again until it is able to code the functionality.

The splitting of the prompt into workscope chunks, the further splitting, and then the tracking and reassembly of the full system is the most difficult part of this. But if human orgs can do this, so can LLMs.
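The recursive splitting could be sketched like this. The token counter and splitter here are crude hypothetical stand-ins: a real agent would use the model’s tokenizer and ask the LLM to propose coherent sub-scopes rather than cutting text in half.

```python
# Recursive "workscope" decomposition: split a scope until each chunk
# fits a token budget, then return the flat list of chunks.

def count_tokens(text: str) -> int:
    # Crude proxy: whitespace word count stands in for a real tokenizer.
    return len(text.split())


def decompose(scope: str, split_fn, limit: int) -> list[str]:
    """Recursively split a workscope until each chunk fits the token limit."""
    if count_tokens(scope) <= limit:
        return [scope]
    chunks = []
    for sub in split_fn(scope):
        chunks.extend(decompose(sub, split_fn, limit))
    return chunks


def halve(scope: str) -> list[str]:
    # Hypothetical splitter: cuts the word list in half. A real agent
    # would ask the LLM for semantically coherent sub-scopes instead.
    words = scope.split()
    mid = len(words) // 2
    return [" ".join(words[:mid]), " ".join(words[mid:])]
```

The reassembly problem is the inverse walk: each chunk’s output has to be tracked against its position in this tree and merged back up.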

And I’m over here just trying to LoRA one model onto a codebase so I can ask it questions while I develop.

You’re sounding way more advanced.

This. Exactly this. nostr:note1zutrgdek8fqk5pc5kz6e88qf8cjz2yp2c6xvsf5y7k0shxzsdnxsfm89q7

Do we know #[0]​ and #[1]​ made it home ok? 😧

😂

Had to make sure I didn’t inadvertently discover a poorly programmed bot handing out 1 sat zaps for every reply.

🤔

Let’s try again. I think maybe 3 sats? Is this a bot?

I feel like nostr is alive with fresh faces. Is it all my other Miami FOMO people who wish they were there for the nest party?

#[0]​ or some other #[1]​ dev snuck in an Easter egg that was totally not cool when you’re a little stoned. 😅

I look at Damus and it’s telling me to wake up. Then I have to question whether I fell asleep or not. But then I wonder

Me: “How the fuck would Damus know if I was asleep?!?”

Also me: “it wasn’t real, go do a unique string search. You know the repo is on GitHub”

Me: “oh shit it’s here! But still, how do they know…?”

var mystery: some View {
    VStack(spacing: 20) {
        Text("Wake up, \(Profile.displayName(profile: state.profiles.lookup(id: state.pubkey), pubkey: state.pubkey).display_name)", comment: "Text telling the user to wake up, where the argument is their display name.")
        Text("You are dreaming...", comment: "Text telling the user that they are dreaming.")
    }
    .id("what")
}

Also me: “damus”

Wait. Who the fuck is #[1]​ ? nostr:note15dx7emx5d2hm8gctxjmjj38sg4eantvjvzzheh75mj43q24xs6xqtlsls0

Fuck yea #[1]​ LFG!!!! nostr:note1g0tjrps0c6cfgq3mpcwtalwdfhax0a7yfj0qc23cvx9pmynfntdsrw9pny

🤣

When nostr men’s calendar?

Can’t wait for the meetup attendance! We’ll have to do Shenandoah ₿itcoin Club At Night. 🕺🏿 😂