Luke Childs ☂️
bae77874946ec111f94be59aef282de092dc4baf213f8ecb8c9e15cb7ed7304e
Cofounder & CTO Umbrel. Previously BitcoinJS, Browserify.

Over time I started to prefer zero/low config options with sensible defaults over manually tweaking everything myself. Fish is really great for that.

You could also do MuSig for payments and mint unilateral exit, and only the script path for user unilateral exit. Maybe that's the best of both worlds.

Simple stateless backups and a minimal onchain footprint in the optimistic best case. Worse privacy and more onchain use in the rare event that the mint disappears.

MuSig leaks nothing about the contract on chain and is more block space efficient.

Using the script path for unilateral exit leaks to any observer the fact that you were most likely using a Spillman channel. It's also much larger.

Maybe simpler backups are more important to people than privacy and block space. And if that’s the case my proposal also works with the script path variant.

There are a few different ways to do Spillman channels. The way I propose requires holding onto a presigned transaction for unilateral exit, which is only needed if the mint is uncooperative. The presigned transaction is non-deterministic, so you need to store it alongside your seed. However, unlike Lightning, if you lose it the mint can't cheat you out of any money by settling a bad state.

You can make the user's unilateral exit path deterministic by implementing it as a timelocked script path instead of a timelocked presigned transaction. Then you need only your seed, and nothing else, to unilaterally exit.
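To illustrate the backup difference, here's a conceptual sketch in shell. Nothing here is real Bitcoin code: `derive_exit_script` and the hashing stand in for actual taproot leaf construction, and the key and timelock values are made up. The point is that a script-path exit is a pure function of the seed and the channel's public parameters, whereas a presigned transaction embeds signature data that can't be regenerated.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical sketch: deterministic script-path exit vs presigned tx.
# sha256 stands in for real taproot script construction.

derive_exit_script() {
  local seed="$1" mint_pubkey="$2" timelock="$3"
  # Everything here is derivable from the seed plus public channel
  # params, so losing the output is harmless: just re-derive it.
  printf '%s:%s:%s' "$seed" "$mint_pubkey" "$timelock" | sha256sum | cut -d' ' -f1
}

seed="my wallet seed words"
mint_pubkey="02abc"   # hypothetical mint key
timelock=144          # hypothetical timelock in blocks

a="$(derive_exit_script "$seed" "$mint_pubkey" "$timelock")"
b="$(derive_exit_script "$seed" "$mint_pubkey" "$timelock")"

# Deterministic: the same inputs always reproduce the same exit path.
[ "$a" = "$b" ] && echo "script path re-derived from seed"

# A presigned tx, by contrast, contains a signature made with a random
# nonce at signing time; it can't be re-derived and must be backed up.
```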

However I like the presigned tx model better since it results in everything being a MuSig spend.

The new LlamaGPT self-hosted AI app on nostr:npub1aghreq2dpz3h3799hrawev5gf5zc2kt4ch9ykhp9utt0jd3gdu2qtlmhct is uncensored, just saying...

nostr:note1c0yxqnnmjpe74j0ku9sm7nkgh93jjlmdjqdvtljzy02y58pa95aqkuss92

GPT-4 is very capable, the resulting bash scripts work well about 95% of the time. However there are often subtle bugs or edge cases that aren’t handled unless you explicitly tell it to look out for them.

After a few iterations of adding extra details to handle edge cases you can get something very high quality.
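For example, here's a hand-written illustration (not actual GPT-4 output) of the kind of subtle bug that slips through: a naive generated loop breaks on filenames containing spaces, which is exactly the sort of edge case you have to ask for explicitly.

```shell
#!/usr/bin/env bash
set -euo pipefail

cd "$(mktemp -d)"
touch "report.txt" "my notes.txt"

# Naive loop an LLM often emits: word-splits "my notes.txt" into two args.
count_naive=0
for f in $(ls); do
  count_naive=$((count_naive + 1))
done

# Robust version you get after telling it to handle spaces: glob directly.
count_robust=0
for f in *.txt; do
  count_robust=$((count_robust + 1))
done

echo "naive=$count_naive robust=$count_robust"   # naive=3 robust=2
```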

Results are much worse on open source LLMs but they’re catching up quickly.

Not if you run it on your host, but you can also run it in a Docker sandbox and mount in only the directories you want it to have access to.

humanscript is an inferpreter. A script interpreter that infers commands from natural language using AI. There is no predefined syntax, humanscripts just say what they want to happen, and when you execute them, it happens.

https://github.com/lukechilds/humanscript

This is a humanscript called tidy-screenshots. It takes an unorganised directory of screenshots and organises them into directories based on the month the screenshot was taken.

It can be executed like any other script.
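An illustrative reconstruction of what such a humanscript might look like (not the actual file from the repo; the `humanscript` shebang is how the inferpreter is invoked):

```
#!/usr/bin/env humanscript

Loop over all the screenshots in the current directory.

Work out the month each screenshot was taken from its filename or
modification date.

Move each screenshot into a directory named after that month, like
2023-07, creating the directory if it doesn't exist.
```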

https://void.cat/d/Rf3GkVQbCtg6NSfzUndkou.webp

The LLM inferpreted the humanscript into the following bash script at runtime.

https://void.cat/d/Br86EpyZev12HtsxtYka3i.webp

The code is streamed out of the LLM during inferpretation and executed line by line so execution is not blocked waiting for inference to finish. The generated code is cached on first run and will be executed instantly on subsequent runs, bypassing the need for reinferpretation.
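The caching behaviour can be sketched roughly like this. It's a simplified stand-in: `generate_bash` fakes the LLM call, and the real inferpreter's cache layout may differ. The idea is just to key the cache on a hash of the humanscript so that any edit triggers reinferpretation.

```shell
#!/usr/bin/env bash
set -euo pipefail

CACHE_DIR="$(mktemp -d)"   # a real tool would use a persistent dir

# Stand-in for the LLM: "inferprets" a humanscript into bash.
generate_bash() {
  echo 'echo "organising screenshots..."'
}

run_humanscript() {
  local script_file="$1"
  # Cache key: hash of the humanscript, so edits invalidate the cache.
  local key cached
  key="$(sha256sum "$script_file" | cut -d' ' -f1)"
  cached="$CACHE_DIR/$key.sh"
  if [ ! -f "$cached" ]; then
    generate_bash "$script_file" > "$cached"   # slow path: inference
  fi
  bash "$cached"                               # fast path on reruns
}

script="$(mktemp)"
printf 'Organise screenshots by month\n' > "$script"
run_humanscript "$script"   # generates the bash, caches it, runs it
run_humanscript "$script"   # instant: runs the cached bash directly
```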

https://void.cat/d/CCLBU6ZNWq5bXnioMGkkcB.webp

The humanscript inferpreter supports a wide range of LLM backends. It can be used with cloud-hosted LLMs like OpenAI's GPT-3.5 and GPT-4 or locally running open source LLMs like Llama 2.

You can run humanscript in a sandboxed Docker environment with a single command if you want to have a play.

https://github.com/lukechilds/humanscript#install-humanscript

Replying to Laser

Y'all need to study https://semver.org/; never release new features in a patch release. Should have cut 0.6.0, instead.

It’s actually valid semver during 0.x.x releases. https://semver.org/#spec-item-4

Technically, according to the spec, you can do anything during 0.x.x releases, but it's common convention to shift the meanings across by one place, so:

- 0.5.3 > 0.6.0 = major bump

- 0.5.3 > 0.5.4 = minor bump

- No way to define a patch bump

This is the convention we’ve been following since initial launch of 0.1.0.

It idles around 5W and bursts to about 15W.

Stripe, our payment processor, doesn't support Bitcoin out of the box, but we're working on adding Bitcoin as a second payment method!

Thanks for the tip but I'm using https://carrd.co which doesn't support proxying. I'll run a proxy to it on my dedicated server and add the nostr.json file there.

Hmm, yeah I get that, but aren't you doing the opposite here?

The HTTP spec already states how 301/307 redirects should be handled. The NIP-05 rules say to do the lookup as HTTP specifies but not to follow redirects as HTTP specifies.

So if an app is requesting some HTTP resource and a NIP-05 JSON file it has to follow two different sets of rules to resolve both those resources instead of just following the HTTP spec.

Awww man that sucks, that redirect massively simplifies my server setup. Thanks for the heads up though.

#[3]​ what is the security issue with redirects? Couldn’t anything bad done by a redirect also be done by a proxy?