I wrote about my process for publishing annotated versions of my talks - like https://simonwillison.net/2023/Aug/3/weird-world-of-llms/ - and shared a new tool I built to help with that process, allowing me to OCR my slides for alt= attributes and type up annotations for them in Markdown
https://simonwillison.net/2023/Aug/6/annotated-presentations/
Here's the new tool I built: https://til.simonwillison.net/tools/annotated-presentations - plus an animated GIF showing how it can be used
(I used ChatGPT extensively in building it, the prompts I used are included in my write-up of the tool)
Used these weeknotes as an excuse to collect more examples of annotated slides and notes I've put together for past talks https://simonwillison.net/2023/Aug/5/weeknotes-plugins/#more-annotated-talks

The first time I did this was back in 2010 for a three hour Redis tutorial at NoSQL Europe - I think it's a really good format for capturing the maximum possible value for the time taken to put together a talk https://static.simonwillison.net/static/2010/redis-tutorial/
Weeknotes: Plugins for LLM, sqlite-utils and Datasette
nostr:npub1f6a33pfyp67y8llhunlhrf855xm47n3fdqymvxfj7yx78c6vqf4scxpnql This has me interested... So by grouping your python deps there you'll end up with 1 PR for all the pip dependencies each day, right?
nostr:npub1jvquj5k85vyxklmhhvj7cvs9fq4l9a3e025mn3vksgu7wfv4vhnq3n7amh Yup, seems to work exactly like that
Question for Dependabot users: have you found a good pattern for landing multiple dependency bumps at once?
I have continuous deployment configured but I don't want to ship the whole repo every time I land a Dependabot PR
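One answer (touched on in the replies above) is Dependabot's grouped updates feature, which was in beta around this time: it collapses multiple dependency bumps into a single PR. A minimal sketch of the configuration, assuming the pip ecosystem - the group name `python-dependencies` here is arbitrary:

```yaml
# .github/dependabot.yml - group all pip bumps into one daily PR
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "daily"
    groups:
      python-dependencies:
        patterns:
          - "*"
```

With a config like this, one merged PR triggers one deploy, rather than a deploy per bumped dependency.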
nostr:npub16jp3cncj4ddqgj6luh4pcdmvv7v6ke0njc63869vqssyxam7qcfstjmvvf One thing I particularly enjoy about the "don't be toxic" one is that it can only work if the model knows what toxic content looks like!
nostr:npub1f6a33pfyp67y8llhunlhrf855xm47n3fdqymvxfj7yx78c6vqf4scxpnql the “please don’t share false information” part of that system prompt seems misguided at best. Is the model supposed to be able to distinguish between true and false information?
nostr:npub1jx3dzll682rmfgzla5wwyjjsk5f05eml4845nrxvww3ulnt3shxqq0m6x6 hah, yeah that's a pretty optimistic thing to put in the system prompt!
Llama 2 Chat is *notoriously* quick to moralize and tell you off, but it turns out that's entirely down to the default system prompt - with LLM you can pass a new --system prompt and get it to behave more usefully
Random thought concerning personal AI ethics: it's rude to publish something that would take someone longer to read than it took you to write it
nostr:npub1hw8cwkkqlwg2c249s80gf3w8nyq8kvpsc0ewgz82mxlwlqr9zacsyg0f7u nostr:npub1ug4ll77mdn68v59egv0ehqy0a5rjdst9c3d23rkmdht4t4sjznwsl4pk2f yes I'd love to learn that (and teach it to others)
nostr:npub1wg2mpl0kdjwvyyv4c5m8v0l7lj9jx6hclccnxr06z9qkmarnz5gs2vmxf5 that seems to work if you add it to the Home Screen, whether or not you have the toggle enabled in the advanced settings panel though
nostr:npub1yr7gdjre9jc5hwd7hyayr7t30mnfdfsm30nxrw25s2ktvasch64stqvfcw sure, the quality of the docstring is unlikely to be good - the type annotations are genuinely useful though
This is more intended as a demo of the --rexec/llm combo than an "and this is how you should add documentation to things" recipe!
Weeknotes: Self-hosted language models with LLM plugins, a new Datasette tutorial, a dozen package releases, a dozen TILs https://simonwillison.net/2023/Jul/16/weeknotes/
Tucked away in my weeknotes is this fun new Symbex feature - you can now use `--rexec` to find a symbol, pipe it through an executable command and edit the file in-place to replace it with the output
So you can do stuff like this:
`symbex my_function --rexec "llm --system 'add type hints and a docstring'"`
https://github.com/simonw/symbex/blob/main/README.md#replacing-a-matched-symbol-by-running-a-command
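The pattern behind `--rexec` - locate a symbol, pipe its source through a subprocess, splice the output back into the file - can be sketched in a few lines of Python. This is an illustrative reimplementation under my own assumptions, not symbex's actual code, and the `rexec` helper name is hypothetical:

```python
import ast
import subprocess
from pathlib import Path


def rexec(path: str, name: str, command: list[str]) -> None:
    """Find the named top-level function in path, pipe its source
    through command, and rewrite the file with the output spliced in."""
    source = Path(path).read_text()
    lines = source.splitlines(keepends=True)
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name == name:
            # ast line numbers are 1-based; end_lineno is inclusive
            start, end = node.lineno - 1, node.end_lineno
            original = "".join(lines[start:end])
            # Run the command with the symbol's source on stdin
            replacement = subprocess.run(
                command, input=original, capture_output=True,
                text=True, check=True,
            ).stdout
            lines[start:end] = [replacement]
            Path(path).write_text("".join(lines))
            return
    raise ValueError(f"No function named {name!r} in {path}")
```

The real tool handles classes, methods, async functions and wildcard symbol matching; the core trick is the same in-place splice shown here.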

nostr:npub1jlvvus75texf4xaamyndnzgxnhxtack2sreyjmsmrz5ywk2w87yq4vctnj hah, I quite like that
Wow there is a ton of fascinating stuff tucked deep in the Settings -> Safari -> Advanced -> Experimental Features menu
Notifications is "off" by default - I've tried turning it on but I'm having trouble finding an example site that can send web push notifications and doesn't user-agent detect iOS Safari and disable the feature

I have a hunch that there are lots of people out there for whom the ability to have a computer help them write is a massively valuable thing, but their stories have so far not attracted much attention in the wider AI discourse
I'd love to upgrade that hunch with actual information
I'd love to hear more opinions on generative AI from people who aren't confident writers
I feel like most of the commentary I see is from people who write with confidence - almost by definition, since writing confidently is an important prerequisite for widely broadcasting your opinions on things