yeah, LLMs can execute code via tool/function calls. That's how dave works:

nostr:nevent1qqstjlq2lxmhmc00j6j3ttq2dwwchgfjrpu58q2fe0ef2yyq9l4upwcpz4mhxue69uhhyetvv9ujumt0wd68ytnsw43qz9rhwden5te0wfjkccte9ejxzmt4wvhxjmcpzdmhxw309aex2mrp0yhx5c34x5hxxmmdqyg8wumn8ghj7mn0wd68ytnhd9hx2n6h0x8

Discussion

Is this a generalizable framework that can be extended to all LLMs? My idea behind an ephemeral runtime protocol is to create a universal language for LLMs to do this. If you've made one already, then great!

Where can I find out more about tool calls?

Most AI backends try to support the OpenAI API, which includes tool calls and tool responses as part of its chat completions API:

https://platform.openai.com/docs/guides/function-calling?api-mode=chat
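Roughly, a single tool-call round trip with the official openai Python SDK (v1.x) looks like the sketch below. This is just an illustration, not how dave is actually wired up: the `run_python` tool, the model name, and the bare `subprocess` "sandbox" are placeholder assumptions.

```python
# Minimal tool-calling round trip (openai Python SDK v1.x).
# Assumes OPENAI_API_KEY is set; run_python is a made-up example tool.
import json
import subprocess

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute a Python snippet and return its stdout",
        "parameters": {
            "type": "object",
            "properties": {
                "code": {"type": "string", "description": "Python source to run"},
            },
            "required": ["code"],
        },
    },
}]

messages = [{"role": "user", "content": "Compute 2**64 exactly, don't guess."}]

# 1. The model decides to call the tool instead of answering directly
#    (a sketch: a real agent must handle the case where it doesn't).
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)

# 2. The host app, not the model, actually runs the code.
#    A real agent would sandbox this instead of shelling out directly.
result = subprocess.run(["python3", "-c", args["code"]], capture_output=True, text=True)

# 3. The tool output goes back into the conversation as a "tool" message,
#    and the model writes its final answer from it.
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": result.stdout})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```

The key point is that the LLM never executes anything itself: it only emits a structured request, the host runs it (hopefully sandboxed) and feeds the result back as a tool message.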