Yeah, LLMs can execute code via tool/function calls. That's how dave works:
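For illustration, here's a minimal sketch of that loop in Python using the OpenAI SDK. It's not dave's actual code; the `run_python` tool, the model choice, and the prompt are all made up for the example, and the sandboxing a real setup would need is omitted.

```python
import contextlib
import io
import json

from openai import OpenAI  # works against OpenAI or any OpenAI-compatible backend

client = OpenAI()

# A hypothetical code-execution tool; the name and schema are invented for this sketch.
tools = [{
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute a Python snippet and return whatever it prints.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string", "description": "Python source to run"}},
            "required": ["code"],
        },
    },
}]

messages = [{"role": "user", "content": "What is 2**32? Work it out with the run_python tool."}]

# 1) The model sees the tool schema and may answer with a tool call instead of text.
reply = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
).choices[0].message

if reply.tool_calls:
    call = reply.tool_calls[0]
    args = json.loads(call.function.arguments)  # arguments arrive as a JSON string

    # 2) The client (not the model) actually runs the code -- sandbox this in real use.
    stdout = io.StringIO()
    with contextlib.redirect_stdout(stdout):
        exec(args["code"])

    # 3) Feed the output back as a "tool" message and let the model finish its answer.
    messages.append(reply)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": stdout.getvalue()})
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```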
Is this a generalizable framework that can be extended to all LLMs? My idea behind the ephemeral runtime protocol is to create a universal language for LLMs to do this. If you've made one already, then great!
All LLMs with tool-call support, yeah. Here's a demo with a local instance of Qwen.
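Roughly, the same client code just gets pointed at a local OpenAI-compatible server. The URL, port, and model tag below are assumptions (e.g. Ollama's `/v1` endpoint), not the exact setup from the demo:

```python
from openai import OpenAI

# Same OpenAI-style client, aimed at a local server instead of OpenAI itself.
# Endpoint and model tag are assumptions; adjust for whatever is serving Qwen locally.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

tools = [{  # same shape of tool schema as in the sketch above, trimmed down
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Execute a Python snippet and return whatever it prints.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

reply = client.chat.completions.create(
    model="qwen2.5:7b",
    messages=[{"role": "user", "content": "What is 2**32? Use the run_python tool."}],
    tools=tools,
)
print(reply.choices[0].message.tool_calls)
```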
Where can I find out more about tool calls?
Most AI backends try to support the OpenAI API, which includes tool calls and tool responses as part of its chat completions API:
https://platform.openai.com/docs/guides/function-calling?api-mode=chat
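In short, per that guide you pass a `tools` array of JSON-schema function definitions with the request, and the model replies with a `tool_calls` entry when it wants one run. A quick sketch of the shapes involved (illustrative names and values only):

```python
# Shape of a tool definition in the chat completions API, per the function-calling guide above.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",            # the model picks tools by this name
        "description": "Get the current weather for a city.",
        "parameters": {                   # JSON Schema describing the arguments
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# When the model decides to call it, the assistant message carries a tool_calls list:
#   response.choices[0].message.tool_calls[0].function.name       -> "get_weather"
#   response.choices[0].message.tool_calls[0].function.arguments  -> '{"city": "Berlin"}'  (JSON string)
#
# You run the function yourself, send the result back as a "tool" role message that
# references the call's id, and the model writes its final answer from that.
```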