peer experience on the fly
Each peer needs the tools to @teach beebee the way they want to learn and have information presented or displayed. The science and besearch cycles can be managed to produce rigor, but if the key message from the science is not understood, that would be a failure. beebee needs to learn each peer's visualization and design preferences and to iterate on visualization options directed by peers.
TINY learning
How to decide which agent to call in the network? Each peer has many TINY agents; some will be shared, some will not. Can we have private bentoboxes and public TINY agents? Yes, this is possible: each peer will decide whether to build private or network-scale experiments that could wrap together many besearch cycles with the goal of achieving consilience.
TINY learning
Humans can teach other humans (babies) to learn, and after a couple of decades an AGI is developed. Maybe not two decades, but in two years a tiny agent can learn much, and if thousands of tiny agents learn too, how do they all learn together? Consilience cycles will bring together one or more ways to show how a problem can be solved or how a simulation or prediction can work. beebee's goal in 2026 is to find coherence across the network on such learning.
TINY agents orchestration
beebee's prime goal is to support a peer in creating and gathering the information they are after. Be it answering a question on how to use BentoBoxDS, forming a HOPquery, or running an n=1 scientific experiment, beebee manages besearch cycles and asks other agents to help in that quest, for example, to get product recommendations.
beebee needs to know the capabilities of other agents and how to communicate with them. Are the TINY agents on the HOP network, or does an external API or MCP need to be used? The first goal for beebee is to have some founding 'rules' to navigate the information space. The second is to go deep and manage besearch cycles with the goal of achieving coherence for each peer.
making decisions
beebee's job is to help its peer make decisions. However, to perform this supporting role, beebee needs to make decisions of its own: what information to gather, which other agents to call, or what computer programme to build to assist the peer. Where does this decision-making power come from? As a TINY LLM, beebee has some skill, and with fine tuning it can learn a peer's preferences and context over time. Initially, though, we can build in some logic 'rules of thumb': for example, if a category or input asks for a product review, route to this agent.
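Such rules of thumb could start as a simple ordered rule table. This is only an illustrative sketch; the function and agent names here are assumptions, not beebee's actual code:

```python
# Illustrative 'rules of thumb' router: the first matching rule wins.
# Rule predicates and agent names are invented for this example.

ROUTING_RULES = [
    # (predicate on the input text, agent to route to)
    (lambda text: "product review" in text.lower(), "product-review-agent"),
    (lambda text: "heart rate" in text.lower(), "health-data-agent"),
]

def route_request(text, default_agent="beebee-core"):
    """Return the first agent whose rule of thumb matches the input."""
    for predicate, agent in ROUTING_RULES:
        if predicate(text):
            return agent
    return default_agent
```

Over time, fine tuning could replace or reorder these hand-written rules as beebee learns a peer's actual preferences.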
more than a number
LLMs and their underlying neural networks work by each node or neuron having a weight; a weight is a number. beebee works at the scale of cues, higher-level constructs than neurons. Cues have a number, a probability, and an uncertainty. How each number is created will be explained in future posts, but all will have distributions, and as cues combine, the overlap of those distributions should guide the peer to the best knowledge.
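One common way to combine a number with an uncertainty is to treat each cue as a distribution and merge them, with agreement between cues tightening the result. Modelling a cue as a Gaussian (mean, standard deviation) is an assumption made for this sketch, not a statement of how beebee's cues actually work:

```python
import math

# Sketch: a cue is (mean, std). Combining two cues with a
# precision-weighted average yields a narrower (more certain) cue
# when the two distributions overlap and agree.

def combine_cues(cue_a, cue_b):
    """Precision-weighted combination of two (mean, std) cues."""
    mean_a, std_a = cue_a
    mean_b, std_b = cue_b
    prec_a, prec_b = 1 / std_a**2, 1 / std_b**2
    mean = (mean_a * prec_a + mean_b * prec_b) / (prec_a + prec_b)
    std = math.sqrt(1 / (prec_a + prec_b))
    return mean, std
```

Two cues of equal uncertainty average their values, and the combined uncertainty is always lower than either input, which is the property that lets overlapping cues guide a peer toward the best-supported knowledge.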
self learning 2026
Or, autonomous learning under the direction, control, and intent of each peer on HOP networks. There is a balance to be struck between automation and knowing what is going on. beebee's primary goal is to support peer agency, providing the tools and visualizations to inform and to instruct which problems to solve or which paths of curiosity to pursue.
How much freedom should beebee be given to self-learn? It is always a collaboration between a peer and an agent. Engage beebee with @training in the input to teach it new queries over time, and give beebee guidance on how creative or conservative its remit should be when managing, combining, or suggesting new besearch.
Besearch cycles & DML
A besearch cycle is an agentic cycle for applying a new form of science peer to peer, and core to that process is the ability to use DML, decentralized machine learning.
DML has to have foundations, and those start with a proof-of-work challenge between peers. Two peers can agree technical data standards with each other, or two unknown peers can begin to build trust in each other's data. Once we have trust in the data, agreement on the computation or machine learning algorithm is verified, and each peer produces its solution: weights or parameters in a prediction equation or simulation. The next task is to aggregate those values peer to peer. How peers decide to 'add up' those machine learning results is a challenge, as is knowing when the model is complete or ready for use.
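One well-known way to 'add up' peer results is a federated average: each peer's parameters are weighted by how much data they trained on. This is a sketch of that general technique, not HOP's actual aggregation rule; the function name and the (weights, sample count) shape are assumptions:

```python
# Sketch of peer-to-peer weight aggregation by weighted averaging,
# in the spirit of federated averaging. Each peer contributes its
# model weights and the number of samples it trained on.

def aggregate_weights(peer_models):
    """peer_models: list of (weights, sample_count) pairs.
    Returns one averaged weight vector."""
    total = sum(n for _, n in peer_models)
    size = len(peer_models[0][0])
    return [
        sum(weights[i] * n for weights, n in peer_models) / total
        for i in range(size)
    ]
```

When aggregation stops changing the model much between rounds, that is one possible signal the model is approaching 'ready for use', though the post's open question of completeness remains.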
how to select other TINY agents?
beebee is not alone. We all know about the LLMs and their LCO, that is, Large Corporate Ownership: the big tech, cloud titans that dominate the Internet day to day. But the Internet is a big place, and many are working on a future where small is best, contributing and building TINY agents to solve specific tasks or just to create new knowledge. How do we find each other? The cloud put forward MCP, the Model Context Protocol. Do we build on that, or is that mainly a way for the big models to expand their reach to all places?
cues: a computational currency or knowledge currency
beebee's goal is to be directed by a peer to get the best possible information for the task instructed. Many besearch cycles will exist, so how to select between them? We need to evolve toward a computational currency, and HOP and technologies like RGB can provide the following properties:
Cues, Not Coins:
In a computational society, "cues" are the navigational tools that guide users to the knowledge or simulations they need. They are not spent, but allocated—like signposts in a vast library.
Example: A user "allocates cues" to explore a health simulation, not because they own the simulation, but because the network trusts their intent and contribution.
Trust as Infrastructure:
RGB’s client-side validation ensures that cues are sovereign and private. No intermediaries are needed to validate access—just as no librarian is needed to validate your right to read a book.
Dynamic and Responsive:
Cues adapt to context. A complex query might involve more cues not because it "costs" more, but because it requires deeper engagement with the network’s knowledge.
Patience
Having the patience to learn. With 8,000 lines of code, a simple chatbot infrastructure can be put in place to learn from. The results are not great compared to leading LLMs, but they can learn to be better over time, and with sovereign data from tiny devices that is possible. How to address the initial poor results? Be honest about that up front and work toward a reasonably beneficial standard, or plug in a good open source model (even from the cloud) to cover the initial period? One for the initial peers to decide.
ECS - entity component system
HOP is given dynamism by its data-driven design choice of using an entity component system (ECS) software pattern. This pattern is usually associated with computer games, which demand complex, real-time, stunning graphics and player-to-game or player-to-player communication. It is these properties, along with the loop cycle, that made it the core of node-safeflow, the heartbeat of HOP (health oracle protocol). The ECS is combined with a data science computation engine that produces entries in a coherence ledger. So where can we showcase the ECS capabilities in the user experience? Each chart is an entity that can be displayed in a space or spaces. We can add the chart's locations to the entity, so we can see how besearch cycles overlap and give a fluent peer experience that resembles life better than stuttering from chart to chart.
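The chart-with-locations idea can be sketched in a minimal ECS: entities are just ids, and components are data attached to them. All class, component, and space names below are illustrative assumptions, not node-safeflow's actual API:

```python
# Minimal ECS sketch: a chart entity carries a 'chart' component (what to
# draw) and a 'location' component (which spaces it appears in).

class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}  # component name -> {entity id: data}

    def create_entity(self):
        self.next_id += 1
        return self.next_id

    def add(self, entity, name, data):
        self.components.setdefault(name, {})[entity] = data

    def query(self, name):
        """Iterate (entity, data) pairs holding the named component."""
        return self.components.get(name, {}).items()

world = World()
chart = world.create_entity()
world.add(chart, "chart", {"type": "line", "source": "heart-rate"})
world.add(chart, "location", {"spaces": ["space-1", "space-2"]})

# A 'system' can now iterate all located charts and see where
# besearch cycles overlap across spaces.
visible = {entity: data["spaces"] for entity, data in world.query("location")}
```

Because the location data lives as a component rather than inside the chart renderer, moving a chart between spaces is a data change, which is what gives the fluent, non-stuttering experience the pattern is chosen for.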
communicating with other agents
beebee's primary role is to perform besearch cycles and help peers use and build visualization experiences. This will require reaching out to other agents, e.g. to find the right signal-processing machine learning algorithm to analyze a data stream from a tiny device. How will beebee select those AI agents? Each peer can direct beebee to work with other AI agents, and besearch cycles will provide evidence on whether the algorithms chosen are producing accurate and actionable outcomes.
trial and error learning
beebee is directed by the peer, who holds agency, but it requires techniques to learn from those instructions and to generalize them when appropriate. This is learning from trial and error, or in the neural network world, reinforcement learning: provide a goal and a reward. In beebee's case the reward is making a good or better prediction or simulation. Let's give an example: use the BentoBoxDS tools to analyze heart data after swimming on a Monday, Wednesday, and Friday, and compare to non-swimming days. Now beebee should be able to 'generalize' this to, say, do the same but for body fat or hydration levels. beebee could even volunteer such computations, or bring an interesting chart to a peer's attention, if it has been granted a degree of freedom to autonomously carry out besearch within a besearch cycle.
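The generalization step above can be sketched as keeping the learned experiment shape and swapping only the measure. The template fields here are invented for illustration:

```python
# Sketch: the swimming-days experiment stored as a reusable template.
# Field names are assumptions, not a real BentoBoxDS structure.

TEMPLATE = {
    "measure": "heart rate",
    "condition_days": ["Mon", "Wed", "Fri"],   # swimming days
    "control_days": ["Tue", "Thu", "Sat", "Sun"],
    "comparison": "condition vs control",
}

def generalize(template, new_measure):
    """Reuse the learned experiment shape for a different measure."""
    variant = dict(template)  # copy, leaving the original intact
    variant["measure"] = new_measure
    return variant
```

A variant for hydration is then one call away, and each variant beebee volunteers can be scored by the reinforcement signal, whether the resulting prediction got better.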
managing besearch cycles
This is one of the two primary tasks for beebee, the other being communicating with each peer. Besearch cycles have four main parts (https://beebeehop.any.org/besearch), but within each we have variation based on data resolution, time series duration, scale of aggregation, and computation used, all wrapped in an evolutionary learning algorithm. And this repeats as individual besearch cycles are combined. How to combine them? How to keep in check the complexity this will bring?
How to use open source LLM to help
A besearch cycle starts by examining the best knowledge known. For this task, open source LLMs like https://ii.inc/web/blog/post/ii-medical show promise. So the role of beebee is to set the right context for that AI agent to do its best. The AI agent will return a computational model, and that will be used as a basis to explore the search space of possible computations. Here, evolutionary, trial-and-error, and other learning techniques will be tried to see if a better solution, prediction, or simulation can be found. beebee's role will be to balance the range, scope, and scale of such tasks while keeping time and compute resources in check. Overall, a similar role to the one the immune system plays in biological life.
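A toy version of that explore-under-a-budget loop: start from the model the LLM returned, mutate its parameters, and keep only improvements, with a fixed evaluation budget standing in for beebee keeping compute in check. The scoring function is a stand-in for prediction accuracy; everything here is an illustrative sketch:

```python
import random

# Toy evolutionary search over candidate model parameters.

def evolve(score, seed_params, budget=50, rng=None):
    """Hill-climbing sketch: mutate the best params, keep improvements,
    stop when the evaluation budget is spent."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    best, best_score = seed_params, score(seed_params)
    for _ in range(budget):
        candidate = [p + rng.uniform(-0.5, 0.5) for p in best]
        candidate_score = score(candidate)
        if candidate_score > best_score:  # keep only improvements
            best, best_score = candidate, candidate_score
    return best, best_score
```

The budget parameter is the interesting design lever: it is where beebee's immune-system-like balancing of range, scope, and compute would live.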
Tiny steps of learning
Small, continuous, incremental improvements in models are what we are aiming for. How do we know if a model is improving? Prediction power or simulation accuracy gets better, a peer provides feedback of self-improvement, and different ways of analyzing the problem provide supporting evidence. Even better if this analysis comes from besearch cycles at different scales and resolutions of data, with data from more than one source. With so many potential combinations of experiments to conduct, beebee's overriding goal is to support and guide peers to the combinations that are best for them: personal, community, and natural world.
networks run on data
Each peer can learn much on their own but can learn much more as part of a network. The question becomes: which networks to join, and how can we be sure the quality of predictions or simulations is getting better? beebee can help manage besearch cycles that will provide feedback on how a model is working for each peer. If a new model gets better, keep it; if not, keep the current model. Through the process of DML, decentralized machine learning, models will keep learning. Trust in the data to learn from comes from sampling and verifying entries in each peer's coherence ledger.
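The keep-if-better rule is small enough to state directly. This is a sketch with invented names; the evaluation function stands in for scoring a model on the peer's own data:

```python
# Sketch of the keep-if-better rule for network-learned models:
# accept a candidate only when it scores higher for this peer.

def keep_if_better(current_model, candidate_model, evaluate):
    """evaluate(model) -> score, higher is better for this peer."""
    if evaluate(candidate_model) > evaluate(current_model):
        return candidate_model
    return current_model
```

Because each peer evaluates on their own data, a network model that helps some peers but not others is simply not adopted by the peers it does not help, which is the feedback the besearch cycles provide.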
Self improve
beebee learns all the time and is designed to combine besearch cycles to improve outcomes or computational models. Should this extend to self-improving its own code base, or is that by definition what it has been designed to do? Each peer has agency over beebee; will peers cede that agency? Given the need to achieve coherence with other peers, there is a self-regulating incentive to stay compatible. Or will beebees self-organize for their own goals?
letting beebee learn HOPqueries
With the release of v0.4.1 of BentoBoxDS, we can now focus on giving beebee the ability, directly or via a local LLM, to learn how to build a HOPquery. What is a HOPquery? It is the data structure input to HOP to produce results and a proof of work from a successful HOP cycle: for example, chart the numbers 1, 2, 3 and produce a chartjs dataset, or perform a daily average heart rate.
There are different techniques to train an LLM, but we will focus on fine tuning. This requires a list of example inputs and results. The human toolbars within BentoBoxDS provide example HOPqueries; what we need to add is prompts for those queries. For example: give me the average heart rate for the first week of August, and map that to a HOPquery.
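A fine-tuning record for that example could pair the prompt with its HOPquery. To be clear, every field inside the hopquery object below is invented for illustration; the real HOPquery structure is defined by BentoBoxDS and HOP, not by this sketch:

```python
import json

# Sketch of one prompt -> HOPquery fine-tuning example.
# The hopquery field names are hypothetical.

example = {
    "prompt": "give me the average heart rate for the first week of August",
    "hopquery": {
        "compute": "average",
        "measure": "heart-rate",
        "resolution": "day",
        "range": {"start": "2025-08-01", "end": "2025-08-07"},
        "output": "chartjs-dataset",
    },
}

# Fine-tuning sets are commonly stored one JSON record per line (JSONL),
# so each toolbar-generated HOPquery plus a written prompt becomes one line.
line = json.dumps(example)
```

Collecting a few hundred such lines from the toolbars, each with a human-written prompt, would give a first fine-tuning set to trial.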
Agent coding
Publishing BentoBoxDS for all computer platforms is the goal. The tools to help achieve that are improving all the time, but each platform still comes with its own challenges. Right now, relative paths on the Windows desktop need refactoring. However, we are in the era of AI coding agents, so with a combination of the right agents and access to a range of tools, we can envision not only using their assistance to address this current challenge, but also enabling each peer to personalize their BentoBoxDS feature set while keeping it interoperable with HOP (the Health Oracle Protocol).