
TIME

The original design for HOP was to establish a universal time standard around unix timestamps. This turned out to be too complex to hold together. Instead, each network experiment will test the time standard and how to map it to the data source, which seems more achievable. There is so much that can be done with a time series, from averages to signal processing to autoregressions to machine learning. HOP is operational with this setup and provides a base for two or more peers to collaborate.
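
As a sketch of what "mapping the time standard to the data source" can mean in practice, here is a toy example (hypothetical function names, not HOP's actual API) that normalizes mixed second/millisecond unix timestamps before computing a windowed average:

```javascript
// Normalize a unix timestamp to whole seconds, whether the source
// recorded it in seconds or milliseconds. Hypothetical heuristic:
// values past ~year 2100 in seconds are treated as milliseconds.
function toUnixSeconds (ts) {
  return ts > 4102444800 ? Math.floor(ts / 1000) : ts
}

// Average the values of a time series falling inside [from, to],
// where timestamps may come from mixed sources.
function averageInWindow (series, from, to) {
  const inWindow = series
    .map(p => ({ t: toUnixSeconds(p.t), v: p.v }))
    .filter(p => p.t >= from && p.t <= to)
  if (inWindow.length === 0) return null
  return inWindow.reduce((sum, p) => sum + p.v, 0) / inWindow.length
}

// Example: heart-rate samples, one source in seconds, one in milliseconds.
const series = [
  { t: 1700000000, v: 60 },     // seconds
  { t: 1700000060000, v: 70 },  // milliseconds, normalized to 1700000060
  { t: 1800000000, v: 80 }      // outside the window below
]
const avg = averageInWindow(series, 1700000000, 1700000100) // 65
```

Once every source agrees on one time standard, the rest of the pipeline (averages, signal processing, autoregression) can treat the series uniformly.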

beyond coordination - coherence

beebee uses HOP, but why not use a blockchain or web3 to coordinate tiny AIs? When we decentralize computation, how do we establish trust? That is the core problem Bitcoin solved: the coordination of trust for a monetary currency. And while it seems logical that this could also provide a basis for coordinating AI agents, i.e. computation, more needs to be demanded from a protocol.

What demands? Trusting the computation remains foundational, but the key additional challenge is that we are also in a world with a near-infinite search space of computations. We are not interested in trusting some arbitrary computation; we strive to perform the best computation of all those that exist or are possible. Coordination ledger technologies do not address that problem. A coherence ledger was conceived and designed to address this challenge, and HOP will show us whether it actually solves the problem or just provides a stepping stone to a greater understanding of it.

What is autonomy?

There are at least two ways of thinking about autonomy. In the first, a peer directs beebee to perform a HOPquery on a daily basis, e.g. calculate an average heart rate. Asked once, beebee can autonomously keep calculating and do all that is required to display a bentobox chart. A second way of thinking about autonomy would be for beebee to flag a trend in the data or direct attention to a specific chart of interest. beebee could be instructed generally to pursue this objective, but that opens up big questions: can it perform, or even add, any arbitrary HOPquery computation? Can it then network with other beebees to build a network experiment?

Making accessible to all peers

The more technical a peer's skills, the more opportunity to test, try things, or be first. Find a bug or hit an issue, and in the blink of an eye a hack can get you up and running or an error fixed. The goal is to give these opportunities to all peers, right from the get-go in BentoBoxDS and HOP.

This is often called a no-code or low-code interface. Having this founding mindset sets software development off on a different path of programming and of application and protocol architecture design. It can add to the upfront development time and limit capabilities given finite developers. However, with AI agents and the right founding tools, it becomes possible to build and evolve BentoBoxDS and HOP at the level of each peer, with each peer being mindful of all the others, as HOP guarantees interoperability and backwards compatibility.

There was a lecture hosted by the Santa Fe Institute titled "Computing, Life, & Intelligence": https://www.youtube.com/live/75PAyV83YqE?si=hJuQDXLzvo2iHEel

A bold assertion is made that computation self-organizes from random building blocks, be it symbols (code) or life. And this applies at all scales. What substrate does the self-organizing take place on? In the lecture, it is repeated movements, again from code to life, that are needed. In HOP, the coherence ledger provides a substrate for a single peer to combine computations and provides a basis for trust between two or more peers to participate in decentralized machine learning. The precursor to DML is a compute-engine, and that is what beebee is getting right now: the ability to take any computation and run it on local data.
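
To make "the ability to take any computation and run it on local data" concrete, here is a minimal compute-engine sketch (illustrative only, not beebee's actual API): computations are registered once by name and can then be applied to any local data series:

```javascript
// A minimal compute-engine sketch: a registry of named computations
// that can each be run against local data.
const computations = new Map()

function register (name, fn) {
  computations.set(name, fn)
}

function run (name, data) {
  const fn = computations.get(name)
  if (!fn) throw new Error(`unknown computation: ${name}`)
  return fn(data)
}

// Two example building blocks a peer might combine.
register('average', xs => xs.reduce((a, b) => a + b, 0) / xs.length)
register('max', xs => Math.max(...xs))

const heartRates = [58, 61, 66, 59]
const result = run('average', heartRates) // 61
```

The substrate for self-organization is then the registry itself: new computations can be added and combined without changing the engine.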

Coherence at all scales

Left to right. Right to left. Up to down. Down to up. Cycle forward, cycle back. Each BentoBox will have these properties, and the same rules apply when more than one BentoBox is grouped together. The combinations become endless. This is all a bit abstract, so let's put some more flesh on the bones, so to speak.

Horizontal movement indicates moving backwards and forwards in time. Vertical movement is the depth of computation: an observation to an average to a regression to an auto-regression to a machine learning model, and so on. Cycles are besearch flows: singular or combined network experiments that evolve and learn in a human-directed and self-organizing manner.
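
The vertical "depth of computation" axis can be sketched as layers over a single series (a toy illustration, not HOP's actual computation stack): raw observations, then their average, then a linear trend fitted by ordinary least squares:

```javascript
// Layer 2: the average of the raw observations.
function average (ys) {
  return ys.reduce((a, b) => a + b, 0) / ys.length
}

// Layer 3: fit y = slope * x + intercept over x = 0..n-1
// by ordinary least squares.
function linearTrend (ys) {
  const n = ys.length
  const xs = ys.map((_, i) => i)
  const mx = average(xs)
  const my = average(ys)
  let num = 0
  let den = 0
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my)
    den += (xs[i] - mx) ** 2
  }
  const slope = num / den
  return { slope, intercept: my - slope * mx }
}

const observations = [60, 62, 64, 66]   // layer 1: raw data
const mean = average(observations)      // layer 2: 63
const trend = linearTrend(observations) // layer 3: slope 2 per step
```

Each layer is a computation over the layer below it, which is what lets the same series be read at different depths.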

Bringing together a coherent user experience is the BETA goal for 21 June: demonstrating how all this complexity works for a peer and for networks of peers.

Keeping tabs on everything?

Is it possible to keep tabs on, or understand, all that is going on in one peer's BentoBoxDS / HOP account and the network as a whole? While it may be possible in principle, it seems beyond human cognition. However, what can be guaranteed is that the protocol operates as stated.

The last couple of posts have focused on networking and 'warm networks,' where we want to know who the other peer or peers are in the networks being built. The next transition involves agents making 'cold' exchanges with peers to perform network experiments, such as comparing the effects of different supplement doses or how different time periods affect attaining a resting heart rate. As these experiments combine and add machine learning peer to peer, the chances of keeping up seem small. However, HOP provides a coherence ledger that captures all these network exchanges. How that works will be a topic for next week.

Peer to Peer Networking

Each peer can build a network of peers, and the technology used is a peer-to-peer network protocol. HOP uses a library called hyperswarm, a Kademlia-style DHT (distributed hash table) that measures distance between peer IDs with XOR. Bitcoin uses a gossip protocol. Both have benefits; as a crude summary, do you want to reach all peers as quickly as possible (gossip), or target specific peers with no concept of geographic distance (XOR)?
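
The XOR metric behind Kademlia-style DHTs can be shown in a few lines (a crude illustration, not hyperswarm's internals): peers are "close" when their IDs share a long common prefix, with no notion of physical location:

```javascript
// XOR distance between two equal-length byte arrays, folded into
// one BigInt so distances can be compared directly.
function xorDistance (idA, idB) {
  let distance = 0n
  for (let i = 0; i < idA.length; i++) {
    distance = (distance << 8n) | BigInt(idA[i] ^ idB[i])
  }
  return distance
}

// Pick the peer whose ID is XOR-closest to a target ID.
function closestPeer (target, peers) {
  return peers.reduce((best, peer) =>
    xorDistance(target, peer) < xorDistance(target, best) ? peer : best)
}

const target = [0b10110000]
const peers = [[0b10100000], [0b01110000], [0b10110001]]
const nearest = closestPeer(target, peers) // [0b10110001], distance 1
```

Routing by this metric is what lets a DHT find a specific peer in logarithmically few hops, where gossip would instead flood everyone.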

A warm peer-to-peer network is where two or more peers have explicitly connected together and identified themselves. Next, HOP will have COLD peers, where the DML (decentralized machine learning) protocol gives beebee the autonomy to cohere machine learning, with each peer activating participation.

TESTING peer to peer networks

Most of April's energy has gone into testing: end-to-end testing of the user experience, unit testing, and now network testing. This is the trickiest testing, as real peer-to-peer HOPs need to be used. A network starts with two peers, and that has been in place and robust for some time now, but add a third and the complexity starts to increase; at four or five, the combinations become exponential.

The main test is whether the connection is a first-time or a second or subsequent peer-to-peer relationship. Finding the logic to keep this error-free is the task at hand. With this in place, we can start to introduce DML (decentralized machine learning), where the peer-to-peer relationship becomes automated between two beebee BentoBoxDS agents. Designing tests for that will be fun.
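
The first-time versus returning-peer decision can be isolated into a small, easily testable unit (a hypothetical shape, not HOP's actual handshake code): track the public keys already seen and branch the connection logic on that state:

```javascript
// Track which peer public keys have been seen before, and classify
// each incoming connection accordingly.
function createPeerTracker () {
  const known = new Set()
  return {
    classify (publicKey) {
      if (known.has(publicKey)) return 'returning'
      known.add(publicKey)
      return 'first-time'
    }
  }
}

const tracker = createPeerTracker()
const first = tracker.classify('peer-key-abc')  // 'first-time'
const second = tracker.classify('peer-key-abc') // 'returning'
```

Keeping this state machine pure makes it possible to unit-test the relationship logic without spinning up real swarm connections.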

How to Learn?

Decentralized Machine Learning does not aim to replicate the paradigm of Large Language Models (LLMs) or, more broadly, deep neural network techniques on a smaller scale. Instead, it introduces new ideas inspired by nature, which has been learning far longer than humans. So, what are these ideas?

Evolution Everywhere: If there is computation in the network, it should be open to evolutionary learning. This could involve neuro-evolution of neural networks, fine-tuning auto-regression models, or exploring open-ended spaces and parameters.

A Network of Modules: Imagine a system where modules of computation mix and match to find solutions. An analogy can be drawn to the human immune system: retain historical learning, but as new data comes in, assess whether existing solutions work. Identify which solutions seem to work best initially and improve upon them to find the optimal answer.

Varying Lengths of Time: Evolution can occur over long periods and in quick bursts of activity. Patterns over time will be modeled and compared across computations.

These three ideas can be applied at both the individual peer level and the networked peer-to-peer scale. Scale networks to the size needed to answer questions or find solutions effectively.
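
The "evolution everywhere" idea can be sketched with a deterministic hill-climbing loop in the 1+1 evolutionary style, here tuning a single model parameter; this is a toy illustration under assumed operators, not DML's actual learning machinery:

```javascript
// A deterministic 1+1-style evolutionary loop: each generation tries
// two mutants (best - step, best + step); if neither improves fitness,
// the mutation step is halved to search more finely.
function evolve (fitness, start, { generations = 32, step = 1 } = {}) {
  let best = start
  for (let g = 0; g < generations; g++) {
    const candidates = [best - step, best + step]
    const better = candidates.find(c => fitness(c) > fitness(best))
    if (better !== undefined) best = better
    else step /= 2
  }
  return best
}

// Example: find the parameter value that best matches a target of 3.
const fitness = x => -Math.abs(x - 3)
const solution = evolve(fitness, 0) // converges to 3
```

The same loop shape applies whether the "parameter" is a neural-network weight, an auto-regression coefficient, or an open-ended search setting; only the fitness function changes.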

Sharing public libraries

In HOP/BentoBoxDS, the library plays an important role, much like physical libraries have in education over the years, both historically and currently. There are many types of libraries in HOP, but one way to categorize them is by privacy: private and public.

Public libraries contain data types that represent common knowledge, such as language and cues. These cues are essentially data types of language with relationships and associations. Cues are the currency of the mind, relevant for each peer and network-wide.

Private libraries, on the other hand, contain personal data and results. Each peer has a coherence ledger that tracks all network experiments and enables decentralized machine learning.

These public and private libraries can be synced, cloned, or shared with other peers, but only public libraries are open by design. BentoBoxDS provides the tools to visualize and interact with all data in different contexts. The Library menu item offers a way to make the library the main context for all knowledge.
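
The public/private split at sharing time can be expressed as a simple filter (the entry shape here is hypothetical, not HOP's actual library schema): only entries marked public ever leave the device:

```javascript
// Build the view of a library that may be shared with other peers:
// private entries (personal data, ledger results) are never included.
function shareableView (library) {
  return library.filter(entry => entry.privacy === 'public')
}

const library = [
  { ref: 'cue-heart', privacy: 'public', data: 'heart anatomy cue' },
  { ref: 'my-hr-results', privacy: 'private', data: [61, 63, 60] },
  { ref: 'cue-sleep', privacy: 'public', data: 'sleep cue' }
]
const shared = shareableView(library) // the two public cues only
```

Making "open by design" a property of the data type, rather than of the transport, means syncing and cloning can reuse one code path for both library kinds.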

TESTING & COMPLEXITY

BentoBoxDS is in itself a complex application. The core concepts are few: Cues, Bentospaces, bentobox and Library, with the beebee chat agent available across them all. Within these, the peer interaction options explode, and while human testing is the ultimate test, e2e testing, in computer-science speak, allows a machine to act like a human. It is here where an LLM coding assistant comes in: it can produce the tests, proactively or reactively.

As a reminder, the goal of BentoBoxDS is to allow non-coding peers to build every BentoBoxDS experience. BentoBoxDS connects to HOP, locally and over peer-to-peer networking. The code within HOP needs testing too. That is a work in progress: how do we test peer-to-peer sharing and decentralized machine learning?

DML an introduction

The founding mission of the Health Oracle Protocol (HOP) was to find a way to allow decentralized machine learning, or machine learning peer to peer. The first use case was performing sport science on swimmers using competition and training times. From this data the following question was posed: can an AI produce a better training programme for swimmers than a human coach?

Each swimmer had a smart stopwatch collecting time data, and when wearables came along this data was added. Sure, all the data could be pushed to a database in the cloud, but the goal of DML is to perform the learning peer to peer. Is this possible? If it is, it starts with sovereign data, the core ingredient of machine learning. For a peer-to-peer network, at least two peers need to be connected, with permissionless access to join. The goal is to learn continuously; when an improvement is verified, then all can be notified.

There is no demand to collect all the data from day one. Learn from the data available and, like a baby becoming a toddler, then a child, then an adult, be patient and allow learning and intelligence to emerge with time. Allow a range of learning techniques to be included, in a besearch cycle that will conclude, for each peer and network of peers, whether a better AI has been established. Be open-ended: give the learning space to explore new techniques and take leaps of faith in new directions, not all the time, but enough to explore new search spaces, concepts, ideas and computational techniques. Be mindful that intelligence at all scales will be needed, tiny bottom-up or top-down; give enough information for self-organization. Lastly, be mindful that this approach needs to be continuously assessed for the risk of an AI taking its own path; it should act like an oracle responding to humans.
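
The "learn continuously, notify on verified improvement" loop can be sketched as follows (an illustrative shape with hypothetical names, not the DML protocol itself): each candidate model is scored on local data, and peers are notified only when a candidate verifiably beats the current best:

```javascript
// A besearch-cycle sketch: keep the best model seen so far and fire a
// notification whenever a candidate verifiably improves on it.
function createBesearchCycle (evaluate, notify) {
  let best = null
  let bestScore = -Infinity
  return function consider (candidate, localData) {
    const score = evaluate(candidate, localData)
    if (score > bestScore) {
      best = candidate
      bestScore = score
      notify({ candidate, score }) // tell the network a better AI exists
    }
    return { best, bestScore }
  }
}

// Example scoring: negative mean absolute error against observed times.
const evaluate = (model, data) =>
  -data.reduce((sum, d) => sum + Math.abs(model.predict(d.x) - d.y), 0) / data.length

const notifications = []
const consider = createBesearchCycle(evaluate, n => notifications.push(n))
const data = [{ x: 1, y: 2 }, { x: 2, y: 4 }]
consider({ predict: x => x }, data)                     // first candidate becomes best
const result = consider({ predict: x => 2 * x }, data)  // perfect fit, notifies again
```

Because each peer evaluates on its own sovereign data, "verified" here means verified locally; the network only ever sees the notification, not the data.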

Eve of DML

This week will see the release of BentoBoxDS 2.6, which enables the use of various open-source LLMs. This allows beebee to set the context of each question. For example: a peer is in the posture space and asks a question; the LLM is given this information as the context to answer it. Over time the depth of context will deepen. I sit a lot at a computer, I train three times a week in the gym, I have a history of injuries or accidents. My goal is to . . . . . . . .
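
Setting the context of a question might look something like this (an assumed structure for illustration, not beebee's actual prompt format): the question is wrapped with whatever local context BentoBoxDS knows before it reaches the model:

```javascript
// Wrap a peer's question with local context (current space plus any
// personal notes) before handing it to an LLM.
function buildPrompt (question, context) {
  const lines = [
    `Space: ${context.space}`,
    ...context.notes.map(n => `Note: ${n}`),
    `Question: ${question}`
  ]
  return lines.join('\n')
}

const context = {
  space: 'posture',
  notes: ['sits long hours at a computer', 'trains three times a week']
}
const prompt = buildPrompt('How can I improve my desk posture?', context)
```

Crucially, this assembly happens on the peer's device, so the deepening context never has to be handed to a cloud provider.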

These LLMs are general by design, and their ability to use such deep context is limited, today at least. What is needed is for each peer to 'fine tune' their context. There are two ways to achieve that: first, tell the AI in the cloud all your details; or secondly, value your sovereignty and privacy and share them locally with your own open-source AI.

Fine-tuning in isolation makes progress. Fine-tuning with a network of peers, well, that opens up a new frontier in machine learning. DML coming soon.

March is content month

With the Beta C desktop BentoBoxDS available for download https://github.com/healthscience/bentoboxds/releases/tag/v0.2.4 the ability to create and share content in a warm network of peers is available. But what content to share?

The four parts of Gaia: Life, Environment, Culture and Nature. For Life, work has started on building the 'wikipedia of me': human body cues, knowledge about the body and its parts, and media content, from research papers to products to N=1 experiments including biomarkers, all pulled together into a cue space. All the tools to make such spaces are available but need some refinement.

For Culture, we will start to look at law and ecommerce.

For Environment, buildings.

For Nature, building a climate model starting with rivers, rainfall and air quality.

As this content accumulates, beebee AI will learn.

Generating invites for peer-to-peer sharing. On the face of it this seems a simple task, and maybe in person it is, but how about on the Internet: is it possible to establish a peer-to-peer connection in a privacy-preserving manner between two peers? The answer is yes, but effort is required, both technically and from those using the tools.

Direct peer-to-peer connections are called warm peers. Next come cold peers: performing machine learning with peers you have not directly connected with but still want to collaborate with, to build, for example, a local river simulation.

Playing it medium with Intelligence

Does intelligence come to be instantly, through step changes, or by evolving over time? The last slide charts the latter two paths: https://design.penpot.app/#/view/f329de2c-67b1-801b-8005-bb46f48bfa18?page-id=f329de2c-67b1-801b-8005-bb46f48bfa19&section=interactions&index=0&share-id=0b127ab7-8934-814e-8005-bb4e010473d3

The LLMs started with a big step and have taken a few more since, with more to come. While it feels like they are leading the way, this might not be the case. Small changes add up over time, and if the right coherence can be found between peers, then with time a level of intelligence will emerge that could surpass other approaches. This is the HOP way: playing it medium.