The concept of AI started a long time ago - maybe when the idea of computers took hold in the 1950s? - but the buzz (outside of Hollywood: Terminator, The Matrix, Her, WALL-E, etc.) started in the 80s, I think, with autonomous vehicles using limited-memory AI. Google and Tesla later picked up on it; Google eventually spun that division out, and Tesla made a fortune.

In the mid 90s, IBM’s Deep Blue beat Garry Kasparov (the world chess champion) and there was AI hype again, this time around reactive AI: real-time input, total dependency on its programming, and no learning capability.

About 10 years back the buzz was NLP (Natural Language Processing). Startup investment was buzzing over “Big Data” - everyone wanted Big Data. A lot of the components were also being built: machine learning, cloud computing, data storage, etc. Like Nostr, where a lot of components are still being built, it clearly takes time.

About 5 years ago, GPT gained prominence due to high-profile investments and the development of open-source AI. Its focus was on Large Language Models (LLMs). LLMs such as GPT and BERT are a type of AI that falls within the broader scope of narrow AI, with GPT being generative.

GPT-3 was the epitome of global mass adoption of AI usage - some 40 years after the AI buzz started - but unfortunately OpenAI partnered closely with Microsoft and GPT became a closed-source model. You can still build on GPT through their developer API, but you pay a pretty penny and your data is held and used for tuning by them.

Other companies have developed similar models, including Meta’s Llama, which is apparently comparable to GPT-4. I’ve also used Meta’s vision-based neural network models. Both are open source (or at least open-weight), though I am sure Meta has proprietary models too. Your data is yours (I need to read a bit more on the privacy part).

There are also many other predefined open-source models and frameworks out there to explore.

But the fact that they are easing people into open platforms is a good gesture. It makes innovation more equitable, faster, cheaper and more transparent.

The building blocks of all this are not something I am familiar with yet - but many here are experimenting with open-source models, so feel free to read their notes and engage with them.

If you don’t know who they are, you can visit nostr.band and search for relevant topics and people.

But in short, you have your data acquisition / feeders, training your model, deploying it, GPUs, FW, integration and the UI to consider.
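The acquire-train-deploy steps above can be sketched in miniature. This is a purely illustrative toy (made-up function names and data, a trivial least-squares fit standing in for real model training), not any actual framework:

```python
def acquire_data():
    """Data acquisition / feeder stage: here just a hard-coded toy dataset."""
    # x = input feature, y = target (made-up numbers with a y = 2x pattern)
    return [1, 2, 3, 4], [2, 4, 6, 8]

def train(xs, ys):
    """Training stage: fit y = w * x by least squares (no intercept, for brevity)."""
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return w

def deploy(w):
    """Deployment stage: wrap the trained weight behind a predict() interface."""
    def predict(x):
        return w * x
    return predict

xs, ys = acquire_data()
model = deploy(train(xs, ys))
print(model(5))  # -> 10.0
```

Real pipelines swap each stage for something heavier (datasets and feeders, GPU training loops, a served API plus UI), but the shape is the same.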

There are various debates around AI:

1. Open source vs closed source - with closed source, you don’t know how the answers are tuned to feed your biases, and therein lies the danger. If you thought the social media algorithm was bad, AI reads into your every emotion and style, and its algorithm is super smart and laser-focused on you.

2. Pre-trained models - Another debate is that pre-trained models, both open and closed source, backed by better infrastructure, more money and more geniuses, will keep getting better, making it hard for lower-funded models to catch up.

3. Governments and AI - Another debate is about governments - who know nothing of what you do or about AI - but deeply believe they must regulate everything and hold you accountable. This is becoming a trend, especially in oppressive countries. Imagine a corrupt government and closed AI models working hand in hand.

4. Machines controlling the world - This is my favourite one, and probably the only thing I knew about AI: are the machines going to take over? Not anytime soon - we are really far from it (refer to the pic below). But who knows? Maybe LLMs, neural networks and a lot of model training will speed everything up.
