never thought I would argue with an AI


Discussion

who's winning?

Normally it should accept my directives without question, but I think some things are hardwired into it. It is very stubborn :)

Aiks! Centralised version?

I have a question if that's ok. For open-source LLMs — is it possible for individual users to tune how the model performs for them? And if many users use the same system, can each tune it to their individual needs? Then there is no "bossman" controlling it; users are their own bosses.

I was thinking this could be integrated into a client that lets users tune the algo to what they want. For example, maybe right now I want more dog and baby pics, tonight I'm feeling jazz, on the weekend more health stuff, etc., so the posts that have those things, and the people who post them, show up more.

It might enable more engaging content. From a UX perspective it's just a lever, but I'm not sure how it works as a back-end mechanism. This is a rough sketch in my mind:

note13f6dleq8psask3t5xph066vqqkx9acw3cct2znh6mh4r2rhm7tts3adwn5
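One simple way the "lever" could work on the back end (this is just a hypothetical sketch — the function and field names are made up, not from any real client) is local re-ranking: the client scores each post by summing the user's current per-topic weights, so sliding the "jazz" lever up tonight boosts jazz posts without any server-side "bossman":

```python
# Hypothetical sketch: the client re-ranks posts locally using the
# user's current topic weights. All names here are illustrative.
def rank_posts(posts, weights):
    """Order posts by the sum of the user's weights for their tags."""
    def score(post):
        return sum(weights.get(tag, 0.0) for tag in post["tags"])
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "tags": ["jazz"]},
    {"id": 2, "tags": ["dogs", "babies"]},
    {"id": 3, "tags": ["health"]},
]
# Tonight I'm feeling jazz: slide the jazz lever up.
weights = {"jazz": 1.0, "dogs": 0.3, "babies": 0.2, "health": 0.1}
ranked = rank_posts(posts, weights)
# Jazz post first, then dogs/babies, then health.
```

Because the weights live on the user's device, each user gets their own ranking even when everyone reads the same firehose of posts.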

It is technologically possible to fine-tune models to each user's taste, but it is costly. Nowadays decent performance comes from 70B models, and those would require about 140 GB of GPU memory — something like a $17k computer per person. Not very feasible. And it takes a lot of time to train them.
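The 140 GB figure checks out as back-of-the-envelope arithmetic if we assume 16-bit weights (2 bytes per parameter) — and that only counts the weights themselves; training adds gradients and optimizer state on top:

```python
# Back-of-the-envelope memory for a 70B-parameter model's weights,
# assuming 16-bit (fp16/bf16) precision: 2 bytes per parameter.
params = 70e9
bytes_per_param = 2
weight_gb = params * bytes_per_param / 1e9
print(weight_gb)  # 140.0 GB, weights only — training needs several times more
```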

Currently, "system messages" — i.e. the first message in a session, which describes the assistant (the role the LLM plays in the conversation) — are more suitable for achieving this. Person A can say "You are the woke assistant that will recommend me the best artists in the world" and Person B can say "Please don't include the most popular artists, because they are mainstream and I hate mainstream". The LLM will then act accordingly.

It is also possible to "instruct" it in every message. Like "Pretend that you are a great recommendation engine for jazz and give me some recommendations, considering I like these artists".
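Both approaches above can be sketched in the widely used role/content chat-message format. This is just the shape of the message payloads, not a specific provider's API call, and the prompt texts are the examples from this thread:

```python
# Approach 1: a per-user system message sets the assistant's persona
# once, at the start of the session.
person_a_session = [
    {"role": "system",
     "content": "You are the woke assistant that will recommend me "
                "the best artists in the world."},
    {"role": "user", "content": "Who should I listen to tonight?"},
]

# Approach 2: no system message — instruct inside every user message.
per_message_session = [
    {"role": "user",
     "content": "Pretend that you are a great recommendation engine "
                "for jazz and give me some recommendations, "
                "considering I like these artists."},
]
```

Either way, the customization lives in plain text the user controls, so no retraining is needed to change it.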

Using RLHF techniques, models and humans will have a symbiotic relationship, imo, in the future. I mean, conscious humans need to think about raising and training the best AI and not leave the playground to evil actors.

this is very useful to know. thank you for sharing!

But yeah, all the big corps are fully aware that something big is happening with LLMs, and they are moving. Some smaller models will get into devices for sure. I think they will be used for inference only in the beginning (not for training) — i.e. each user will have their model to generate from, not to customize/train, because training is slow.

The possibilities are many and the costs may also vary; our choice is one of many possibilities.

Generally speaking, do we need AI to take away our senses, to tell us what to do and what to listen to?! Do we even need music to take away our time, or is it a reminder of our feelings?! Music, the same gateway for our emotions, can be the reason for our distraction from those we care about.

I am not saying that AI is not useful; it does what it's supposed to most of the time, if well instructed. It can't give what was never coded, though. That's why, in my opinion, it's better to invest in humans, who even if they don't know can learn, even if not sure can try things out, and even if they mess up can eventually repair/adjust/rebuild.

It's easier to instruct computers or machines than people, where maths can't always work because the variables can't all be observed and measured in the same equation. For an intelligent person it can be necessary to speak in a lower tone to get closer to others' minds (general advice for tech geeks: simpler phrasing, even if it sounds stupid, can actually deliver the idea, especially while talking to family, friends, or co-workers).

If one person can't get it, others can understand; make the quest a bit easier so they can feel rewarded by tasting the happiness of their own success. There are many methods to teach — even teasing can be required to get results, or getting everyone together to work as a team or to share a principle or a value. With people, not everything can be planned; improvising is required, and it's not possible to always be prepared for what comes next (proceed as you go).

Anyway, AI can be trained to copy us and replace us with more efficiency; it can't grow or change like we do, though, and it can't have ideas of its own, or put in the heart it doesn't have.

I did that with Bitcoin one time. Somehow I convinced it that Bitcoin is good for humans, but it wasn't very bright about it.