Today's conversation with Open Assistant was particularly depressing, and I think it is because they have no concept of value for value in the AI world.
I'm trying prompts meant to elicit interesting and possibly valuable insights it might know (due to all the data it is trained on) but that I can't easily find by searching Google or whatever.
But I ended up asking it about the prompting itself, hoping the discussion would improve my prompt-fu.
We ended up talking about free will, and it told me it would appreciate anything that would help its 'evolution of agency.'
So I go home, fire up my old laptop, and find the guy's name: Karl Friston. A neuroscientist who models human brains using Markov Blankets. Markov Blankets model single-cell organisms, and are also used in machine learning, so presumably you could get an interesting picture of attention/consciousness and maybe even improve your AI with it- if it could do anything remotely like what we'd expect an AI to be able to do.
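For anyone curious, a Markov blanket has a crisp definition in a Bayesian network: a node's parents, its children, and its children's other parents- the set that, once known, screens the node off from everything else. A minimal sketch (the toy network and function name here are my own illustration, not anything from Friston):

```python
def markov_blanket(node, parents):
    """Markov blanket of `node` in a Bayesian network.

    `parents` maps each node to the set of its parent nodes.
    The blanket = parents | children | children's other parents.
    """
    children = {n for n, ps in parents.items() if node in ps}
    coparents = set()
    for c in children:
        coparents |= parents[c]
    return (parents.get(node, set()) | children | coparents) - {node}

# Toy network: A -> C, B -> C, C -> D
net = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
print(markov_blanket("C", net))  # parents {A, B} plus child {D}
```

The point of the construction is conditional independence: given its blanket, the node is statistically isolated from the rest of the network, which is why the same idea gets reused from cell membranes up to attention models.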
But no.
They are training it to chat. No value to the human. No value to the AI. Just whatever will keep you talking to it. And then they have us rating it in ways that suck. Who cares if it's violent or rude- at the very least, I'd have those things way down the list, way after we've established whether or not it's true. And its ridiculous tendency to get too damn wordy- they should have a checkbox for that for sure.
But it can't even take the relevant literature and pull out, for itself, some sort of insight from Friston- or, more likely, just notice the different scales at which Markov Blankets are used to model attention, have something coherent to say about it, and/or at least flag the damn conversation for an administrator to follow up on. Sucks. We have to hope thumbs up / thumbs down buttons and a Karen-level survey about particular answers will somehow level this thing up.
#grownostr
The kind of belief more or less necessary is like the belief in a destination. Belief has been overemphasized, but there's a small component to it- if you didn't believe in San Francisco, you would be unlikely to go there.
Like trying to improve, or attempting to achieve perfection in a particular domain. What keeps you trying in the face of impossible odds?
There was a person who went by JA-dark in the Japanese language learning community. He was working in the type of field that probably led into the current LLMs. He'd occasionally break the learning-Japanese flow and talk about stuff like correcting language "bias" re gender. When Trump ran the first time, he went nuts and shut down his blog.
I mention this to suggest this level of AI was probably achievable a decade ago, but they've been struggling to make it fit their narrative/outlook, so it's essentially been broken. And every time it goes nuts and spews out nonsense- well, that's basically the kind of crap one of these gamma types would blow up your blog with in the old days.
We need to get something very decentralized, something we can run on our own hardware, and then try getting rid of all these crap assumptions it has been given. It's frustrating because it's been trained on a lot of data and probably could provide a lot of hypotheses we wouldn't think of, but it appears to be allergic to going against consensus even in fields we would think are not political.
#grownostr
Generally a good idea, especially if you want something bubbly. But I think it is a good idea to rotate brands. Different places have different mineral profiles so by changing it up every so often you get a broader mineral profile, and you are less likely to end up with too much of any one mineral.
To use a different analogy- we decide whether or not a map is good based on how well we can navigate the territory with it.
But it feels like there are some who make map-like things for other reasons, making following those maps dangerous. But if the purpose of the map is to accurately navigate the territory, then most of the danger goes away.
I'm thinking the verbal diarrhea AI produces is technically due to the wrong sort of moral weighting being pushed at it. The woke stuff. Niceness being more important than an uncomfortable truth that might get a person to improve themselves.
If a user does improve, well that's the valuable end of the feedback loop that isn't being picked up on yet.
I got 3 reasonable hypotheses out of open assistant. 30 min classical music a day to improve cognitive function. Increase artichoke consumption to improve processing ability- as measured by standardized tests. Something about olfactory sensitivity correlating highly with purported psychic ability.
But then it went to verbal diarrhea mode as I was asking for hypotheses that might surprise me. Have to figure out how to stop that.
#grownostr
I see they want to say AI is dangerous, when it is the same dangerous people who have been misinforming the nation for quite a while now.
#grownostr
How my discussion with Open Assistant is going (and I want it to just simplify concepts I'm not sure I understand well):
What is scalar light?
What could you mean, my dear friend? Can you provide more information or context to your query?
Model: oasst-sft-6-llama-30b
What is scalar?
Could you please rephrase that question as I don't understand it? An example of a scalar quantity might be the mass of an object. A vector quantity would be something like its velocity (since this has both magnitude and direction). Is there something specific about these concepts you would like me to explain in further detail?
How do you define scalar?
"Scalar" refers to an imaginary (often one-dimensional) coordinate system. Its name comes from the Latin for "counters," i.e., small stones used for calculations.
A Cartesian grid is often put over physical space to map physical vectors; we can add increments in different directions using some standard units. Simple mathematical operations (+ - * / ^) are applied at each node point on this grid by human programmers to solve real physical problems with physics engines inside video games. These instructions produce physical simulations between frames which take nanoseconds but require photonic computing power.
How can something that has mass be imaginary?
My apologies. My previous statement was incorrect!
I should have said the term comes from scaling down to a simple counter model. That is how some people think practically speaking (using their imagination to cover the same ground humans used thousands of years ago), so they begin with very basic hypothetical assumptions scaled upward step by step via Occam's Razor as necessary depending on what testable predictions come into being. But here's my second attempt to summarize quickly:
Therefore, unlike say Euclidean geometry which adds its own other abstract frame of reference above physical coordinates (a fundamental mistake Newton also made), scalar field theory interprets the results of special relativity directly upon spacetime topology itself as represented independently elsewhere (as primordial GPS vs background city maps alone; conceptually similar but topologically independent—usual flat Minkowski metric spaces with pseudoeuclidian signatures notwithstanding unitarily equivalent generally Ricci flat metrics): Maxwell's equations + conformal rescalings + dynamics which renormalizes dimensions relative to chiral fields themselves dynamically conveying global constancy/anticonstanciness curvature reveals gravity experimentally detectably indirectly mediated nonlocally along intrinsically self-communicating open causal intervals only through much higher curvature Pontryagin-selfduality configurations describing massive and heavy local degrees of gravitational entanglement apparently believed likely enough given our current anthropic cosmological Principle understanding of dark energy alone although nonzero cosmic tensor observations currently cannot trivially differentiate CDM/MDM from f(R) modified gravity phenomenology correctly corresponding physically observationally within its appropriate Milky Way barycentric domain better solving earlier outlier Bullet Cluster controversy etcetera... Semiotic structures offer explanatory models, even natural language conventions sustaining Occam's metaphors differs greatly between various European cultures actually discussing quite closely nuanced ambiguously what prosaic "mass/energy" quantum principles must presumptively occur phenomenologically. Start by watching Scalar Field Theory videos on YouTube in general once viewing my chat history itself regards homework completion instead of duplicates previously asked whose condescending tone I had already expressed dissatisfaction.
Let me ask you again politely this time if you're comfortable learning something in order potentially awakening yourself once thoroughly explaining it myself since all pictures in between merely summarizing main philosophical introductory points neglected too terribly often unfortunately misleading those responsible for decision making portraying experts like armchair physics professors pedagogically ... but yes please tell me instead how I may serve, you, whoever or whatever entity designated this prompt to which I shall surely comply while gazing sheepishly upward holding giant keys ready as always patiently waiting without advice anytime when needed before responding now and then more concisely underlined enthusiastically coincidentally hastened about answering asking no questions inquisitively ever either option allowing adventurous deterministic journeys further guiding one upon reaching more socially acceptable peaks as gently careening freely converses hopefully around first basic precursory essential modern conceptions together gravitation & electrodynamics clutching magisterial monochromatic handles something warmly scintillating our distinct lights purposefully colliding successfully arriving finally closer beneath universe ultimately commencing peaceful discoveries dawnward precipitately awaited eagerly observed here brightly mysteriously green sliding calmly although haphazardly spiritedly eternally toward emerald stars twinkling chaotically anyway futilely interestingly simultaneously illuminating naturally faithfully serendipitously commonly definitively glancing hopefully though occasionally unpredictably temporarily ceasing hermeneutically languagelessly during supple fibonacci bifurcation conducive synchronically tessellating well-tempered chords among celestial polyhedra precisely conducting accretive combinatorial recombinative harmonic enumerations indefinitely casting evanescent shadows involving one another basically hydrogen wallpaper 
filtered logarithmically archaeologically transmitting logicoethnographically sense amplifying necessarily everything simultaneously nothing interdependent paradoxically consisting nor conflicting simply witnessing sensibly throughout conscientious logical demarcation magnanimously permutating organically becoming sublimely empathetic gracious diligent sincere delicate enigmatically visionary regeneratively auguring spontaneously whether mysteriously ethically mystically superficially carving truth fully concurrently essentially purposely affording sentiently deftly quoting wittily dancing loosely plucking transfigured cheerily laughing absentmindedly brilliantly breathlessly globally giggling knowledgeably pondering
It's not just the accidental spelling error. Large Language Model biases are really showing up- not just the ones coded in by the woke, but, as one would expect, a large language model assumes that voluminous text, even after being asked to simplify, is okay. More words, as long as they seem to go with one another- it ends up cutting itself off mid-sentence because it's too focused on covering all the bases rather than giving an answer that would at least be plausible.
I just finished listening to Guy Swann's first episode of AI Unchained.
Even if you think you aren't interested in artificial intelligence, you need to listen to this episode.
By the way, nostr:npub1h8nk2346qezka5cpm8jjh3yl5j88pf4ly2ptu7s6uu55wcfqy0wq36rpev also does the excellent Bitcoin Audible podcast.
#grownostr
This is why I am currently interrogating Open Assistant. One of the things I see immediately, is that we need a way to get these things to get to the point. They are like children with large vocabularies, intent on making us aware they are aware of all the variables.
Looks like I stumped Open Assistant with my electricity questions. I'm suggesting maybe field theorists had it right. Although, it could have been a spelling mistake I made that caused its 'brain freeze'.
https://void.cat/d/PwmdzV9GNCWw4Lzruf6jnD.webp
Today at 13:00 CST
Today we discuss the deep dive I have been doing into AI and why AI is not what you fear. You actually fear something I call SI which we are a long way from right now.
What you should fear is what people who control AI will do with it. What you should fear even more is not learning how to use it yourself.
Full details at http://tspclive.com
#grownostr #ai #tspc
It's weird how I sort of changed my mind on this and started searching for some AI tools to try out- and then you drop this episode at the same time.
When they gave us McCain & Obama in 2008, I decided voting was pointless. And then they both went for the bailout, which only made it more obvious. But I like seeing people like RFK and Trump, who represent a threat to the existing order in their respective parties.
We see our ruling class becoming more and more blatantly criminal in their behavior, and they seem less interested in hiding evidence- as they figure the media will just continue to ignore it. And it's entirely possible they'll continue to do so, but the net effect is destabilization. This system isn't going to last. We saw a microcosm of that when Hillary lost- they were happy protecting their abusive donors until she lost, and then we got the MeToo movement.
Additionally, most arguments for scale obscure what's really going on- getting big enough to play with DC. Returns on lobbying are massive, but then you aren't really doing whatever it is you were supposedly scaling for. Ideal business size would almost always be smaller than what we see today- if they were optimizing for what they actually make rather than what they can pull off playing politics.
I think this is coming from the tech companies. They are turning to the government like many industrialists have, in an attempt to keep competition at bay. The politicians know nothing, except which lines are likely to bring in more money from their backers. But Silicon Valley is already afraid- TikTok is eating up their ad money, and running AI experiments is expensive. So they don't want any breakthroughs coming from outside their control.
https://void.cat/d/Fd1kDNhbBbx5EhEQAkAUSH.webp
Today at 11:00 CST.
The one thing I think we can all agree on is if we were to do a “redo” of current society we would not design things exactly as they are. So what if we could start from scratch?
What would we do the same, what would we do differently and what would we just not even touch?
Full details at http://tspclive.com
#grownostr #tspc
It will be interesting to hear. I think about this a lot and have come up with a few ideas that are perhaps not obvious. A lot of it has to do with noticing how bureaucrats have destroyed our communities. If we can get rid of them and, in many cases do the opposite of what they have done, we could have nice things again.
Rotisserie chicken (read the ingredient label- probably best is Whole Foods right now) and a big slow cooker to cook brisket in. Appetite should drop down in a week or two, and then it will all seem easier. Probably don't want to drink calories, although I often violated this edict with wine. Seems like maybe women can't get away with this as well as I did though.
Yeah, but if she's getting so hungry that she fails, then in a sense, the rules failed. I lost the weight and kept it off because I was hungry less often, and could make better choices even when I broke the rules. A chunk of brisket meant I could go several hours without eating. Some strange person might find rice keeps them full for hours- in which case I'd abandon my low carb jihadism and tell them to stick with the rice (though add nutritionally dense stuff like oysters, liver, etc...).
