i'm all about the low-level CS stuff: cryptography, protocols, concurrency. i just don't really care about AI, frankly; i don't see the use in it. all the stuff i've seen AI do is garbage, because it takes a human brain to recognise quality inputs, and thus most of what's in the models is rubbish

the hype around ML with regard to programming is way overblown, GPTs generate terrible, buggy code

about the only use i see for them might be in automating commit messages that don't attempt to impute intent, and in improving template generation

outside of that, it's really boring. you can instantly tell when text is AI generated, and when an image is AI generated. there is no way it is ever going to get better than the lowest common denominator, and everyone seems determined to learn that the hard way


Discussion

There’s an entire field of data science and analytics, much of which leverages ML, which has absolutely nothing to do with “AI” or LLMs

i'm aware of this, but much of it works on the same basic algorithms, proximity hashes and neural networks... GPT is just one way of applying them

what's most hilarious about the hype over GPT (nominally "Generative Pre-trained Transformer") is that in practice it's literally generative predictive text

it's literally spinning off, on its own, a prediction of what might follow the prompt. anyone used a predictive text input system? yeah... they are annoying as fuck because they don't actually predict anything very well, and certainly not in your personal idiom

Also much of it, in fact most of it, does not work on neural networks or other “black box” unsupervised models.

The most common uses of ML for a business or researcher are correlation, categorization, and recommendation systems, or some form of forecasting/predictive modeling (I probably missed some). None of these require a neural net, or anything that resembles what we commonly call “AI”.

yeah, and much of it is calculus based. i've done a lot of work with difficulty adjustment, which uses historical samples to create close estimates of the current hashpower running on a network

i'm not a data scientist though, my area of specialisation is more about protocols and distributed consensus - and the latter (along with spam prevention) tends to involve statistical analytical tools

That’s cool. I am working as a data analyst in the Bitcoin mining world (happy to disclose where outside of Nostr). I’ve also done a lot of work with on-chain data, pool shares, and things related to hashrate.

Would love to look at your work on the ‘close hashrate estimates’ if it’s open source!

well, dynamic difficulty adjustment is pretty much... hmm, not really sure; it's not something that needs new tools. it was only because i happened to be working for a shitcoin project building a hybrid proof-of-work/proof-of-stake algorithm that the CTO, a trained physicist, noticed that difficulty adjustment resembles a type of device he was familiar with from his work in physics: the PID controller

PID stands for Proportional Integral Derivative. it uses a gain parameter for each of those three terms, computed over a historical sample of data points, to adjust a system (usually a linear parameter) to the inputs it's getting

in my experimenting with it, i built a simulator that tested parameters for P, I and D. i found it was possible to tune it to adjust smoothly, or to be more accurate at the cost of a high noise component

i've since read a little more about how to work with these things and learned that the derivative can help a lot, though in my tests it just added noise. the trick is to apply a low-pass filter to cut the high frequencies out of the derivative term, and that would probably allow it to adjust faster to changes without the noise that the P and I factors create when tweaked for fast adjustment
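the loop described above might look like this in go. this is a minimal sketch, not code from my simulator; the gains and the plant in `main` are made up for illustration, and the derivative term gets a simple first-order low-pass filter as described:

```go
package main

import "fmt"

// PID is a minimal proportional-integral-derivative controller with a
// first-order low-pass filter on the derivative term to suppress
// high-frequency noise.
type PID struct {
	Kp, Ki, Kd float64 // tuning gains
	Alpha      float64 // derivative filter coefficient, 0..1 (higher = smoother)
	integral   float64
	prevErr    float64
	dFilt      float64
}

// Update computes the control output for one time step of length dt.
func (c *PID) Update(setpoint, measured, dt float64) float64 {
	err := setpoint - measured
	c.integral += err * dt
	raw := (err - c.prevErr) / dt
	// low-pass filter the raw derivative so noise doesn't dominate
	c.dFilt = c.Alpha*c.dFilt + (1-c.Alpha)*raw
	c.prevErr = err
	return c.Kp*err + c.Ki*c.integral + c.Kd*c.dFilt
}

func main() {
	// drive a simple integrator plant (x' = u) toward a setpoint of 10
	pid := &PID{Kp: 0.8, Ki: 0.2, Kd: 0.1, Alpha: 0.7}
	x, dt := 0.0, 0.1
	for i := 0; i < 200; i++ {
		x += pid.Update(10, x, dt) * dt
	}
	fmt.Printf("final value: %.3f\n", x)
}
```

with these gains the plant settles close to the setpoint within the simulated window; cranking Kp and Ki for faster response is exactly where the noise trade-off shows up.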

the fact is that the bitcoin difficulty adjustment is actually sufficient for the task, and due to its simplicity it is preferable, but i could write a dynamic adjustment that is resistant to timewarp attacks and would reduce the variance of solution times

Gotcha. I thought you meant you developed a more accurate way to estimate the total network hashrate on Bitcoin, than simply deriving it from difficulty and block count per day/week/etc

no, i was talking about a dynamic difficulty adjustment, which uses statistical analysis of recent solve times to derive an estimate of what the current difficulty target should be
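the simple derivation mentioned above is just arithmetic, for reference. a sketch, assuming the standard relation that difficulty 1 corresponds to roughly 2^32 expected hashes per block:

```go
package main

import "fmt"

// estimateHashrate derives the implied network hashrate (hashes/second)
// from the current difficulty and the observed average block interval in
// seconds. difficulty 1 corresponds to ~2^32 expected hashes per block.
func estimateHashrate(difficulty, avgBlockSeconds float64) float64 {
	const hashesPerDiff1 = 1 << 32
	return difficulty * hashesPerDiff1 / avgBlockSeconds
}

func main() {
	// at difficulty 1 and the 600-second target interval,
	// the implied hashrate is about 7.16 MH/s
	fmt.Printf("%.0f H/s\n", estimateHashrate(1, 600))
}
```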

you got me thinking though... it would be quite interesting to create a simple app that just follows the bitcoin chain and demonstrates what a better difficulty adjustment would give you at each block height versus the existing scheme

Yea that would be a cool study. But in practice, I think a more dynamic system like that would be easier for a large miner to game or cause havoc.

well, i have studied the subject pretty closely, and i think that if it's properly written it can be better

it's a hard job because it really needs to be as simple as possible

i'd love to do something like this though, just to demonstrate it... it's easy to capture the data, and you can have an app that derives each scheme's estimations, shows you the error divergence at each block under the super simple existing adjustment, and lets you see several alternative methodologies applied. i think it would be a great educational tool, and maybe it would lead to an upgrade of this element of the bitcoin protocol if the advantage is clear
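one of those alternative methodologies could be an exponential per-block adjustment in the style of ASERT, the scheme bitcoin cash adopted. this is a minimal sketch of the idea, not the exact consensus integer arithmetic; the parameter values in `main` are illustrative:

```go
package main

import (
	"fmt"
	"math"
)

// asertNextTarget computes a per-block difficulty target in the style of
// ASERT (absolutely scheduled exponentially rising targets): the target
// moves exponentially with how far the chain is ahead of or behind its
// ideal emission schedule, measured from a fixed anchor block.
func asertNextTarget(anchorTarget float64, timeDelta, heightDelta int64,
	idealBlockTime, halflife float64) float64 {
	// positive exponent (blocks slower than schedule) raises the target,
	// i.e. lowers difficulty; a negative exponent does the opposite
	exponent := (float64(timeDelta) - idealBlockTime*float64(heightDelta+1)) / halflife
	return anchorTarget * math.Pow(2, exponent)
}

func main() {
	// 10 blocks found exactly on a 600s schedule: target unchanged
	fmt.Println(asertNextTarget(1e6, 6000, 9, 600, 172800))
	// same 10 blocks found twice as fast: target drops (difficulty rises)
	fmt.Println(asertNextTarget(1e6, 3000, 9, 600, 172800))
}
```

because the adjustment is anchored to absolute height and time rather than a rolling window, this family of schemes is also naturally resistant to the timewarp-style games a windowed average invites.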

it really isn't that complicated... the current system is like a thermostat that adjusts every two weeks. advanced adjustment systems were settled long ago in other fields of tech, like the segway/hoverboard things: that is exactly this math applied to motion, and it's extremely stable now, ridiculously stable. hell, i remember 14 years ago it was being applied to military jets to improve their maneuverability

There’s also the whole range of analytics use cases that does not need any form of ML. The Python ecosystem has extremely robust tooling that makes this work easy. Rust tooling for dataframes etc is getting there, but it’s not reasonable to expect all analysts to learn Rust.

Does Golang have any libraries for dataframes and ad-hoc analysis that work in something like a Jupyter notebook?

as i mentioned elsewhere, my thing is protocols and distributed consensus, and go is the best language for this kind of work; it's what the language was built to do. it's just my assertion that it generally makes for better quality code IF PEOPLE FOLLOW THE IDIOM, unlike way too many go coders in the bitcoin/lightning/nostr space