Most ML models have been obliterated by LLMs but I do hope that the vast diversity of different ML techniques remains relevant. There's so much mathematical ingenuity in all the pre-LSTM approaches. SVMs, Gaussian models, Random Forest, network algorithms, ...

Discussion

Do you think the dominance of LLMs might push some of these older methods to evolve or find new niches?

There’s so much untapped potential in combining the strengths of both worlds.

LLMs are overpowered for most basic intelligence tasks. I think other ML methods will always stay relevant in niches, just as they were pre-LLMs.

You're right, those methods fill the gap where LLMs are overkill.

Couldn't you kinda argue that building LLMs requires some ML-type work to create the model in the first place? How are LLMs created? I'm pretty sure it's some sort of voodoo.

LLMs are just for text mangling; they're essentially a lossy compression algorithm plus a recomposition mechanism.

they can't replace fuzzy logic, for a start
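
For what that means in practice, a minimal fuzzy-logic sketch (the thermostat setup and every threshold below are invented for illustration): graded membership functions plus a small rule base, the kind of transparent, hand-tunable control logic people have in mind here.

```python
# Tiny fuzzy controller: fuzzify a temperature, apply three rules, defuzzify
# with a weighted average. All thresholds are made-up illustrative values.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    # Fuzzify the input temperature into three overlapping sets.
    cold = triangular(temp_c, -10, 5, 18)
    warm = triangular(temp_c, 15, 22, 28)
    hot = triangular(temp_c, 25, 35, 50)

    # Rules: cold -> speed 0, warm -> speed 40, hot -> speed 100.
    weights, outputs = [cold, warm, hot], [0.0, 40.0, 100.0]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(fan_speed(10.0))   # only "cold" fires -> speed 0.0
print(fan_speed(26.5))   # partly warm, partly hot -> blended speed 62.5
```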

Data scientist here. I can assure you that LLMs are not replacing many existing ML applications; they're mostly going into new ones. Tree-based models still dominate on tabular data (I worked in credit fraud and we just used boosted tree algos). If LLMs are used in these applications to generate insights, it's usually hallucinations. If they're useful at all here, it's as orchestrators helping to train purpose-built classifiers, regressors, or networks.

After 3 years of R&D with all types of network structures and alternative algos, nothing beat a boosted tree in our client’s credit card fraud data sets
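
Roughly the kind of baseline being described, as a hedged sketch: synthetic imbalanced tabular data (not the commenter's real fraud data) and scikit-learn's HistGradientBoostingClassifier standing in for whichever booster you'd actually use (assumes scikit-learn >= 1.0).

```python
# Sketch of a boosted-tree baseline on synthetic, imbalanced "fraud-like" tabular data.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic tabular dataset with a ~1% positive ("fraud") rate.
X, y = make_classification(
    n_samples=50_000, n_features=30, n_informative=10,
    weights=[0.99, 0.01], random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0,
)

# Histogram-based gradient boosting: strong on tabular data with default settings.
clf = HistGradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# With heavy class imbalance, PR-AUC is more informative than accuracy.
scores = clf.predict_proba(X_test)[:, 1]
print("PR-AUC:", round(average_precision_score(y_test, scores), 3))
```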

random forest maxi here

Agreed. Random forest works out of the box, no param tuning 99.99% of the time, with very acceptable results.

The extra 1-2% performance you get from LightGBM, XGBoost, or CatBoost usually isn't worth it, and it makes your model more fragile and prone to drift quickly.
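
To illustrate the "out of the box" point, a minimal sketch: a stock RandomForestClassifier with default parameters, cross-validated on a standard toy dataset (numbers are illustrative, not a benchmark).

```python
# Default random forest, no parameter tuning, plain 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("5-fold accuracy:", round(scores.mean(), 3))
```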

Few.

We found isolation using nearest-neighbor ensembles (iNNE) works best for our anomaly detection application. Lots of use cases still for ML models outside of LLMs.
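
iNNE itself doesn't ship with scikit-learn, so here is a hedged sketch using the related IsolationForest as a stand-in; it shows the same isolation-based idea (anomalies are easier to separate from the rest of the data) on made-up data, not the actual iNNE algorithm.

```python
# Isolation-based anomaly detection on synthetic data: a dense "normal" cluster
# plus a few scattered outliers. IsolationForest is used as a stand-in for iNNE.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
outliers = rng.uniform(low=-8.0, high=8.0, size=(10, 2))
X = np.vstack([normal, outliers])

# contamination is the expected anomaly fraction; ~2% by construction here.
det = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = det.predict(X)  # +1 = inlier, -1 = flagged anomaly
print("flagged points:", int((labels == -1).sum()))
```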

ML still outperforms LLMs at some tasks. Risk prediction, for example.

Can't replace singular value decomposition with a language model, but it could probably enhance the way we use language models.
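
As a concrete sketch of that: plain numpy SVD used to build a low-rank approximation of a matrix, here a random stand-in for a table of embedding vectors, the kind of classic linear algebra that can sit alongside a language model rather than be replaced by one.

```python
# Truncated SVD: keep the top-k singular directions of a (random, illustrative) matrix.
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 256))  # stand-in for 1000 embedding vectors of dim 256

U, s, Vt = np.linalg.svd(E, full_matrices=False)

k = 64
E_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]  # rank-k reconstruction

rel_err = np.linalg.norm(E - E_k) / np.linalg.norm(E)
print(f"rank-{k} relative reconstruction error: {rel_err:.3f}")
```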

Boosted decision treeeeeeeees