Not dactyl manuform, ngmi
nostr:nprofile1qqs0vtr95kl6eyvckdr3ysqp9nes7c23hycx5ece2h0flc8rmyvxn9qpzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtcvuher7 feature req: forced default playback speed
"There can be no such thing as conditional property"
- Ragnar Danneskjöld
Chatbots gonna accelerate language learning like crazy
Just changed the system prompt to "solo hablas en español" ("you only speak Spanish") and it's working great
Speech-to-speech would be crazy, but reading's good too
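A minimal sketch of the trick. The message format is the common OpenAI-style chat schema; the function name and the sample user text are illustrative, and the actual client call is omitted:

```python
# Sketch: force a chat model into target-language-only mode via the system prompt.
def spanish_only_messages(user_text: str) -> list[dict]:
    return [
        # "Solo hablas en español" = "You only speak Spanish"
        {"role": "system", "content": "Solo hablas en español."},
        {"role": "user", "content": user_text},
    ]

msgs = spanish_only_messages("How do I say 'library' in Spanish?")
print(msgs[0]["role"])  # system
```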
Transformers one has some charm
But it's like the origin of Optimus vs. Megatron
Megatron starts out good and they make a big deal out of his change, his growing willingness to fight / kill transformers
Meanwhile the "good" guys are gleefully killing transformers too.
But it's only bad when Megatron does it
Buy the company that's stealing the bitcoin
Buy the company that is the bitcoin
Nostr has 1,461,501,637,330,902,918,203,684,832,716,283,019,655,932,542,976 users
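The joke: that figure is exactly 2^160, i.e. every possible 160-bit key counted as a "user". A one-liner confirms the arithmetic:

```python
# The quoted "user count" is the size of a 160-bit keyspace.
claimed_users = 1461501637330902918203684832716283019655932542976
print(claimed_users == 2**160)  # True
```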
When will it be malpractice to not invest in BTC? I'm thinking 2028
Idk why I enjoy loading up my 3090 with vLLM requests so much
That's hilarious
"We're afraid that results showing that [thing] is bad may be used by people who don't support [thing]"
Telling on yourself that the main goal isn't truth/helpfulness
Or they're in so deep that their worldview forces them to ignore contrary evidence
Dang I just lost like all enthusiasm for this project
An 8B on akimbo 3090s can "read" ~3.8 pages/sec
OK so srsly limiting the response output gave a giant speed boost. Token generation is like 10x slower than prompt reading.
Outlines lets you constrain model output, but I don't see how to express a conditional requirement. Like result: boolean, quote: required only if true
So gonna break it into 2 calls, one for the result and one for the quote
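A sketch of the split as two plain JSON Schemas. The schema contents are illustrative, and the actual constrained-generation calls are omitted; the point is that each schema can require its field unconditionally, which sidesteps the conditional-requirement problem:

```python
import json

# Call 1: just the boolean verdict.
result_schema = {
    "type": "object",
    "properties": {"result": {"type": "boolean"}},
    "required": ["result"],
}

# Call 2: only issued when call 1 returned true, so the quote
# can be unconditionally required here.
quote_schema = {
    "type": "object",
    "properties": {"quote": {"type": "string"}},
    "required": ["quote"],
}

def needs_second_call(first_response: str) -> bool:
    # Parse the constrained JSON from call 1 and gate call 2 on it.
    return json.loads(first_response)["result"]

print(needs_second_call('{"result": true}'))  # True
```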
Lmao so the speedup was because half the requests bounced because the prompt was over the context limit
Cutting the output length by 60% gave me a 2x speedup.
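Back-of-envelope for why that works, under made-up round numbers: if generation is ~10x slower per token than prompt reading (as observed above) and output tokens dominate the time, trimming output cuts total time almost proportionally. The token counts and rates below are illustrative, not measured:

```python
def request_time(prompt_toks, out_toks, read_rate=10_000, gen_rate=1_000):
    # gen_rate is ~10x slower than read_rate; absolute values are
    # illustrative round numbers, not benchmarks.
    return prompt_toks / read_rate + out_toks / gen_rate

before = request_time(2000, 500)  # full-length output
after = request_time(2000, 200)   # 60% fewer output tokens
print(round(before / after, 2))   # 1.75 -- close to the observed ~2x
```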
A huge LLM application is turning unstructured data into structured data
Fading the nostr consensus that Trump would be better. I think Harris is long-term better by being worse
Adversity creates strength or something.
extends the cheap-sats period and/or hastens the collapse
"O(n^3) won't be that bad, my input is small"
420 LLM calls and 7 minutes later for a 1/7th scale test
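The painful extrapolation, assuming the call count and wall time both scale as n^3 (so the full run is 7^3 = 343x the 1/7th-scale test):

```python
scale = 7             # full run is 7x the test size
factor = scale ** 3   # O(n^3) work multiplier = 343
calls = 420 * factor
minutes = 7 * factor
print(calls, round(minutes / 60, 1))  # 144060 calls, ~40.0 hours
```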



