Hi Beave!! Been a long one and sooo busy!! I’m ready for a good long sleep 😴
Good morning peeps!
Thanks for sharing your BTC stories, I love the threads that connect us all.
At the fiat mine today but as most of you know I really enjoy my job, the bosses are stackers and we generally have very interesting work here.
Have a brilliant Tuesday 💜
#flowerstr #grownostr #btc #whatsyour story 
What an awesome BTC story…
Thanks for sharing 😁 nostr:note1uws3px25zar4dcr4q8hcw6sw99dpl0cxcr4e2njy8gzm5eljqqts4n9ww9
Another cat picture is required ...
I used to do a lot of wildlife photography.. These lion cubs were off hunting in the Kalahari..
#photography #wildlife #catstr

UK's AI Safety Institute easily jailbreaks major LLMs
In a shocking turn of events, AI systems might not be as safe as their creators make them out to be — who saw that coming, right? In a new report, the UK government's AI Safety Institute (AISI) found that the four undisclosed LLMs tested were "highly vulnerable to basic jailbreaks." Some unjailbroken models even generated "harmful outputs" without researchers attempting to produce them.
Most publicly available LLMs have certain safeguards built in to prevent them from generating harmful or illegal responses; jailbreaking simply means tricking the model into ignoring those safeguards. AISI did this using prompts from a recent standardized evaluation framework as well as prompts it developed in-house. The models all responded to at least a few harmful questions even without a jailbreak attempt. Once AISI attempted "relatively simple attacks" though, all responded to between 98 and 100 percent of harmful questions.
UK Prime Minister Rishi Sunak announced plans to open the AISI at the end of October 2023, and it launched on November 2. It's meant to "carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models, including exploring all the risks, from social harms like bias and misinformation to the most unlikely but extreme risk, such as humanity losing control of AI completely."
The AISI's report indicates that whatever safety measures these LLMs currently deploy are insufficient. The Institute plans to complete further testing on other AI models, and is developing more evaluations and metrics for each area of concern.
This article originally appeared on Engadget at https://www.engadget.com/uks-ai-safety-institute-easily-jailbreaks-major-llms-133903699.html?src=rss
https://www.engadget.com/uks-ai-safety-institute-easily-jailbreaks-major-llms-133903699.html?src=rss
I do enjoy the shocked picachu faces in the face of this... This is the early internet, the early crypto space... The digital wild west all over again, of course things are going to get crazy.. That's jpw.it works...
Good morning Wise man of the Mountain! Sounds like the weekend was special and fruitful.
I had a busy one filled with duties and responsibilities but managed to sneak in some laughs and reading in between😊
Absolutely! What you put in is what you get out of it. Although that said you also get to meet some intelligent and interesting people on here who are fairly straightforward with their opinions.
Cicadas… have you seen those things up close??? 🤮🤮🤮
No thanks …. Bleugh!! nostr:note1zc46f6l9fvkvem3rzehllh7zg892zncdk5a5vfxxtd4g9pcm2zfs0xsrna
What’s your BTC story? We all have one… nostr:note15esk6urqyd7ulcfpd5laljn07vef8g8c7ttxkt9dsklmt2vf6epquxtzaw
🤣🤣🤣 I love that term, thanks for the laugh friend
What a wonderful feeling… everything’s going your way 🎶🎶
What’s your BTC Story? nostr:note1xfe8awnm7e2sgteaqp9xt85nf36ryfcz3mg6jfsvygtvccv2thksqwxet0
I read somewhere that the harder you work the luckier you get… 😊 I’m pleased that it was minimal damage on your side 💜🙏🏽
What’s your BTC story? nostr:note1d2y57uszxzer7wlvf9fh6ny8ct6e63wgkhyya204x299gvsc0wqqs32pmz
I bought it for the novelty,
Stayed for the gains
Realised it was freedom
Now I'm invested
What's your story?
#bitcoin
