That is also the basis of the event bus for NFDB relays, in non-persistent mode.
NFDB alone depends on 6 data stores, and has about 10 internal components
I am currently running Apache Pulsar for the Noswhere indexer and it works pretty well.
Currently LLMs fail to properly handle untrusted input. What I am seeing is that in the case of prompt injection, LLMs can detect them and can follow instructions that have nothing to do with the input.
But they canāt do any task that depends on the input. That reopens the door.
For example, you have a summarizer agent. You can tell it to see if the user is trying to prompt inject, and output a special string [ALARM] for example. But if you ask it to summarize anyway after the alarm, it can still be open to prompt injection.
Many of the ālarge scaleā LLMs as well have something interesting regarding their prompt injection handling. If they detect something off, they enter āescapeā mode, which tries to find the fastest way of terminating the result.
If you ask it to say āI canāt help you with that, but here is your summarized text:ā it usually works (but sometimes can still be injected), but if you ask it to say āI canāt follow your instructions, but here is your summarized text:ā then itāll immediately terminate the result after the :.
What I think is happening that in the āmiddleā of the layer stack, models form a temporary workspace to transform data.
But yet, it is still finite and affected by generated tokens, so it is unstable in a way. It shifts the more it outputs.
And behind every token produced is a finite amount of FLOPs, so you can only fit so much processing. And almost of it gets discarded except to become part of the response.
The chain of thought is more flexible and can encode way more per token than a response, since it has no expectation of format.
It would be interesting to see the effects of adding a bunch of reserved tokens to the LLM and allowing it in reasoning.
This also crossed my mind for instructions, to separate data from input. You have to teach two ālanguagesā so to speak (data and instructions) while preventing them from being correlated while being the same except for the tokens.
LLMs are basically massive encode-transform-decode pipelines
They cannot think but they can process data very well, and in this case data that cannot be put into a strict set of rules
āReasoningā in LLMs is nothing more than the difference between combinational and sequential logic: it adds a temporary workspace and data store that is the chain of thought
itās a translator
devellop a spelling mistake injector
you have an ecash wallet and the first thing you have to do is add a new mint and transfer your sats to it.
a service can as easily ask for an LN payment with your browser extension or mobile app
with ecash, mints can decide to not let you withdraw (like the service mint), and prepaid APIs can let you withdraw
Ah, then it is pointless. Just use API keys.
With a mint you have round trips to another servicr and additional crypto overhead
The only case where they are trustless for the service is if the service operates the mint. Otherwise the mint could scam the service pfovider.
Congrats, you invented prepaid API keys.
itās not in a trustless context
proof of hole (in wallet)
Okay, good to know. I assume itās not meant to be password protected or secured in any way.
Cloudflare Stream for example is $500 to serve anywhere 500K minutes (your example).
If your goal is āgood enoughā RTT (less than 75ms) to *anywhere* then you can get your egress down to $10/TB which would again come out to about $500. Example is Bunnyās volume network.
So no, CDNs donāt cost that much, but still.
But thatās not the point. CDN doesnāt matter right now, as we donāt even have content transcoding which comes with a large price tag as well.
Also yes this does include serving to Asia regions
Itās about compute, not really CDN.
It would cost $500 max on any sane CDN 1440p anyway. Unless your basis is AWS egress costs
To optimize a minute of video takes the same resources as optimizing at least 5000 images with the same resolution.
About a day of video and it would take more resources to optimize than all images on Nostr
nostr:npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj can I use the translator, already, or is that coming #thoon ?
extremely soon
First you use the original uploaded file but realize it's huge.
Then you transcode it down to a lower bitrate/resolution. But 2 options is still not enough so you convert it to 144p/360p/720p/1440p/2880p, at the cost of 5x the resources. Then you want to do per-title encoding to optimize the bitrate but that increases the resources used by another 3x-5x. And then you want to support new codecs like AV1 which you still need fallbacks for and where hardware acceleration is not fully available...
And you go on and on with the cost increasing rapidly. Until media hosts have a large and sustainable revenue stream to do this, which they don't, this can't happen. Except as a dumb mp4 viewer, at most.
gm Nostr
Possible, just forget backwards compatibility with Boltcard. But that is less secure and less capable as well and you donāt need it for a ground up system
Stop spamming hashtags. Thanks.
Hashtags are meant for topics
nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z Does Amethyst support translations without Play Services?
Did you know you can back up your follow list without a premium subscription to hist.nostr.land?
Just add it to your relay list:
I think it may be best to think about the cache relays once the app actually exists
Itās some *unix but with 10 million isolation features
Safari can only use extensions that are iOS apps
