I just tried ChatGPT (GPT-5) on a tough cryptography coding challenge with a very limited prompt [1], and its response was basically perfect.

We're screwed.

[1] In case you're curious, it was: "Implement the inner product argument as described in Bulletproofs 2017 (authored by Benedikt Bünz et al.), in Python".
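
For reference, here's a minimal sketch of the kind of thing that prompt asks for: the recursive inner product argument from the Bulletproofs paper. To keep it short I use Z_Q under addition as an insecure stand-in for an elliptic-curve group, so the folding algebra is the protocol's, but the group choice and all the names below are mine, not taken from the model's output:

```python
# Minimal sketch of the Bulletproofs inner product argument (Protocol 2).
# CAVEAT: Z_Q under addition stands in for an elliptic-curve group, so this
# is NOT secure -- it only demonstrates the recursive folding algebra.
import hashlib

Q = 2**127 - 1  # Mersenne prime; scalars and "group elements" both live in Z_Q

def inner(u, v):
    return sum(x * y for x, y in zip(u, v)) % Q

def challenge(t, L, R):
    # Fiat-Shamir: absorb L, R into the transcript, squeeze out a scalar.
    t.update(L.to_bytes(16, "big") + R.to_bytes(16, "big"))
    return int.from_bytes(t.copy().digest(), "big") % Q or 1

def prove(G, H, U, a, b):
    # Prover knows a, b opening P = <a,G> + <b,H> + <a,b>*U.
    t = hashlib.sha256(b"ipa-demo")
    G, H, a, b = list(G), list(H), list(a), list(b)
    rounds = []
    while len(a) > 1:
        n = len(a) // 2
        aL, aR, bL, bR = a[:n], a[n:], b[:n], b[n:]
        GL, GR, HL, HR = G[:n], G[n:], H[:n], H[n:]
        L = (inner(aL, GR) + inner(bR, HL) + inner(aL, bR) * U) % Q
        R = (inner(aR, GL) + inner(bL, HR) + inner(aR, bL) * U) % Q
        x = challenge(t, L, R); xi = pow(x, -1, Q)
        a = [(l * x + r * xi) % Q for l, r in zip(aL, aR)]
        b = [(l * xi + r * x) % Q for l, r in zip(bL, bR)]
        G = [(g * xi + h * x) % Q for g, h in zip(GL, GR)]
        H = [(g * x + h * xi) % Q for g, h in zip(HL, HR)]
        rounds.append((L, R))
    return rounds, a[0], b[0]

def verify(G, H, U, P, proof):
    rounds, a, b = proof
    t = hashlib.sha256(b"ipa-demo")
    G, H = list(G), list(H)
    for L, R in rounds:
        n = len(G) // 2
        x = challenge(t, L, R); xi = pow(x, -1, Q)
        G = [(g * xi + h * x) % Q for g, h in zip(G[:n], G[n:])]
        H = [(g * x + h * xi) % Q for g, h in zip(H[:n], H[n:])]
        P = (x * x % Q * L + P + xi * xi % Q * R) % Q  # P' = x^2*L + P + x^-2*R
    return P == (a * G[0] + b * H[0] + a * b % Q * U) % Q

if __name__ == "__main__":
    import random
    rng, n = random.Random(1), 8  # n must be a power of two
    G = [rng.randrange(1, Q) for _ in range(n)]
    H = [rng.randrange(1, Q) for _ in range(n)]
    U = rng.randrange(1, Q)
    a = [rng.randrange(Q) for _ in range(n)]
    b = [rng.randrange(Q) for _ in range(n)]
    P = (inner(a, G) + inner(b, H) + inner(a, b) * U) % Q
    print("verified:", verify(G, H, U, P, prove(G, H, U, a, b)))
```

This is illustration only: a real implementation needs a group where discrete log is actually hard, plus blinding and constant-time arithmetic, which is exactly why it makes a good test prompt.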


Discussion

Why screwed? I think it can be a great tool for advancing freedom tech.

I mostly mean the gap between actually-open AI tooling and what the corps can do. At the moment it's not even just their vast hardware resources; it's all the other tricks they have to gain a competitive advantage.

Still, it was a flippant remark; at the *software* level I still tend to think open source will win. But hardware, that's the long-term issue.

I agree, and I've made this argument somewhere else as well. We need a way to eliminate the huge amounts of energy required, either by making energy much cheaper or by radically changing hardware, possibly from the foundation up.

Right now this plays perfectly into the hands of big corporations, not the individual.

The LLM gave a reference in its answer, so this looks like it was a Google-search replacement.

I didn't see any reference in my output. What LLM do you mean? Pretty sure this is "new" code (of course output like this is never really "new").

To be fair, this is of course an easier type of task than "implement an algorithm to do X": the algorithm is already prescribed, so it "only" has to "understand" it.

Mamma mia!

Are you prepping something for us in two weeks' time? 👀

Possibly!

If existing implementations are out there, translating between languages is trivial, provided the model has a good set of inputs from both languages and English.

I had Claude 3.7 do this from JavaScript code to Go. It was also pretty much perfect; the one confusion it made was already present in the original code it translated from. If the code is well formed and the names are meaningful, I would expect even older-generation models to do a sufficient job that the result would execute and produce the expected output within a few queries to the LLM.

Good luck getting it to design something completely new with no existing examples.

We are not screwed; you are just a pessimist who believes the propaganda that we are going to be replaced.

Spoiler alert: computers don't have a reason to participate in a market, and without the biological imperative of survival they can't be legitimately creative.

If you had actually worked with codebases largely generated by an LLM, you would know that the first thing to go out the window is security. Security requires the creativity of programmers to come up with both ways to break a system and ways to mitigate those attacks.

Since the machine has no way to develop creativity, we are not screwed. We just need to up our game: use these tools to spend less time on the things they do well, and focus on what we do well.

I don't understand much about coding, but I am sure coding is native AI territory.

Coding, kinda. I would say human language is what they're best at. But they are very good at coding tasks that are not ridiculously niche.

I asked AI to do some basic financial planning and it dropped the ball hard. Then I asked it to find the current price of bitcoin and it crashed. We're going to be alright.

True. This was only half serious.

Only 2 weeks ago I did a similar experiment and noted how it was kinda dangerous; you can prompt it to go in entirely the wrong direction and then it will very confidently do the wrong thing, which is not ideal in cryptography!

I remember taking part in a CTF this year; one of my friends asked ChatGPT to solve a task. We had a Python file with a crypto algorithm, and he asked ChatGPT to find a logic error in the code. It did, and it wrote code to exploit the error and recover the flag. We were the only team to solve that task 😂

Now ask it to use that pure implementation in a real-world codebase that does something.

Or ask it to implement the same thing in Rust.

The errors will abound.

Also, you only know it's perfect because you understand the material and you read the code.

Otherwise it could have been anything, and we wouldn't know.

I'm sure. It kinda depends, though; I've seen good stuff and bad stuff (including in Rust, including cryptography). This case did have the advantage of being a pure implementation rather than a "new reasoning" task, though a hard one. I've seen more good results than bad. Also, this is the latest gen; I don't know whether GPT-5 is particularly good compared with the others.

It's not reliable in general, but if you know what you're doing it's super effective.

Maybe implementing these pure algorithms from a descriptive source is the best thing they can do,

which is similar to porting code from one language to another, the other thing they do best.

It is an intelligence amplifier that will greatly disequalize our society. There will be those who master it, and those who are mastered by it.