By doing this back-and-forth review, is it faster today than if you coded it yourself?
One of the things I find myself doing a lot these days is getting ChatGPT to write a function and then using another chat instance to check the code. I'll do this over and over until all the bugs are caught. Since each chat acts like a fresh code reviewer, it often finds all of the bugs after many iterations.
I'll even give the code review from one of the AIs to the one that wrote the code, which then produces a different implementation based on the AI review. I then give that implementation back to the reviewer and ask whether it's better or not. I can very quickly get to an optimized and less buggy function this way.
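(If you wanted to script this write/review/revise ping-pong instead of copy-pasting between chats, a minimal sketch could look like the following, assuming the OpenAI Node SDK's chat completions API. `askModel`, `writeReviewLoop`, and the prompts are purely illustrative names, not anything from the chats linked below.)

```javascript
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

// Hypothetical helper: send a single prompt (a fresh "chat") and return the reply text.
async function askModel(prompt) {
  const completion = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }],
  });
  return completion.choices[0].message.content;
}

// Ping-pong loop: one call plays "author", a separate call plays "reviewer",
// and the review is fed back to the author for a revised implementation.
async function writeReviewLoop(spec, rounds = 3) {
  let code = await askModel(`Write a JavaScript function for this spec:\n${spec}`);

  for (let i = 0; i < rounds; i++) {
    const review = await askModel(
      `Review this code for bugs and suggest improvements:\n${code}`
    );
    code = await askModel(
      `Here is a function:\n${code}\n\nHere is a code review of it:\n${review}\n\n` +
        `Rewrite the function to address the review. Return only the code.`
    );
  }
  return code;
}

// Example usage:
// writeReviewLoop("Take two integers and return true if their sum is less than 100").then(console.log);
```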
This feels very *human*. This is exactly how code review tends to play out in meatspace.
Demo:
https://chat.openai.com/share/a3ac3201-eee8-4d69-9671-e1e8dd9c2917
https://chat.openai.com/share/21c189a6-f29e-4e31-b163-5f009cc48e1b
Discussion
From my personal experience, it’s waaay faster than coding it yourself. Likely 10x. It’s not just the coding, but the bug fixing. Using AIs to write the tests and then write the code generates far higher quality code really quickly.
I'm not a coder, so can you explain the tests part for me? What exactly is a "test" when writing code? And what exactly do you ask ChatGPT to do/build to fix your code?
Here’s a super simple example:
To AI:
Imagine a function that takes two integers, adds them, and returns true if the sum is less than 100, else false. Propose three tests.
——-
AI might respond…
Sure, here are three test cases for such a function:
1. **Test Case 1: Two small numbers**
- Input: 5, 10
- Expected Output: True (because 5 + 10 = 15, which is less than 100)
2. **Test Case 2: Numbers summing up to exactly 100**
- Input: 60, 40
- Expected Output: False (because 60 + 40 = 100, which is not less than 100)
3. **Test Case 3: Large numbers exceeding 100**
- Input: 75, 50
- Expected Output: False (because 75 + 50 = 125, which is more than 100)
You'd want a range of test cases to ensure that the function behaves correctly for different kinds of inputs.
———
Now write the JS function and tests using Jest
—————
Certainly! Here's how you would write the first test for the function using Jest:
First, let's define the function (I'll name it `isSumLessThan100`):
```javascript
function isSumLessThan100(a, b) {
  return a + b < 100;
}

module.exports = isSumLessThan100; // Export so the Jest test file can require it
```
Now, let's write a Jest test for it:
```javascript
const isSumLessThan100 = require('./path-to-your-function-file'); // Import the function

describe('isSumLessThan100', () => {
  test('should return true for numbers summing to less than 100', () => {
    const result = isSumLessThan100(5, 10);
    expect(result).toBe(true);
  });
});
```
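The AI's reply only covers the first test case; the other two from the list above would follow the same pattern, something like this (not from the transcript, just the same Jest idiom applied to the remaining inputs):

```javascript
const isSumLessThan100 = require('./path-to-your-function-file'); // Same placeholder path as above

describe('isSumLessThan100 (remaining cases)', () => {
  test('should return false for numbers summing to exactly 100', () => {
    expect(isSumLessThan100(60, 40)).toBe(false); // 60 + 40 = 100, which is not less than 100
  });

  test('should return false for numbers summing to more than 100', () => {
    expect(isSumLessThan100(75, 50)).toBe(false); // 75 + 50 = 125
  });
});
```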
That's super interesting, thank you! I didn't realise ChatGPT was capable of something like that.