I think the answer to this is lack of adoption.
Over my dead body.
In my experience LLMs do absolutely terribly with writing unit tests.
IMO this perspective that we’re all just “reimplementing basic CRUD” applications is the reason why so many software projects fail.
I’m fairly sure the crouch jump is part of the Half-Life 1 tutorial level.
Isn’t the entire point of federation to be able to do what you’re describing?
First part of the article sounds like what I’d expect.
The second part makes me wonder if this research was sponsored by some company which provides “Prompt Engineering” training.
Regarding mutation testing, you don’t write any “tests for your test”. Rather, a mutation testing tool automatically modifies (“mutates”) your production code to see if the modification will be caught by any of your tests.
That way you can see how well your tests are written and, in general, how well-tested the different parts of your application are. It’s extremely useful.
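A minimal sketch of the idea (class names and numbers are mine, purely illustrative), assuming JUnit 5: the tool flips an operator in production code and checks whether any test fails.

    // Production code (illustrative): a mutation tool might flip ">=" to ">".
    class Discounts {
        static double discount(int quantity) {
            return quantity >= 10 ? 0.10 : 0.0; // mutation target: ">=" -> ">"
        }
    }

    class DiscountsTest {
        @org.junit.jupiter.api.Test
        void bulkOrdersGetDiscount() {
            // Only exercises quantity 20, so the ">=" -> ">" mutant survives:
            // every test still passes even though the boundary is untested.
            org.junit.jupiter.api.Assertions.assertEquals(0.10, Discounts.discount(20));
            // A boundary assertion would kill that mutant:
            // org.junit.jupiter.api.Assertions.assertEquals(0.10, Discounts.discount(10));
        }
    }

A surviving mutant like that points directly at the assertion your suite is missing.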
On the one hand, mutation testing is an important concept that more people should know about and use.
On the other, I fail to see how AI is helpful here, as mutation testing is an issue completely solvable by algorithms.
The need to use external LLM services like OpenAI’s is also a big no from me.
I think I’ll stick to Pitest for my Java code.
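For anyone who wants to try it, the quickest route on a Maven project is the documented mutationCoverage goal (assuming the pitest-maven plugin, plus pitest-junit5-plugin if you’re on JUnit 5, is declared in the pom):

    mvn test-compile org.pitest:pitest-maven:mutationCoverage

The HTML report with killed and surviving mutants lands under target/pit-reports by default, if I remember right.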
It’s no less possible than for the tooth fairy or Santa Claus to exist.
That’s not creepy or weird, that’s horrifying.