- cross-posted to:
- [email protected]
No one uses this meme correctly and it makes me irrationally upset.
At this point, this movie is probably older than most of the people that use this meme template.
What a terrible idea.
If I were to pay for a digital item, I would expect a guarantee that the data is valid and works.
As in, paying for LLMs is, imo, actually a worse deal than standard microtransactions.
They’re more like loot boxes: you pay for the chance of a good result.
In which cases would a competent dev use an LLM?
Boring, tedious shit that doesn’t require brainpower, just time, when fixing whatever comes out of the LLM is less annoying than doing it myself.
When the documentation is shit and you don’t have time to scroll through 100 classes to find that one optional argument that one method accepts, I’ve found LLMs very useful. They are pretty good at understanding and summarizing text, not so much at logic though, which is key for developing.
Looking up how to do something, as an improved stackoverflow. Especially if it provides sources in the answer.
Boilerplate unit tests. Yes, yes, I know - use parametrized tests (see the sketch after this list), but it’s often not practical.
Mass refactoring. This is tricky because you need to thoroughly review it, but it saves you annoying typing.
I’m sure there’s more; it’s far from useless. But you need to know what you want it to do and how to check whether it was done correctly.
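For anyone who hasn’t used them, here’s a minimal sketch of the parametrized style I mean, assuming pytest; `slugify` is just a made-up function so there’s something to test:

```python
# Minimal pytest sketch of a parametrized test (slugify is a made-up example).
import pytest


def slugify(title: str) -> str:
    return "-".join(title.lower().split())


@pytest.mark.parametrize(
    "title, expected",
    [
        ("Hello World", "hello-world"),
        ("  spaced  out ", "spaced-out"),
        ("already-slugged", "already-slugged"),
    ],
)
def test_slugify(title, expected):
    assert slugify(title) == expected
```

The impractical part is when each case needs different setup or mocks; that’s when one flat boilerplate test per case often ends up more readable.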
I am so far from trusting an LLM to do mass refactoring, even with heavy review. Refactoring bugs can be super insidious.
> Boilerplate unit tests.

It will generate bad tests, so you’ll have lots of tests blocking your work that don’t actually test the important properties (toy example after this comment).
> Mass refactoring.

That’s an amount of trust in the LLM’s capacity to not create hidden corner cases, and in your capacity to review large-scale changes, that… I find your complete faith disturbing.
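Going back to the bad-tests point, a toy, made-up example of the kind of test that goes green and clogs your pipeline without pinning any property that matters:

```python
# Hypothetical "green but useless" generated test: it exercises the code,
# but the assertions still pass even if the math is completely wrong.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)


def test_apply_discount_runs():
    result = apply_discount(100.0, 10.0)
    assert result is not None          # trivially true
    assert isinstance(result, float)   # says nothing about the actual value
```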
As always, the specific situation matters. Some refactors are mostly formulaic, and AI does great at that. For example, “add/change this database field, update the form, then update the api, update the admin page, update the ui, etc.” is perfectly reasonable to send an AI off to do, and can save plenty of programmer time.
Until the one time you don’t properly check the diff, a +/- or a </=/>/<=/>= got reversed, and you now have an RCE in test, soon to be in prod.
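A made-up sketch of that kind of flipped-operator bug; the names are invented, and it shows a broken privilege check rather than a full RCE, but the review problem is the same:

```python
# Hypothetical access check; everything here is invented for illustration.
REQUIRED_LEVEL = 3  # minimum privilege level for admin commands


def can_run_admin_command(user_level: int) -> bool:
    # Intended check: return user_level >= REQUIRED_LEVEL
    # One reversed operator buried in a large diff:
    return user_level <= REQUIRED_LEVEL  # now low-privilege users pass too


print(can_run_admin_command(0))  # True, but it should be False
```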
I very rarely find result summarizers useful. If I didn’t find something by searching normally, there won’t be anything useful in there either.
I sure love tests and huge codebases with errors in them. In the time it takes me to read and understand an LLM’s output, I could write it myself, and save time later when expanding/debugging.
When yarn/react/next.js/amplify breaks in some new and idiotic way, Claude is helpful more often than not. Why spend hours googling and sifting through GitHub/Stack Overflow/etc. when Claude can tell me what option to tweak to fix it in a fraction of the time?