Quote:
Originally Posted by Quoth
No, LLM/Generative AI is just broken search destroying the environment. It's designed to produce plausible, not correct, output and nearly half the time completely wrong, but without being an expert you don't know which half.
Use search and learn how to do stuff.
Have you actually tried to use AI for coding?
I have, and I was astonished by how well it worked.
You have to write a good prompt. I wanted to process some *.json files from a smart watch and produce heart-rate graphs for each day. I wrote a prompt on aistudio.google.com asking it to write the code for me, and it produced a 190-line Python script that processed the files and generated PDF files with the data plotted exactly as I described. The script ran on the first try (in a sandbox, on a copy of the data). It had clear comments explaining what it does and even handled edge cases I had not asked for in my prompt.
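For anyone curious, the general shape of such a script looks something like the sketch below. This is just a minimal example I'm writing here to show the idea, not the generated script itself; the JSON field names ("timestamp", "heart_rate") and the folder name are assumptions, since every watch exports a different layout.

Code:
# Minimal sketch: parse smartwatch JSON exports and plot one
# heart-rate graph per day into a single PDF. The record layout
# (a list of dicts with "timestamp" and "heart_rate" keys) is an
# assumption; adjust to whatever your watch actually exports.
import json
from collections import defaultdict
from datetime import datetime
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages


def load_samples(folder):
    """Collect (time, bpm) samples from all *.json files, grouped by day."""
    days = defaultdict(list)
    for path in Path(folder).glob("*.json"):
        with open(path) as f:
            for rec in json.load(f):
                t = datetime.fromisoformat(rec["timestamp"])  # assumed field
                days[t.date()].append((t, rec["heart_rate"]))  # assumed field
    return days


def plot_days(days, out="heart_rate.pdf"):
    """Write one PDF page per calendar day, heart rate over time."""
    with PdfPages(out) as pdf:
        for day in sorted(days):
            times, bpm = zip(*sorted(days[day]))
            fig, ax = plt.subplots(figsize=(10, 4))
            ax.plot(times, bpm, linewidth=0.8)
            ax.set_title(f"Heart rate {day}")
            ax.set_xlabel("Time")
            ax.set_ylabel("BPM")
            pdf.savefig(fig)
            plt.close(fig)


if __name__ == "__main__":
    plot_days(load_samples("watch_data"))  # hypothetical folder name

The actual 190-line script did considerably more (formatting, edge cases), but the skeleton is the same, and that is exactly the kind of boilerplate the AI is good at producing.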
Yes, I have seen an AI hallucinate. I have even had an argument with one where it refused to admit it was wrong even after I pointed out the error with a proof. And yes, I can write a prompt that will return a wrong answer. But I use AI actively and it is very helpful in most cases. Just do not run a script or command on a production system without a backup, and do not run things when you do not understand how they are supposed to work. Take everything it produces with a healthy dose of skepticism; you should do that anyway, even when the answer comes from a person.
I have friends who are professional coders, and their big-name software house employer pays for a private Claude instance for them to use at work.