The cost is not just tokens: you need an actual human contributor looking into the issue, prompting, checking output, validating, deploying, and so on. That makes the actual AI ROI difficult to compute. If $300K didn't matter without AI, it probably still doesn't matter with AI.
Crypto didn't "win": the technology is there, but people are mostly gambling or doing shady stuff. Shall I mention NFTs? It hasn't changed the life of the average Joe, nor of businesses. It's a niche.
Many people are still coding without AI and doing perfectly fine. When you design serious things, coding is not where most of the time is spent anyway. Maybe it'll become unavoidable at some point; by then the experience will be more refined and easier to learn.
Point is, it's never too late. If you don't need to be on the cutting edge of a new tech, it may not make sense to put in the extra effort of the early birds. And if you do put in that effort, you'd better not do it for free.
Having your own morals and ethics is far from futile. Each individual should be able to question the law and object to taking part in something they don't agree with, as long as that doesn't break the law.
Killing someone is legal in certain countries for various reasons (I'm not talking about war). I'm not sure I would want to get involved in that business, for instance if I don't agree with how and why people are sentenced to death in my country.
Some people are built with low ethics. Sure, if it's not made illegal, they'll always find someone to do it. It looks like in this case it might be illegal, since the TV makers are being sued.
Assembly experts still write code that runs faster than code produced by compilers. Being slower is predictable and can be solved with better hardware, or just by waiting. That's fine for most, so we switched to easier or more portable languages. The output of the program remains the same.
Impact of having 1.7x more bugs is difficult to assess and is not solved that easily. The comparison would work if it were about optimisations: code that is 1.7x slower or 1.7x more memory hungry.
> Assembly experts still write code that runs faster than code produced by compilers.
They sometimes can, but this is no longer a guaranteed outcome. Supercompilation optimizers can often put manual assembly to shame.
> Impact of having 1.7x more bugs is difficult to assess and is not solved that easily.
Time will tell. Arguably, the number of bugs produced by AI two years ago was much higher than 1.7x. In two more years it might be only 1.2x; in four, it might be barely measurable. The trend over the next couple of years will show whether this is a viable way forward.