I think you have a good point, but I think the paradigm shift here is that people are chasing careers and money using LLM tools in a way that wasn't possible with calculators, and that accountability is enforced differently within well-paying, white-collar engineering professions.
For example, there's actual liability (legal and financial) involved in building a bridge that subsequently falls apart - not so with small bits of code. Similarly, there's a level of academic rigor involved in the certification process for structural/mechanical/etc. engineers that doesn't (and maybe can't?) exist within software engineering.
>certification process for structural/mechanical/etc. engineers that doesn't (and maybe can't?) exist within software engineering
NCEES has a PE license related to controls software. The difficulty is that most engineering work falls under the industrial exemption. It seems like the way to enforce that type of liability would be to remove the industrial exemption.
I'm not really sure what problem you're trying to point out here. There are legal standards and liability for engineering, and if someone violates them using an LLM they are held just as liable as they would be had they done the work themselves.
But the same is true for code? You are held to the same standards as if you had written it yourself, whatever those standards may be. Frequently that is nothing.
I think all of those are great, but I don't think that has much to do with AI tbh. How you get to the outcome, and the standards that outcome meets, should be all that matters.
You don’t think the black box nature of much of AI has any bearing on its use in safety-critical applications?
Just look at the diverse and haphazard way AI has been used in autonomous driving. I would argue it’s a misplacement of the “move fast and break things” mindset (in some cases at least) that has no place in public-facing safety-critical applications.
At the very least, it raises difficult questions about the adequacy of testing when the underpinnings are not very interpretable.
Ah, my understanding was that this discussion is about AI as a dev-time tool, where the output is code, which gets reviewed, merged, and deployed like any other bit of code, whether written internally or installed via a library.
Using LLMs or other ML as components in systems themselves is a whole other thing, and I agree with you wholeheartedly.
That’s a real distinction, but auto-generated code has special complications in safety-critical code review as well. It’s also not limited to ML/AI; that just adds yet another complication to good verification and validation.