Yeah, it's pretty laughable that the source they link to for those "possible definitions" right away says this:
> AI is defined in many ways and often in broad terms. The variations stem in part from whether one sees it as a discipline (e.g., a branch of computer science), a concept (e.g., computers performing tasks in ways that simulate human cognition), a set of infrastructures (e.g., the data and computational power needed to train AI systems), or the resulting applications and tools. In a broader sense, it may depend on who is defining it for whom, and who has the power to do so.
I don't see how they can possibly enforce "if it doesn't have AI, it's false advertising to say it does" when they cannot define AI. "I'll know it when I see it" is truly an irksome thorn.
Deterministic if/then statements can simulate a surprisingly large share of average human cognition, so who's to say a program composed of them is neither artificial nor intelligent? (That's hand-waving over the more mathematical fact that even the most advanced AI of today is all just branching logic in the end. It just happens to have been automatically generated through a convoluted process we call "training", resulting in complicated conditions for each binary decision.)
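To make that concrete, here's a minimal sketch of the point. The feature names and thresholds are invented, standing in for values a training process might have produced; at inference time a "learned" classifier is nothing but deterministic branches:

```python
# Hypothetical toy: a "trained" spam classifier reduced to branching logic.
# Every number below is made up for illustration, as if it came out of training.

def trained_spam_classifier(num_links: int, has_greeting: bool, caps_ratio: float) -> bool:
    if caps_ratio > 0.35:        # a threshold "learned" from data
        if num_links > 3:
            return True          # spam
        return not has_greeting
    return num_links > 7

print(trained_spam_classifier(num_links=5, has_greeting=False, caps_ratio=0.5))  # True
```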
In general I like the other bullet points, but I find it really bizarre they'd run with this one.
The principal problem being that expert systems required meticulous inputs from domain experts, codified by skilled engineers. People don't have time or startup capital for actual expertise...
And AI requires the same thing; we just call them data scientists and ML engineers. Using linear-ish algebra instead of decision trees doesn't change the fact that you need time and capital to hire experts.
The big difference is that data scientists only work on the model architecture and data sources, whereas expert systems needed people with expertise in the subject matter itself. One of the biggest changes from 'old AI' to modern ML is that we no longer try to encode human domain knowledge directly; instead we let the model extract the same patterns from data.
Yes, but there is a whole field of artificial intelligence called unsupervised learning that tries to find structure in data without pre-defined labels. At the extreme end there are no externally imposed labels at all; artificial labels are derived from empirical clusters or some orthogonal data pattern or algorithm. Unsupervised learning is much less effective and not as mature as supervised learning. In the case of LLMs, the "label" is the next word, and it's inferred from a corpus of text (usually called self-supervised learning).
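As a minimal sketch of that idea (toy data, plain-NumPy k-means; everything here is invented for illustration), you can watch "labels" fall out of clustering with no human labeling anywhere:

```python
import numpy as np

# Unsupervised learning in miniature: k-means invents its own "labels"
# (cluster ids) from structure in the data.

rng = np.random.default_rng(0)
# Two unlabeled blobs of 2-D points.
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

k = 2
centroids = X[rng.choice(len(X), k, replace=False)]
for _ in range(10):
    # Assign each point to its nearest centroid: the emergent "label".
    labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
    # Move each centroid to the mean of its assigned points.
    centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])

print(labels[:5], labels[-5:])  # cluster ids discovered without any human labeling
```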
I'd say labels (for supervised ML) are fundamentally different from rules (for expert systems), because (see the sketch after this list):
- labels are easy to decide in many cases
- rules require humans to analyze patterns in the problem space
- labels only concern each data point individually
- rules generalize over a class of data points
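To sketch the contrast (all review strings and cue words invented for illustration):

```python
# A label is a per-datapoint judgment; anyone can produce it by looking
# at one example at a time.
labeled_reviews = [
    ("Loved it, would buy again", "positive"),
    ("Broke after two days",      "negative"),
    ("Total waste of money",      "negative"),  # easy to label, hard to enumerate in rules
]

# A rule is an analyst's generalization over the whole problem space,
# and has to anticipate every case it will ever meet.
def expert_rule(review: str) -> str:
    negative_cues = ("broke", "refund", "never again")
    return "negative" if any(cue in review.lower() for cue in negative_cues) else "positive"

for text, human_label in labeled_reviews:
    print(human_label, expert_rule(text))  # the rule already misses the third case
```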
Large language models are the thing the average Joe in 2023 would most readily call AI, and at the end of the day, if you go deep enough down the 500-billion-parameter rabbit hole, it's just a "veryyyyyyy loooooong chain of if-then-elses" obtained after tens of thousands of hours of computing time over basically all of the text humans have generated across 30 years of the internet's existence. I know it's not EXACTLY that, but it could be pretty much "recreated" using this metaphorical long chain.
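The metaphor can be made literal at toy scale. This is not how a transformer actually computes anything, but it shows the kind of unrolled branch chain the comment is gesturing at; the vocabulary and branches below are obviously invented:

```python
# The "next word" metaphor compiled into explicit branches. Imagine billions
# of these, with conditions "learned" rather than hand-written.

def next_word(prev: str) -> str:
    if prev == "the":
        return "cat"
    elif prev == "cat":
        return "sat"
    elif prev == "sat":
        return "down"
    else:
        return "."

word, sentence = "the", ["the"]
while word != ".":
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # the cat sat down .
```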