“It’s an ambiguous term with many possible definitions.”
“Does the product actually use AI at all? If you think you can get away with baseless claims…”
Last I checked basic optimization techniques like simulated annealing and gradient descent - as well as a host of basic statistical tools - are standard parts of an introductory AI textbook. I’ve been on the receiving end of government agency enforcement (SEC) and it felt a lot like a shakedown. This language carries a similar intent: if we decide we don’t like you, watch out!
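For anyone who hasn't cracked one of those textbooks, here's roughly what chapter-one gradient descent amounts to - a toy sketch minimizing a made-up quadratic - which is part of why "does it use AI?" is such a slippery question:

```python
# Textbook gradient descent on f(x) = (x - 3)^2, the kind of "basic optimization
# technique" an intro AI course covers in the first weeks. (Toy function, made up here.)
def gradient(x):
    return 2 * (x - 3)   # derivative of (x - 3)^2

x, learning_rate = 0.0, 0.1
for _ in range(100):
    x -= learning_rate * gradient(x)

print(round(x, 4))  # ~3.0, the minimum
```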
Yeah, it's pretty laughable that the source they link to for those "possible definitions" right away says this:
> AI is defined in many ways and often in broad terms. The variations stem in part from whether one sees it as a discipline (e.g., a branch of computer science), a concept (e.g., computers performing tasks in ways that simulate human cognition), a set of infrastructures (e.g., the data and computational power needed to train AI systems), or the resulting applications and tools. In a broader sense, it may depend on who is defining it for whom, and who has the power to do so.
I don't see how they can possibly enforce "if it doesn't have AI, it's false advertising to say it does" when they can't even define AI. "I'll know it when I see it" is a truly irksome standard.
Deterministic if/then statements can simulate a surprising amount of average human cognition, so who's to say a program composed of them is neither artificial nor intelligent? (That's hand-waving over the more mathematical fact that even the most advanced AI of today is all just branching logic in the end. It just happens to have been automatically generated through a convoluted process we call "training", resulting in complicated conditions for each binary decision.)
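To put a toy example behind that (invented here, not from any real product): hand-written branching logic has, structurally, the same shape a trained decision tree ends up with, just with human-picked thresholds instead of machine-picked ones.

```python
# Hand-written if/then "intelligence" (toy example, thresholds made up).
# A trained decision tree has exactly the same shape; the conditions just
# come out of a fitting process instead of a person.
def approve_loan(income: float, debt_ratio: float) -> bool:
    if income > 50_000:
        if debt_ratio < 0.4:
            return True
        return False
    return income > 30_000 and debt_ratio < 0.2
```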
In general I like the other bullet points, but I find it really bizarre they'd run with this one.
The principal problem being that expert systems required meticulous inputs from domain experts, codified by skilled engineers. People don't have time or startup capital for actual expertise...
And AI requires the same thing, we just call them data scientists and ML engineers. Using linear-ish algebra instead of decision trees doesn't change the fact that you need time and capital to hire experts.
The big difference is that data scientists only work on the model architecture and data sources, whereas expert systems need people with expertise in the subject matter itself. One of the biggest changes from 'old AI' to modern ML is that we no longer try to encode human domain knowledge as much, instead getting the model to find the same patterns from data on its own.
Yes, but there is a whole field of artificial intelligence called unsupervised learning that tries to assign labels without them being pre-defined. At the extreme end there are no externally imposed or defined labels, and artificial labels are determined by empirical clusters or some orthogonal data pattern or algorithm. Unsupervised learning is much less effective and not as mature as supervised learning. In the case of LLMs, the label is the "next word", and it's inferred from a corpus of text.
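For concreteness, a minimal sketch of "labels determined by empirical clusters" (assuming scikit-learn is available; the data points are invented):

```python
import numpy as np
from sklearn.cluster import KMeans

# No pre-defined labels: the "labels" are whatever clusters emerge from the data.
# (Toy 2-D data, invented for illustration.)
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.25],   # one empirical cluster
              [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]])    # another

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: labels the algorithm invented itself
```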
I'd say labels (for supervised ML) are fundamentally different from rules (for expert systems), because (quick sketch after the list):
- labels are easy to decide in many cases
- rules require humans to analyze patterns in the problem space
- labels only concern each data point individually
- rules generalize over a class of data points
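A toy illustration of that contrast (examples invented here): a label is a per-example tag almost anyone can supply, while a rule forces someone to generalize over the whole problem space.

```python
# Labels: a human only has to tag each individual example.
labeled_emails = [
    ("win a FREE cruise now!!!", "spam"),
    ("meeting moved to 3pm",     "not spam"),
]

# Rule: a human has to analyze the domain and write something that generalizes.
def looks_like_spam(text: str) -> bool:
    return "free" in text.lower() and text.count("!") >= 3
```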
Large language models are the thing the average Joe in 2023 is most likely to call AI, and at the end of the day, if you go deep enough down the 500-billion-parameter rabbit hole, it's just a "veryyyyyyy loooooong chain of if-then-else's" obtained after tens of thousands of hours of computing time over basically all of the text humans have generated in 30 years of internet existence. I know it's not EXACTLY that, but it could pretty much be "recreated" as this metaphorical long chain.
While I don't disagree with the basic premise ("AI" as a specific, falsifiable term is hard to pin down because of how ubiquitous it has become), I do think there are specific cut-and-dry circumstances where the FTC could falsifiably prove your product does not include AI.
For example, using something like Amazon's Mechanical Turk to process data is clearly a case where your product does not use AI - which, I believe, is closer to the kind of scenario the author had in mind when writing that sentence.
On the other end of the spectrum, calling a feature of a product "AI" seems to imply some minimal level of complexity.
If, for example, a company marketed a toaster that "uses AI to toast your bread perfectly", I would expect that language to indicate something more sophisticated than an ordinary mechanical thermostat.
That would require the 'AI' to do something computers are already really good at: detect when a particular event (perfectly toasted) has been achieved via inputs monitored at the millisecond, without deviation, and then change state - from toasting to not toasting - based on detecting that event.
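Which needs nothing resembling AI: a plain threshold check in a control loop does it. A toy sketch, with a simulated sensor and a made-up doneness threshold:

```python
# A plain threshold check in a control loop, no AI required.
# (Toy sketch: the sensor readings are simulated and the threshold is made up.)
DONENESS_THRESHOLD = 0.8

def toast(sensor_readings):
    heating = True
    for reading in sensor_readings:                 # say, one reading per millisecond
        if heating and reading >= DONENESS_THRESHOLD:
            heating = False                         # "perfectly toasted" detected: stop toasting
    return heating

readings = [i / 1000 for i in range(1000)]          # browning level rising from 0.0 toward 1.0
print(toast(readings))                              # False: the heater was switched off
```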
It makes sense to protect investors from unknowingly investing in new "AI" tech that isn't really new AI tech, but why do consumers need to be protected? If a software product solves their problem equally well with deep learning or with a more basic form of computation, how is the consumer harmed by false claims of AI?
To put it another way: if you found out that ChatGPT was implemented without any machine learning and was just an elaborate creation of traditional software, would the consumer of the product have been harmed by false claims of AI?
If you buy a painting advertised as a Monet, you are similarly not harmed if it wasn’t actually painted by Monet. But people like to know what they’re buying.
Less sarcastically, info about how a thing is made helps consumers reason about what it’s capable of. The whole reason marketers misuse the term is to mislead as to what it’s capable of.
Yeah - it needs to be clear to investors whether the tech will scale as the business grows, and whether it has a good chance of improving if it's trained on a larger dataset or as ML techniques improve generally.
Consumers should care about whether a product can solve an AI-like problem that normally requires domain knowledge, not whether that's done by ML, rules-based systems, or people. (Except perhaps they may want assurance the product will continue to be able to support them as they scale.) They should also care about how the decision-making works.
I know of at least one startup that claimed to use AI (including having AI in the company name), but in actuality humans did nearly all of the work. The hope was that once they got enough customers (and had supposedly "proved the concept"), they could figure out how to use AI instead. I bet this is/was somewhat common.
I also see many (particularly "legacy") products say they're "AI-driven" or "powered by AI", when in actuality only one minor feature uses anything that counts as AI, even in the broadest sense.
> Are you exaggerating what your AI product can do?
> Are you promising that your AI product does something better than a non-AI product?
> Are you aware of the risks?
I'm guessing everyone here has come across examples of "AI" tossed onto something where either 1) ten years ago it wouldn't have been called AI, or 2) the thought of something matching a more recent interpretation of "AI" being core to the product's function is a little scary and/or feels a little unnecessary.
Maybe it is a shakedown/warning. I think that's fair. We should have better definitions so that these agencies can't overstep, and products should have a better explanation of what "AI" means in their context. Until then yeah, vague threats versus vague promises.
It sounds like you may have missed the stampede of “AI” companies coming out of the woodwork the last few months.
For every legitimate AI project, there have been a thousand “entrepreneurs” who spend 4 hours putting a webflow site on top of GPT APIs and claim they’ve built an “AI product”. There’s no limit on the amount of BS benefits they claim. They seem like the same people who just finished running the crypto scams.
It seems quite obvious to me that this cohort is the target of this statement.
> spend 4 hours putting a webflow site on top of GPT APIs
GPT _is_ AI though, no? I would think that this would count. It might instead violate "are you exaggerating what your AI product can do" or "are you aware of the risks", though.
Not all of us would agree. Some of us would take that expression only as a rhetorical simplification (shorthand for "part of the broad AI realm"), and would lean toward a concept of "AI" as "a problem solver that could do the job of a professional". That in a way excludes, e.g., "building convincing text", because it is not (or should not be) a professional task in itself - though it can surely be part of research.
Doubts can be raised about all four FTC points - plus more in the linked guidance post from E. Jillison (e.g. "do more good than harm", which is a difficult thing to measure for engines that have "relaxed sides").
>In the 2021 Appropriations Act, Congress directed the Federal Trade Commission to study and report on whether and how artificial intelligence (AI) “may be used to identify, remove, or take any other appropriate action necessary to address” a wide variety of specified “online harms.”
>We assume that Congress is less concerned with whether a given tool fits within a definition of AI than whether it uses computational technology to address a listed harm. In other words, what matters more is output and impact. Thus, some tools mentioned herein are not necessarily AI-powered. Similarly, and when appropriate, we may use terms such as automated detection tool or automated decision system, which may or may not involve actual or claimed use of AI.
I find it hard to sympathize with companies whose websites are full of AI, blockchain, and quantum trash. Honestly, idgaf if they get shaken down. If you have a product that people like, just market your product based on its features, and remove all the BS about using <insert the buzzword of the day>.
If the FTC tells OpenAI to stop mentioning AI, I would be surprised. Even if that happens, I am sure ChatGPT will remain just as popular.
There is also the high-level question of why exactly the government needs to police this. If it turns out that some Stable Diffusion frontend was actually sending the prompts to a team of Indians who happen to draw really quickly, that is no reason to get the enforcers involved.
If examined closely, the finger wagging in this post is remarkably petty. This guy was likely part of the angry crowd that didn't like Steve Jobs describing the iPhone as "magical". The standard should be "a lie that causes measurable, material harm", not "some company exaggerated in its advertising". Advertisers exaggerate; that is just something people have to live with.
The problem is that this ends with everybody calling their product magic and the word losing its original meaning; soon after it will have a meaning closer to "disappointing" or "lame".
It doesn't really matter what the standard is... What matters is that there aren't some companies who push the limits far harder than others. If there are, then the companies that push hardest against the limits of what is allowed will be at an advantage, to the detriment of the public and the American economy as a whole.
> ...the companies that push hardest against the limits of what is allowed will be at an advantage, to the detriment of the public and the American economy as a whole...
Be careful with comments like that. I would remind you that [y]our performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions. You need adequate proof for that kind of comparative claim, too, and if such proof is impossible to get, then don’t make the claim.
If you think you can get away with baseless claims that [companies using these tactics are going to be at an advantage over companies that just make good/cheap/effective products], think again.
>If it turns out that some Stable Diffusion frontend was actually sending the prompts to a team of Indians who happen to draw really quickly, that is no reason to get the enforcers involved.
Well, if the enforcing agency is the SEC, wouldn't it make a good deal of difference to the actual value of your company?
I'm sure there's a company out there that uses some linear equation in their app, which they came up with by dumping the data they had into Excel and running the linear regression "data analysis" on it.
> “Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.”
So is y=7.4x+5 an “AI” running inside our app or is it just the output from an “AI tool” FTC?
Replace x and y with matrices and wrap everything in a non-linearity. Swap the 7.4 and 5 constants for variables a and b, and set their values by taking the partial derivative of the loss - the difference between the ground-truth value and the predicted y - with respect to a and b.
String together a bunch of these "smart cells" and observe that we can process sequences of data by linking the cells together. Further observe that if we have a separate set of cells (technically it's an attention vector, not quite a group of neurons) whose loss function is with respect to each individual token in the sequences, we can "pay attention" to specific segments in the sequences.
Add a few more gates and feedback loops, scale up the number of cells to 10^12, and you basically have a state of the art chatbot. Capiche?
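To make the first couple of those steps concrete, here's a toy sketch (PyTorch, with made-up data and hyperparameters; the attention and scaling steps are left to the imagination):

```python
import torch

# Made-up ground truth generated from the original y = 7.4x + 5.
x = torch.linspace(-2, 2, 200).unsqueeze(1)
y_true = 7.4 * x + 5

# Swap the constants 7.4 and 5 for learnable variables a and b...
a = torch.randn(1, requires_grad=True)
b = torch.randn(1, requires_grad=True)

optimizer = torch.optim.SGD([a, b], lr=0.05)
for _ in range(2000):
    y_pred = a * x + b
    loss = ((y_pred - y_true) ** 2).mean()   # ...and nudge them along the gradient of the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(a.item(), b.item())  # recovers roughly 7.4 and 5

# Replace x and y with matrices and wrap it in a non-linearity, and you have one "smart cell":
layer = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Tanh())
```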
I’m just curious where the FTC would draw the line. The root commenter made a good point that they seem to make a value judgement of what AI means. We can stretch that meaning pretty far if we want :)
>Last I checked basic optimization techniques like simulated annealing and gradient descent - as well as a host of basic statistical tools - are standard parts of an introductory AI textbook.
Maybe the textbook needs to be investigated.
That's meant to sound ironic no matter which side of the issue you're on.
Given the ambiguity of the term, it would actually be better if the FTC didn't step in at all - let the term dilute itself through its own marketing, to the point where consumers don't care about it at all or actively avoid products with "AI".
That's true if you use the sci-fi definition ("machines that think as well as humans") but the technical definition is a lot broader than that. In academic terms, a sentient machine would be "strong AI" or "AGI (artificial general intelligence)"; we've had "weak AI" for decades.
You can't use simulated annealing or gradient descent in your product and claim that you have built something intelligent. That would be laughable and would validate this kind of messaging from the government.
AI is indeed a very ambiguous and subjectively defined term. In my own personal, subjective opinion, anything that does not have a survival instinct is not remotely intelligent. By that definition, unicellular organisms are more intelligent than a Tesla self-driving vehicle.
A person can certainly claim the product uses "AI". The currently used definition of AI might be absurd, but you can't say such a person is lying or deceiving.
Bayesian regression is technically analytical too, I suppose. It really exposes the blurriness of even a vague term like "machine learning", let alone "AI".
Bayesian regression is often not really considered AI, unless it's incorporated in a more complicated pipeline (e.g. Bayesian optimization). Same goes for linear regression, then: alone it is just a model.
And there are many more things like this. Back in the day, "expert systems" were AI. For any given piece of software, it will meet some definition of AI from some time period.
Also video game AIs. I like to use those as a quick test of a definition of "AI", because video game AIs span a very wide range of levels of sophistication as well as algorithms, many of which look like deterministic decision trees, and most of which don't use any ML or even regressions.
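For anyone who hasn't looked under the hood, a lot of classic game "AI" really is just hand-authored branching like this (toy NPC, numbers invented for illustration):

```python
# Classic game "AI": a hand-written decision tree / state machine, no ML anywhere.
def npc_decide(distance_to_player: float, health: float) -> str:
    if health < 0.2:
        return "flee"
    if distance_to_player < 2.0:
        return "attack"
    if distance_to_player < 10.0:
        return "chase"
    return "patrol"

print(npc_decide(distance_to_player=5.0, health=0.9))  # "chase"
```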
Arguably that's a different overload of the term "AI" from the way it's used in business, but I think it's a good reminder that AI as a field has a long history that developed separately from ML and data science.