What I think we are starting to see is the beginning of widespread computer-based discovery through what are essentially machine learning techniques. I think true AI is pretty far off, but using all the computing power we have to discover new materials and create useful things isn't very far off. I wonder what will happen when we can eventually tell computers to design us a better laptop, bike, phone, or lamp.
The title of the press release is misleading; there was no non-trivial machine learning/AI involved. "Computers Create Recipe for" translates to: the researchers picked a class of materials, ran DFT (density functional theory) simulations, the usual way to simulate this sort of thing, for all combinations of elements in that class, and fabricated the ones that were predicted to have interesting properties.
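In spirit, that kind of exhaustive screen is just a nested loop over element choices with a per-candidate property calculation and a filter. Here is a minimal, runnable sketch; the element pools and the scoring function are made up (a real screen would call a DFT code such as VASP or Quantum ESPRESSO for each candidate, not a toy formula):

```python
# Hypothetical element pools for a ternary compound class (A-B-X).
elements_A = ["Fe", "Co", "Ni"]
elements_B = ["Al", "Ga", "In"]
elements_X = ["Si", "Ge", "Sn"]

def predicted_property(a, b, x):
    """Stand-in for a per-candidate DFT calculation.
    Returns a deterministic dummy score in [0, 1) so the loop runs."""
    return (sum(map(ord, a + b + x)) % 100) / 100.0

# Enumerate every combination in the class...
candidates = [
    (a, b, x)
    for a in elements_A
    for b in elements_B
    for x in elements_X
]

# ...and keep only those predicted to have interesting properties.
promising = [c for c in candidates if predicted_property(*c) > 0.8]

print(len(candidates))  # 27 combinations screened
```

The "discovery" is entirely in the brute-force enumeration plus physics-based prediction; no model learns anything here.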
The regression mentioned in the press release was only used to predict one property (the Curie temperature) of the materials based on experimental data for similar materials.
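For concreteness, fitting one property from measurements on similar materials can be as simple as an ordinary least-squares fit. The feature and the numbers below are invented purely for illustration (the actual model was presumably fit on real experimental data with real descriptors):

```python
import numpy as np

# Made-up (descriptor, Curie temperature) pairs standing in for
# experimental data on similar compounds. The single feature here
# is hypothetical (say, average magnetic moment per atom).
X = np.array([[1.0], [1.5], [2.0], [2.5], [3.0]])
y = np.array([300.0, 450.0, 600.0, 750.0, 900.0])  # Kelvin

# Ordinary least squares via numpy's lstsq, with an intercept column.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
slope, intercept = coef

# Predict the Curie temperature of a new candidate material.
t_curie = slope * 1.8 + intercept
print(round(t_curie))  # 540
```

That is the extent of the "learning" in the paper: a regression for one scalar property, not a system that understands materials.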
It's still a really impressive piece of work, just nothing to do with AI.
I think we need to rename AI. It's not really intelligent. It probably won't be for a long while. The machines don't know, in an existential sense, what they are looking at or making. We need to call it Artificial Insight.
They spot facts in byte streams that we don't see. We can then contextualize the info into another part of the domain, or drive it deeper.
From what I understand, we already have; that's why you usually see it called "machine learning" in research and academia, and "artificial intelligence" everywhere else.
That isn't to say there's no crossover in both directions, but usually if you're being serious about the subject, you call it ML; if you're trying to hype it or build interest, you call it AI.
Note that this only goes back so far; prior to the early 2000s or so, the terms were used more or less interchangeably. At one point, the term "machine intelligence" was used, then died out, but I've seen it used again recently.
Ask humans to tell you what "cat" means, and you'll receive as many answers as there are respondents. Some will derive from science, some from common experience; some will describe the relation of cats to their environment, others will talk about personal emotional connections with particular cats.
Ask a convolutional neural network what "cat" means, and the best you can get is a probability assigned to a grid of pixels. It's not intelligence, just an encoding of facts provided by an actual intelligence.
No, you'll get the same kind of answer. It's not like one of the neural networks will write me a poem in response, on its own initiative. The form of the answer was decided by the human intelligence that created the neural net encoding.
The form of the human's answer was decided by the genetic code that led to the formation of the brain, and by the experiences the brain was exposed to up to the moment of the question. The brain is more complex by many orders of magnitude than your garden-variety artificial neural network, so it's only to be expected that the range of possible answers is also broader.
Because they do tasks that people think require intelligence. It's like calling both a water mill and a fusion reactor "devices that generate energy."