I know I am going to be in the minority, but I don't understand why we can't let Python be Python. Static typing is great, and there are already other statically typed languages for all your needs. Why not use them?
Well, at least it doesn't create two incompatible Pythons like async and (I assume) free threading.
Because Python has a lot of things it's great at (numeric stuff, ML, computer vision, scripting), and with types you can actually rely on it to work. It's the best of both worlds.
I sometimes felt that Python was actually rather strongly typed in many places. So being able to track what type something is would often have been useful, instead of waiting for it to crash with some error.
Like back in 2.7, the difference between a byte array and a string... Offloading such cases to a checker is mentally useful.
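As a rough illustration of what I mean (just a sketch, assuming a checker like mypy and made-up function names):

```python
# Hypothetical example: with annotations, a checker such as mypy flags the
# bytes/str mix-up statically instead of letting it blow up at runtime.
def checksum(payload: bytes) -> int:
    # Sums raw byte values; only meaningful for bytes, not str.
    return sum(payload) % 256

def handle_message(text: str) -> int:
    # checksum(text) would be rejected by the checker: str is not bytes.
    # The fix is to encode explicitly:
    return checksum(text.encode("utf-8"))
```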
This article seems to fall straight into the trap it aims to warn us about. All this talk about "true" understanding, embodiment, etc. is needless anthropomorphizing.
A much better framework for thinking about intelligence is simply as the ability to make predictions about the world (including conditional ones like "what will happen if we take this action"). Whether it's achieved through "true understanding" (however you define it; I personally doubt you can) or "mimicking" bears no relevance for most of the questions about the impact of AI we are trying to answer.
It matters if your civilizational system is built on assigning rights or responsibilities to things because they have consciousness or "interiority." Intelligence fits here just as well.
Currently many of our legal systems are set up this way, if in a fairly arbitrary fashion. Consider for example how sentience is used as a metric for whether an animal ought to receive additional rights. Or how murder (which requires deliberate, conscious thought) is punished more harshly than manslaughter (which can be accidental or careless).
If we just treat intelligence as a descriptive quality and apply it to LLMs, we quickly realize the absurdity of saying a chatbot is somehow equivalent, consciously, to a human being. At least, to me it seems absurd. And it indicates the flaws of grafting human consciousness onto machines without analyzing why.
"Making predictions about the world" is a reductive and childish way to describe intelligence in humans. Did David Lynch make Mulholland Drive because he predicted it would be a good movie?
The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI.
> Did David Lynch make Mulholland Drive because he predicted it would be a good movie?
He made it because he predicted that it would have some effects enjoyable to him. Without knowing David Lynch personally, I can assume that he made it because he predicted other people would like it. Although, of course, it might have been some other goal. But unless he was completely unlike anyone I've ever met, it's safe to assume that before he started he had a picture of a world with Mulholland Drive existing in it that is somehow better than the current world without it. He might or might not have been aware of that, though.
Anyway, that's too much analysis of Mr. Lynch. The implicit question is how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive. And I maintain that how similar AI is to human intelligence, or how much "true understanding" it has, is completely irrelevant to answering that question.
> how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive
As it stands, AI is a tool and requires artists/individuals to initiate a process. How many AI-made artifacts do you know of that enjoy the same cultural relevance as their human-made counterparts? Novels, music, movies, shows, games... anything?
You're arguing that the type of film camera plays some part in the significant identity that makes Mulholland Drive a work of art, and I'd disagree. While artists/individuals might gain cultural recognition, the tool on its own rarely will. A tool of choice can be an inspiration for a work and gain a certain significance (e.g. the Honda CB77 Super Hawk[0]), but it seems that people always strive to look for the human individual behind any process, as it is generally accepted that the complete body of works tells a different story than any one artifact ever can.
Marcel Duchamp's Readymade[1] (and the mere choice of the artist) gave impetus to this cultural shift more than a century ago, and I see similarities in economic and scientific efforts as well. Apple isn't Apple without the influence of a "Steve Jobs" or a "Jony Ive" - people are interested in the individuals behind companies and institutions, while at the same time they tend to underestimate the number of individuals it takes to make any work an artifact - but that's a different topic.
If some future form of AI transcends into a sentient entity that isn't a plain tool anymore, I'd guess (in stark contrast to popular perception) that we'll all lose interest rather quickly.
> unless he was completely unlike anyone I've ever met,
I mean ... he is David Lynch.
We seem to be defining "predicted" to mean "any vision or idea I have of the future". Hopefully film directors have _some_ idea of what their film should look like, but that seems distinct from what they expect it will actually end up being.
Intelligence has been a poorly defined moving goal post for as long as AI research has been around.
Originally they thought: chess takes intelligence, so if computers can play chess, they must be intelligent. Eventually they could, and later even better than humans, but it's a very narrow aspect of intelligence.
Struggling to define what we mean by intelligence has always been part of AI research. Except when researchers stopped worrying about intelligence and started focusing on more well-defined tasks, like chess, translation, image recognition, driving, etc.
I don't know if we'll ever reach AGI, but on the way we'll discover a lot more about what we mean by intelligence.
If you look at my comment history you will see that I don't think LLMs are nearly as intelligent as rats or pigeons. Rats and pigeons have an intuitive understanding of quantity and LLMs do not.
I don't know what "the lowest form of intelligence" is, nobody has a clue what cognition means in lampreys and hagfish.
I'm not sure what that gets you. I think most people would suggest that it appears to be a sliding scale: humans, dolphins/crows, ants, etc. What does that get us?
Well yes, any creation tries to anticipate some reaction, be it the audience's, the environment's, or only the creator's own.
A prediction is just a reaction to a present state, which fits the simplest definition of intelligence: the ability to (sense and) react to something. I like to use this definition, instead of "being able to predict", because it's more generic.
The more sophisticated (and directed) the reaction is, the more intelligent the system must be. Following this logic, even a traffic light is intelligent, at least more intelligent than a simple rock.
From that perspective, the question of why a creator produced a piece of art becomes unimportant for determining intelligence, since the simple fact that they did is already a sign of intelligence.
It may be reductive, but that doesn't make it incorrect. I would certainly agree that creating and appreciating art are highly emergent phenomena in humans (as is, for example, humour), but that doesn't mean I don't think they're rooted in fitness functions and our evolved brains' desire for approval from our tribal peer group.
Reductive arguments may not give us an immediate forward path to reproducing these emergent phenomena in artificial brains, but it's also the case that emergent phenomena are by definition impossible to predict - I don't think anyone predicted the current behaviours of LLMs for example.
I think that intelligence requires, or rather, is the development and use of a model of the problem while the problem is being solved, i.e. it involves understanding the problem. Accurate predictions, based on extrapolations made by systems trained using huge quantities of data, are not enough.
Imagine an LLM is conscious (as Anthropic wants us to believe). Imagine that LLM is made to train on far more data than its parameter count allows for. Am I hurting the LLM by causing it intense cognitive strain?
I agree that whether AI is conscious is an important question. In fact, I think it's the most important question, even above our own existential crisis. Unfortunately, it's also completely hopeless at our current level of knowledge.
I've definitely experienced mental states that were worse than being dead. I don't regret remaining alive because of all the positive experiences I've had afterwards. But if we are talking about extending suffering that's only followed by death, I don't see the point.
Wondering if anyone had success with this yet. I have several ideas for poetry and prose that I don't have the skill to pull off. I periodically plug them into new models and so far all the results have been completely unsatisfactory.
This was an era when language modeling was only considered a pretraining step. You were then supposed to fine-tune it further to get a classifier or a similar kind of specialized model.
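Roughly, the recipe looked like this (a minimal sketch in PyTorch; the encoder, pooling choice, and label count are placeholders, not any particular paper's setup):

```python
import torch
import torch.nn as nn

class LMClassifier(nn.Module):
    """Sketch of "pretrain a language model, then bolt on a task head"."""

    def __init__(self, pretrained_lm: nn.Module, hidden_dim: int, num_labels: int):
        super().__init__()
        self.encoder = pretrained_lm                   # weights come from LM pretraining
        self.head = nn.Linear(hidden_dim, num_labels)  # new, randomly initialized task head

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.encoder(token_ids)   # assumed to return (batch, seq_len, hidden_dim)
        pooled = hidden[:, -1, :]          # e.g. take the final position's hidden state
        return self.head(pooled)           # task logits, fine-tuned with cross-entropy
```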
That was in 2015, with RNN LMs, which are all much, much weaker in that blog post compared to GPT-1.
And already looking at those examples in 2015, you could maybe see the future potential. But no one was thinking that scaling up would work as effectively as it does.
2015 is also by far not the first time we had such LMs. Mikolov had done RNN LMs since 2010, and Sutskever in 2011. You might find even earlier examples of NN LMs.
(Before that, state-of-the-art was mostly N-grams.)
Thanks for posting some of the history... "You might find even earlier examples" is pretty tongue-in-cheek though. [1], expanded in 2003 into [2], has 12,466 citations, 299 by 2011 (according to Google Scholar, which seems to conflate the two versions). The abstract of [2] mentions that their "large models (with millions of parameters)" "significantly improves on state-of-the-art n-gram models, and... allows to take advantage of longer contexts." Progress between 2000 and 2017 (transformers) was slow and models barely got bigger.
And what people forget about Mikolov's word2vec (2013) was that it actually took a huge step backwards from the NNs like [1] that inspired it, removing all the hidden layers in order to be able to train fast on lots of data.
[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, 2000, NIPS, A Neural Probabilistic Language Model
[2] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin, 2003, JMLR, A Neural Probabilistic Language Model
N-gram models had been superseded by RNNs by that time. RNNs struggled with long-range dependencies, but useful n-grams were essentially capped at n=5 because of sparsity, and RNNs could do better than that.
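To make the sparsity point concrete, here's a toy sketch (made-up mini-corpus, nothing from the papers above): as n grows, nearly every n-gram occurs exactly once, so the counts stop telling you anything about unseen text.

```python
from collections import Counter

def ngram_counts(tokens, n):
    # Count every contiguous n-gram in the token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

tokens = "the cat sat on the mat and the dog sat on the rug".split()
for n in (2, 3, 5):
    counts = ngram_counts(tokens, n)
    singletons = sum(1 for c in counts.values() if c == 1)
    print(f"n={n}: {len(counts)} distinct n-grams, {singletons} seen only once")
```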
This was the first product demo I've watched in my entire life. Not because I am excited for the new tech, but because I'm anxious to know if I'm already being put out of my job. Not this time, it seems.
Interesting. This is quite contrary to my experience. Using LLMs for things outside my expertise produces crappy results, which I can only identify as such months later once my expertise expands. Meanwhile, delegating the boring parts that I know too well to agents has proved to be a huge productivity boost.
Can someone please explain why we still have chargebacks in 2025? Don't accept cards without 3DS, problem solved. If not, why not? Are some important consumer segments still stuck with cards that don't have it?
Chargebacks aren't just for stolen credit cards, but also fraudulent merchants.
If I buy a physical good with a credit card, and the merchant either never sends me anything, or sends me an empty box and ignores my emails, well, that's a use-case for chargebacks. 3DS doesn't help with that.
Also for merchants that are just too dysfunctional to do the right thing. A while back, I ordered a phone online from Best Buy, and they shipped me a different model from what I ordered. I contacted their customer service, who told me to mail the phone back for a refund. I did so, and then they mailed it right back to me with a note saying they couldn't accept the return because I'd sent them a different item from the one I ordered. (No shit: that's why I needed to return it!) They didn't have fraudulent intent, I'm sure; one hand just didn't know what the other was doing. A chargeback resolved that situation, and I'm very glad I had the option.
Chargeback is not just for when your card is stolen. It’s also for e.g. when you never received your order, or your order is substantially different from what’s promised. It’s basically a last resort customer service option.