I know I am going to be in the minority, but I don't understand why we can't let Python be Python. Static typing is great, and there are already other statically typed languages for all your needs. Why not use them?

Well, at least it doesn't create two incompatible Pythons the way async did and (I assume) free threading will.


I used to be of the same opinion, but after giving type hints a real try, I changed my mind.

You should not see type hints as real, hard types, but more as a kind of documentation that helps your linter and type checker catch mistakes.
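
A minimal sketch of what I mean (hypothetical function; the check comes from mypy or your editor, not from Python itself):

    def describe(user_id: int) -> str:
        return f"user #{user_id}"

    describe("42")  # runs fine at runtime, but the checker flags the str-vs-int mismatch

Python happily executes the bad call; the whole value of the hint is that the linter/checker catches it before it ships.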


Because Python has a lot of things it's great at (numeric stuff, ML, computer vision, scripting), and with types you can actually rely on it to work. It's the best of both worlds.


I sometimes felt that Python was already fairly strict about types in many places. So being able to track what type something is would often have been useful, instead of waiting for it to crash with some error.

Like the byte-array vs. string distinction back in 2.7... Offloading such cases to a checker is mentally useful.
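
A rough sketch of that kind of mix-up being caught statically (hypothetical file name and function, assuming a checker like mypy):

    def parse_header(line: str) -> str:
        return line.split(":", 1)[0]

    with open("request.bin", "rb") as f:
        parse_header(f.readline())  # checker flags: bytes passed where str is expected

At runtime this only blows up when split() is actually reached; with the annotation the mistake is visible before you run anything.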


> Why not use them?

Because you can now use typing WITH the entire Python ecosystem.


This article seems to fall straight into the trap it aims to warn us about. All this talk about "true" understanding, embodiment, etc. is needless anthropomorphizing.

A much better framework for thinking about intelligence is simply as the ability to make predictions about the world (including conditional ones like "what will happen if we take this action"). Whether it's achieved through "true understanding" (however you define it; I personally doubt you can) or "mimicking" has no bearing on most of the questions about the impact of AI we are trying to answer.


It matters if your civilizational system is built on assigning rights or responsibilities to things because they have consciousness or "interiority." Intelligence fits here just as well.

Currently many of our legal systems are set up this way, if in a fairly arbitrary fashion. Consider for example how sentience is used as a metric for whether an animal ought to receive additional rights. Or how murder (which requires deliberate, conscious thought) is punished more harshly than manslaughter (which can be accidental or careless).

If we just treat intelligence as a descriptive quality and apply it to LLMs, we quickly realize the absurdity of saying a chatbot is somehow equivalent, consciously, to a human being. At least, to me it seems absurd. And it indicates the flaws of grafting human consciousness onto machines without analyzing why.


"Making predictions about the world" is a reductive and childish way to describe intelligence in humans. Did David Lynch make Mulholland Drive because he predicted it would be a good movie?

The most depressing thing about AI summers is watching tech people cynically try to define intelligence downwards to excuse failures in current AI.


> Did David Lynch make Mulholland Drive because he predicted it would be a good movie?

He made it because he predicted it would have some effects enjoyable to him. Without knowing David Lynch personally, I can assume he made it because he predicted other people would like it. Although of course, it might have been some other goal. But unless he was completely unlike anyone I've ever met, it's safe to assume that before he started he had a picture of a world with Mulholland Drive existing in it that is somehow better than the current world without it. He might or might not have been aware of it, though.

Anyway, that's too much analysis of Mr. Lynch. The implicit question is how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive. And I maintain that how similar AI is to human intelligence, or how much "true understanding" it has, is completely irrelevant to answering that question.


> how soon an AI will be able to make a movie that you, AIPedant, will enjoy as much as you've enjoyed Mulholland Drive

As it stands, AI is a tool and requires artists/individuals to initiate a process. How many AI-made artifacts do you know that enjoy the same cultural relevance as their human-made counterparts? Novels, music, movies, shows, games... anything?

You're arguing that the type of film camera plays some part in the significant identity that makes Mulholland Drive a work of art, and I'd disagree. While artists/individuals might gain cultural recognition, the tool on its own rarely will. A tool of choice can be an inspiration for a work and gain a certain significance (e.g. the Honda CB77 Super Hawk[0]), but it seems that people always strive to look for the human individual behind any process, as it is generally accepted that the complete body of works tells a different story than any one artifact ever can.

Marcel Duchamp's Readymade[1] (and the mere choice of the artist) gave impetus to this cultural shift more than a century ago, and I see similarities in economic and scientific efforts as well. Apple isn't Apple without the influence of a "Steve Jobs" or a "Jony Ive" - people are interested in the individuals behind companies and institutions, while at the same time they tend to underestimate the number of individuals it takes to make any work an artifact - but that's a different topic.

If some future form of AI will transcend into a sentient object that isn't a plain tool anymore, I'd guess (in stark contrast to popular perception) we'll all lose interest rather quickly.

[0]: https://en.wikipedia.org/wiki/Honda_CB77#Zen_and_the_Art_of_...

[1]: https://en.wikipedia.org/wiki/Fountain_(Duchamp)


> unless he was completely unlike anyone I've ever met,

I mean ... he is David Lynch.

We seem to be defining "predicted" to mean "any vision or idea I have of the future". Hopefully film directors have _some_ idea of what their film should look like, but that seems distinct from predicting what it will actually end up being.


I look at it the complete opposite way: humans keep defining intelligence upwards to make sure they can still see themselves as better than a computer.

It's clear that humans consider humans as intelligent. Is a monkey intelligent? A dolphin? A crow? An ant?

So I ask you, what is the lowest form of intelligence to you?

(I'm also a huge David Lynch fan by the way :D)


Intelligence has been a poorly defined moving goal post for as long as AI research has been around.

Originally they thought: chess takes intelligence, so if computers can play chess, they must be intelligent. Eventually they could, and later even better than humans, but it's a very narrow aspect of intelligence.

Struggling to define what we mean by intelligence has always been part of AI research. Except when researchers stopped worrying about intelligence and started focusing on more well-defined tasks, like chess, translation, image recognition, driving, etc.

I don't know if we'll ever reach AGI, but on the way we'll discover a lot more about what we mean by intelligence.


If you look at my comment history you will see that I don't think LLMs are nearly as intelligent as rats or pigeons. Rats and pigeons have an intuitive understanding of quantity and LLMs do not.

I don't know what "the lowest form of intelligence" is, nobody has a clue what cognition means in lampreys and hagfish.


I'm not sure what that gets you. I think most people would suggest that it appears to be a sliding scale. Humans, dolphins / crows, ants, etc. What does that get us?


Well, is an LLM more intelligent than an ant?


I would say yes. But is it more intelligent than an ant hill?


Well yes, any creation tries to anticipate some reaction, be it the audience's, the environment's, or only the creator's own.

A prediction is just a reaction to a present state, which is the simplest definition of intelligence: the ability to (sense and) react to something. I like to use this definition instead of "being able to predict" because it's more generic.

The more sophisticated (and directed) the reaction is, the more intelligent the system must be. Following this logic, even a traffic light is intelligent, at least more intelligent than a simple rock.

From that perspective, the question of why a creator produced a piece of art becomes unimportant for determining intelligence, since the simple fact that he did is a sign of intelligence already.


"David Lynch made Mullholland Drive because he was intelligent" is also absurd.


But "An intelligent creature made Mullholland Drive" is not


It may be reductive, but that doesn't make it incorrect. I would certainly agree that creating and appreciating art are highly emergent phenomena in humans (as is, for example, humour), but that doesn't mean I don't think they're rooted in fitness functions and our evolved brains' desire for approval from our tribal peer group.

Reductive arguments may not give us an immediate forward path to reproducing these emergent phenomena in artificial brains, but it's also the case that emergent phenomena are by definition impossible to predict - I don't think anyone predicted the current behaviours of LLMs for example.


> "Making predictions about the world" is a reductive and childish way to describe intelligence in humans.

It also happens to be a leading theory in neuroscience: https://news.ycombinator.com/item?id=45058056


How would you define intelligence? Surely not by the ability to make a critically acclaimed movie, right?


He was trying to predict what movie would create the desired reaction from his own brain. That's how creativity works; it's just prediction.


I think that intelligence requires, or rather, is the development and use of a model of the problem while the problem is being solved, i.e. it involves understanding the problem. Accurate predictions, based on extrapolations made by systems trained using huge quantities of data, are not enough.


From a practical standpoint, all the talk of "true understanding", "sentience" and the like is pointless.

The only real and measurable thing is performance. And the performance of AI systems only goes up.


But it only goes up in the sense that it's getting closer to a horizontal asymptote. Which is not really that good.


It does, but the limit isn't "human performance". AI isn't bounded by human performance. The limit is the saturation of the benchmark in question.

Which is solvable with better benchmarks.


Imagine an LLM is conscious (as Anthropic wants us to believe). Imagine the LLM is made to train on far more data than its parameter count allows for. Am I hurting the LLM by causing it intense cognitive strain?


I agree that whether AI is conscious is an important question. In fact, I think it's the most important question, even above our own existential crisis. Unfortunately, it's also completely hopeless at our current level of knowledge.


Why would that hurt?


You are made to memorize an entire encyclopedia, but you have a biological limit of only 1,000 facts.


Predict and create, that's all that matters.


Have you been engulfed in flames before?

I've definitely experienced mental states that were worse than being dead. I don't regret remaining alive because of all the positive experiences I've had afterwards. But if we are talking about extending suffering that's only followed by death, I don't see the point.


Not yet fortunately. All life is some amount of suffering and then you die. I agree I’m probably not correct about how I would feel in the moment.


The path is in the opposite direction altogether.


Wondering if anyone had success with this yet. I have several ideas for poetry and prose that I don't have the skill to pull off. I periodically plug them into new models and so far all the results have been completely unsatisfactory.


How does one look at GPT-1 output and think "this has potential"? You could easily produce more interesting output with a Markov chain at the time.


This was an era when language modeling was only considered a pretraining step. You were then supposed to fine-tune it further to get a classifier or a similar type of specialized model.


At the time, getting complete sentences was extremely difficult! N-gram models were essentially the best we had.


No, it was not difficult at all. I really wonder why they have such a bad example here for GPT-1.

See for example this popular blog post: https://karpathy.github.io/2015/05/21/rnn-effectiveness/

That was in 2015, with RNN LMs, which in that blog post are all much, much weaker than GPT-1.

And already looking at those examples in 2015, you could maybe see the future potential. But no one was thinking that scaling up would work as effectively as it does.

2015 was also far from the first time we had such LMs. Mikolov had been doing RNN LMs since 2010, and Sutskever in 2011. You might find even earlier examples of NN LMs.

(Before that, state-of-the-art was mostly N-grams.)


Thanks for posting some of the history... "You might find even earlier examples" is pretty tongue-in-cheek though. [1], expanded in 2003 into [2], has 12466 citations, 299 by 2011 (according to Google Scholar which seems to conflate the two versions). The abstract [2] mentions that their "large models (with millions of parameters)" "significantly improves on state-of-the-art n-gram models, and... allows to take advantage of longer contexts." Progress between 2000 and 2017 (transformers) was slow and models barely got bigger.

And what people forget about Mikolov's word2vec (2013) was that it actually took a huge step backwards from the NNs like [1] that inspired it, removing all the hidden layers in order to be able to train fast on lots of data.

[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, 2000, NIPS, A Neural Probabilistic Language Model

[2] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin, 2003, JMLR, A Neural Probabilistic Language Model, https://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf


N-gram models had been superseded by RNNs by that time. RNNs struggled with long-range dependencies, but useful n-grams were essentially capped at n=5 because of sparsity, and RNNs could do better than that.
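
A toy illustration of that sparsity point (made-up corpus, just counting): the space of possible n-grams grows roughly as vocabulary size to the power n, so at n=5 almost every n-gram in new text was never seen in training.

    from collections import Counter

    tokens = "the cat sat on the mat and the cat ate the fish".split()

    def ngrams(seq, n):
        return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

    for n in (2, 5):
        counts = Counter(ngrams(tokens, n))
        # at n=2 some n-grams repeat; at n=5 nearly all are unique, so counts stop being informative
        print(n, len(counts), counts.most_common(1))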


This was the first product demo I've watched in my entire life. Not because I am excited for the new tech, but because I'm anxious to know if I'm already being put out of my job. Not this time, it seems.


Interesting. This is quite contrary to my experience. Using LLMs for things outside my expertise produces crappy results which I can only identify as such months later, once my expertise expands. Meanwhile, delegating the boring parts that I know too well to agents has proved to be a huge productivity boost.


Can someone please explain why we still have chargebacks in 2025? Don't accept cards without 3DS, problem solved. If not, why not? Are some important consumer segments still stuck with cards that don't have it?


Chargebacks aren't just for stolen credit cards, but also fraudulent merchants.

If I buy a physical good with a credit card, and the merchant either never sends me anything, or sends me an empty box and ignores my emails, well, that's a use-case for chargebacks. 3DS doesn't help with that.


Also for merchants that are just too dysfunctional to do the right thing. A while back, I ordered a phone online from Best Buy, and they shipped me a different model from what I ordered. I contacted their customer service, who told me to mail the phone back for a refund. I did so, and then they mailed it right back to me with a note saying they couldn't accept the return because I'd sent them a different item from the one I ordered. (No shit: that's why I needed to return it!) They didn't have fraudulent intent, I'm sure; one hand just didn't know what the other was doing. A chargeback resolved that situation, and I'm very glad I had the option.


Chargeback is not just for when your card is stolen. It’s also for e.g. when you never received your order, or your order is substantially different from what’s promised. It’s basically a last resort customer service option.


You are free to believe whatever you choose, of course, and state it here, but the sentence you cited does nothing to support your claim.

