Hacker News | jstgunderscore's comments

Some phrase I heard a while back, I think from Bill Hader: "People are ALWAYS right when telling you something doesn't work. They're rarely right when telling you how to fix it."


Exactly. Complaints are a "Yes, And" situation. You have to improvise off of their cue, but have fun with it or soon nobody will. Each loud user will try to pull the app in a different direction, and it'll end up incoherent if you don't stick to your own voice.

It's really the questions where I delight the users. They know something is wrong but they cannot articulate it, so they assume it's them and ask how to do something. That tells you the feature is missing or not discoverable. If four people ask you the same question, it's a bug, not a feature request, and you should fix it.


When the camera was invented, no one was claiming a photo was a painting.

The camera replaced painters for the "I want to capture a static image" market but not the "I want art as expressed by a painter" market. While tragic for a lot of painters at the time, it does seem like the cost of progress.

In writing, there is no "objective capture" like with cameras; there's no tech that can take a picture of your conscious thoughts and translate it to words on a page in a reproducible way. So there is only a single arena of "written expression" that LLMs and traditional writers are competing in. And while there is a strong desire for "art as expressed by the [human] writer," the product itself is much harder to distinguish from the new tech (LLM writing) than a photo is from a painting. And the low barrier to entry into this desired market with LLM writing is driving its dilution.

The analogy would work if instead they invented a magical camera that could turn any scene into a painting in any art style reliably, and hide the painting's origins.


As part of the 300 club, I have to say that there was no underwear involved in my cohort, just shoes (a requirement).


I am jealous -- can you expand on what I read about having to be careful about breathing as you ran outside for fear of freezing your lungs?


This discussion always seems to revolve around art requiring one or more of the following factors:

- Intention to create

- Effort in creation

- Transformation of the medium/canvas

- Originality

- Meaning as interpreted by the artist

- Meaning/influence to the consumer

- Cultural influence of the art

Without an extensive discussion to define all of these terms, I think it's fair to say that there are many human-created works with little to none of many of these factors, yet a lot of people would still classify them as art. But if an AI creates something that satisfies just as many or more of these factors, people seem far more hesitant to call it art.

I'm neither pro nor anti "AI can create art," as defining what qualifies as art has been a futile exercise since forever. I feel similarly about the AI intelligence and consciousness questions; if we can't define it for ourselves, how can we hope to define it for another entity? I think the discussions can be productive in fleshing out your viewpoint, but otherwise they're fruitless.

Ultimately I think humans are highly functional biological machines that have created something that can mimic us convincingly, and we should just come to terms with that without getting bogged down in debates over definitions.


Yes, but I actually (tend to) believe Hinton and the other CS scientists, so the terms aren't even the main issue. This author's typical mainstream quibbling over terms consists of anthropocentric worries in the face of what is really a scientific crisis; it smacks of rearranging the deck chairs while the Titanic is about to hit the iceberg that is the AI/AGI technological revolution/singularity.


at times, we must accept the inherent humanity in others' creations when humanity is, in fact, involved.

we must not accept the charade of humanity in machine-generated regurgitations of the utmost average.


It's not such a weird take from a perspective of someone who's never had quite enough money. If you've never had enough, the dream is having more than enough, but working for much much more than enough sounds like a waste of time and/or greed. Also, it's hard to imagine pursuing endeavors out of passion because you've never had that luxury.


Is wage suppression any time a worker makes less than the absolute maximum an employer is willing to pay? That would include just about everyone earning a paycheck.

Based on my cursory knowledge of the term, wage suppression here would be if FB manipulated external factors in the AI labor market so that their hire would accept a "lowball" offer.


Firstly, I think a lot of commenters here should ask themselves "Do you believe that ANY machine could EVER be intelligent?"

Unless you believe in the magic sauce of a supernatural soul/mind/etc., our brains function as deterministic biological machines. There's essentially nothing that separates the processing and memory potential of silicon from that of neurons. And if the building blocks can be made analogous, then all the same emergent properties are possible. There's no reason to believe a circuit couldn't be made to behave in exactly the same way as a human brain. I'm not saying that's where LLMs are; only that it is theoretically possible. So if you imagine such a machine and you deem that it is not intelligent, you have reserved intelligence as an exclusively human trait, and this entire discussion is meaningless.

Secondly, although I'm not in either pro or anti LLM-intelligence camp, I find a lot of the arguments against machine intelligence disingenuous and/or unbalanced.

For instance the "Can't process information it's not familiar with" argument. Another commenter stated the case of scientific papers that it doesn't have any reference for, that it may hallucinate a garbage interpretation of the paper. Not surprising, but guess what, a human would do the same thing if they were forced! Imagine holding a gun to someone's head and telling them to explain a concept or system they've never heard of. That's essentially what we're doing with LLMs; obviously we don't need to threaten, because we haven't given them agency to say no.

Another example is "Can't be novel, unique, or create something completely new." First of all, that's difficult to prove, but okay, let's take it as given that an LLM can't be novel. Can you prove that a human can? We make all these assumptions about how intelligent and creative we are as humans, and how original our thoughts can be... but how original are they, and can we prove it? How do you know your original thought, or Beethoven's 5th, or the fast inverse square root trick was completely separate from any prior influences? Or... was that "original thought" the conglomeration of a thousand smaller inputs and data points that you trained on, that became part of your brain's subconscious processing system, and came together in a synthesis that looks like brilliance?
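For anyone who hasn't seen it, the fast inverse square root mentioned above is the famous bit-level hack popularized by Quake III: reinterpret a float's bits as an integer, subtract from a magic constant, and refine with one Newton-Raphson step. A rough sketch, transliterated to Python with `struct` to do the bit reinterpretation (the original is C, and this is illustrative, not production code):

```python
import struct

def fast_inverse_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) using the classic Quake III bit hack."""
    # Reinterpret the 32-bit float's bits as an unsigned integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # The magic constant; the shift roughly halves and negates the exponent.
    i = 0x5F3759DF - (i >> 1)
    # Reinterpret the integer bits back as a float: a crude first guess.
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton-Raphson iteration sharpens the guess to ~0.2% error.
    y = y * (1.5 - 0.5 * x * y * y)
    return y
```

Whether that counts as "novel" is exactly the commenter's point: the constant and the trick emerged from prior bit-twiddling folklore, not from nothing.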

Finally, whenever this discussion comes up with friends, I ask them to think of the least intelligent person they know. Then imagine how many such people there are in the world (likely millions). Could you imagine any conceivable test, of any length or depth, that would designate all of those humans as intelligent and all the LLMs as not? I certainly can't. I highly doubt anything approaching 100% accuracy is possible at this point.

Ultimately I think we should ditch both the intelligence and consciousness questions. We can't define them in ourselves, we certainly can't define them in another entity. Let's just come to terms with the fact we're highly functioning biological machines who are both scared and excited to have created something so similar to us.

