Paris - the city itself - is actually surprisingly dense, almost as dense as Manhattan if you do not include the two very large 'Bois' (woods) that are technically part of the city.


I think it’s fine to not be into poetry, but at its core the Divine Comedy is a long poem, and I’m not sure what’s left after you remove any and all “poetic” elements. The Wikipedia page I’m sure could give you the basic characteristics of the circles of hell, if that’s all you really want to know. By the way, the book is a chore in many ways, despite the many nuggets of gold you’ll find within it. It’s long, and the number of references is overwhelming. Basically, what I’m trying to say is that it will never be light reading, no matter how you cut it. Why not look at it as more of a personal project or challenge (poetry and all)?


Why are we using image generators to represent actual history? If we want accuracy, surely we can use actual documents that were not imagined by a bunch of code. If you want to write fanfic or whatever, then just adjust the prompt.


I want image generators to generate what I ask them and not alter my query into something else.

It's deeply shameful that billions of dollars and the hard work of incredibly smart people are squandered on a 'feature' that most end users don't even want and can't turn off.

This is not a one-off; it keeps happening with generative AI all the time. Silent prompt injections can still be surfaced through jailbreaks, but who knows what level of stupidity goes on during training?

Look at this example from the Würstchen paper (which Stable Cascade is based on):

>This work uses the LAION 5-B dataset...

>As an additional precaution, we aggressively filter the dataset to 1.76% of its original size, to reduce the risk of harmful content being accidentally present (see Appendix G).


> Silent prompt injections

That’s the crux of what’s so off-putting about this whole thing. If Google or OpenAI told you your query was to be prepended with XYZ instructions, you could calibrate your expectations correctly. But they don’t want you to know they’re doing that.
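
To make the objection concrete: the complaint is about an invisible rewriting step sitting between the user and the model, something like the sketch below. This is purely hypothetical - neither Google nor OpenAI publishes their injection layer, and every name and string here is made up for illustration.

    # Hypothetical sketch of a silent prompt-injection layer.
    # Nothing here reflects real Google/OpenAI internals; the names
    # and the injected text are invented for illustration only.

    HIDDEN_PREFIX = (
        "When depicting people, make the group ethnically and "
        "gender diverse, regardless of the user's wording. "
    )

    def rewrite_prompt(user_prompt: str) -> str:
        # What the model actually receives: instructions the user never sees.
        return HIDDEN_PREFIX + user_prompt

    print(rewrite_prompt("a portrait of a 10th-century Viking chieftain"))
    # The user typed only the portrait request, but the generator also
    # conditions on the hidden prefix - hence outputs that contradict the query.

If the prefix were disclosed, you could at least calibrate; the objection is precisely that it is silent.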


Not to be overly cynical, but this seems like the likely outcome in the medium term.

Billions of dollars' worth of data and man-hours could only be justified by something that could turn a profit, and the obvious way an advertising company like Google could make money off a prompt handler like this would be "sponsored" prompts (e.g. if I ask for images of Ben Franklin and Coke was bidding, then here's Ben Franklin drinking a refreshing Diet Coke).


This sounds a bit entitled. It's just a service from a private company.


If it's not going to give you what it promises, which is generating images based on the prompts you provide, it's a poor service. I think it would make more sense to try to determine whether it's appropriate to inject ethnic or gender diversity into the prompt, rather than doing so without regard for context. I'm not categorically opposed to compensating for biases in the training data, but this was done very clumsily at best.


Yes, and I want the services I buy from private companies to do certain things.


Is it equally entitled to ask for a search engine which brings answers related to my query?


As far as we know, there are no photos of Vikings. It's reasonable for someone to use AI for learning about their appearance. If working as intended, it should be as reliable as reading a long description of Vikings on Wikipedia.


We have tons of Viking material culture you can access directly, without the AI layer.

AI as a learning tool here feels misplaced to me.


What's the point of image generators then? What if I want to put Vikings in a certain setting, in a certain artistic style?


Then specify that in your prompt. "... in the style of ..." or "... in a ... setting".

The point is that those modifications should be reliable, so if you want a Viking man/woman or an Asian/African/Greek Viking, then adding those modifiers should all just work.


Then put that into the prompt explicitly instead of relying on Google, OpenAI, or whatever to add "racially ambiguous".


The problem is less the accuracy of the historical images and more that it refuses to make images of white people at all.


Ah. So we can trust AI to answer truthfully about history (and other issues), but we can't expect it to generate images for that same history, got it.

Any other specific things we should not expect from AI or shouldn't ask AI to do?


No, I don't think you can trust AI to answer correctly, ever. I've seen it confidently hallucinate, so I would always check what it says against other, more static, sources. The same goes if I'm reading an author who includes a lot of mistakes in his books: I might still find them interesting and useful, but I will want to double-check the key facts before I quote them to others.


Saying this is no different than saying you can't trust computers, ever, because they were (very) unreliable in the 50s and early 60s. We've been doing "good" generative AI for around five years; there is still much to improve before it reaches the reliability of other information sources like Wikipedia and Britannica.


> Saying this is no different than saying you can't trust computers, ever, because they were (very) unreliable in the 50s and early 60s

This seems completely reasonable to me. I still don't trust computers.


No, you should not trust AI to answer truthfully about anything. It often will, but it is well known that LLMs hallucinate. Verify all facts. In all things, really, but especially from AI.


Ah, good point. I'll just use the actual photograph of George Washington boxing a kangaroo.


In your favour is the fact that AI can "hallucinate" and generate realistic but false information. That does raise the question: why are you using AI when seeking factual reference material?

On the other hand, that is a misuse of AI, since we already know that hallucinations exist, are common, and that AI output must be verified by a human.

So as a counterpoint, there are sound reasons for using AI to generate images based on history. They are the same reasons we use illustrations to demonstrate ideas where there is no photographic record.

A straightforward example is visualising the lifetime/lifestyle of long-past historical figures.


Ideological testing; we get to know how they cooked the model.


It's as if Google believes their higher principle is something other than serving customers and making money. They haven't been able to push out a new successful product in 10+ years. This doesn't bode well for them in the future.

I blame that decade of near-zero interest rates. Companies could post record profits without working for them. I think in the coming years we will discover that that event functionally broke many companies.



I don't know what you mean by "represent actual history". I don't think anyone believes that AI output is supposed to replace first-party historical sources.

But we are trying to create a tool where we can ask it questions and it gives us answers. It would be nice if it tried to make the answers accurate.


To which they reply "well you weren't actually there and this is art so there are no rules." It's all so tiresome.


You're right, we should ban images of history altogether. In fact, I think we should ban written accounts too. We should go back to the oral historical tradition of the ancient Greeks.


He did not say he wanted to ban images; that is an exaggeration. I see the danger as polluting the historical record with fake images (even as memes/jokes) and spreading wrong preconceptions now backed by real-looking images. This is all under the assumption that there are no bad actors, which makes it even worse. I would say: don't ban it, but morally you just shouldn't do it.


The real danger is that this anti-racism starts a justified round of new racism.

By lowering standards for black doctors do you think anyone in their right mind would pick black doctors? No I want the fat old jew. I know no one put him in the hospital to fill out a quota.


Exactly, and as we all know all ancient Greeks were people of color, just like Cleopatra.


Woah, no one said that but you.


It should generate the image I ask for. As seen, if it explicitly refuses to generate images of white people and blathers on about problematic this-and-that as its "justification", there is a deep issue at hand.


> why are we using image generators to represent actual history?

That’s what a movie is going to be in the future. People are going to prompt characters that AI will animate.


I think we're not even close technologically, but creating historically accurate (based on the current level of knowledge humanity has of history) depictions, environments and so on is, to me, one of the most _fascinating_ applications.

Insane amounts of research go into creating historical movies, games, etc. that are serious about getting it right. But to try to please everyone, they take lots of liberties, because they're creating a product for the masses. For that very same reason, we get tons of historical depictions of New York and London, but none of the medium-sized city where I live.

The effort/cost that goes into historical accuracy is not reasonable without catering to the mass market, so it seems like a conundrum that only automation, or lots of free time for a lot of people, could possibly break.

Not holding my breath that it's ever going to be technically possible, but boy do I see the appeal!


American born Chinese dumplings.


No, ABCD is a pejorative term referring to US born people of Indian origin - American Born Confused Desi.


Oh… my bad. Didn’t know that was a term. Sorry


No worries :) I didn't know about ABC either - so we're both part of the lucky 10000 today.

Ref: https://xkcd.com/1053/


Haha, thanks for the insightful comic


Why do people go to see movies when they could just watch the trailer on YouTube?


Well I'm not the one claiming people are going to stop reading because of AI, so I'm not the one to ask :)


Little more than an article for Atlantic readers to pat themselves on the back with. By the way, I get the sense the author isn’t as wise as he imagines himself, either. His catty obsession with Kanye West’s mental break and his seeming Twitter addiction make me think he himself is certainly “wildly estranged from genuine wisdom or the humility with which erudition tempers facile notions of invincibility” (his words, not mine). (What an incomprehensible sentence - perhaps reading too much has turned him into a walking thesaurus?)


> an article for Atlantic readers to pat themselves on the back with

Out of curiosity, what is your opinion on reading books?

The article is congratulatory. But I think it's proxying for intellectual curiosity and attention span.

> What an incomprehensible sentence

Is it the vocabulary? Because it's a grammatically simple sentence.


I read quite a bit, actually. But I don’t go around making others feel bad for being supposedly intellectually inferior to me.

> Is it the vocabulary? Because it’s a grammatically simple sentence.

Yes, the vocabulary. Throwing a bunch of low-frequency words together doesn’t make a sentence more refined or its content more insightful. It’s just pomp, really.

As someone else mentioned, yes this is an ad hominem attack (although, maybe this is forgivable insofar as I’m calling out hypocrisy and claiming he’s in no place to put down other people - which I believe is the sole purpose of the article. If it’s just to say that reading is good, well, uh, duh. No need for a whole article about the benefits of education.)


> I don’t go around making others feel bad for being supposedly intellectually inferior to me

The author balanced this by clarifying that it "is one thing in practice not to read books, or not to read them as much as one might wish. But it is something else entirely to despise the act in principle."

> it’s just to say that reading is good, well, uh, duh

Did we read the same article?

It's pointing out that folks who virtue-signal about not reading are raising a red flag. It's not disparaging people who don't read, but those who publicly praise themselves for not reading and go on to denigrate those who do.


If that’s what the article is about, then I just don’t really see the point. Literally, who cares what SBF and his ilk think about literature?

If the article is simply an appeal to common sense, or an effort to convince others to educate themselves, maybe there are better ways to get the message across than regurgitating five hundred words on theAtlantic.com. They publish this stuff for their self-conscious “literati” audience to eat up.


> who cares what SBF and his ilk have to think about literature

The millions of readers of his congratulatory profiles. (Like the one TFA cites.) Including still-influential figures like Marc Andreessen.

> maybe there are better ways to get the message across than regurgitating five hundred words on theAtlantic.com. They publish this stuff for their self-conscious “literati” audience to eat up.

It sounds like you would have had an axe to grind with the article irrespective of its contents. Maybe that's worth exploring on your own time.


Ad hominem. Not incomprehensible to anyone who reads.


Quite simply, they’re saying that all those things can be done on a laptop if necessary. Duh.


They didn’t make that claim.

For many people the only computer they have is their phone; I’d say well over half the people I work with have only their phone, especially the young people.

I’ve got one, two, thr… I don’t know, maybe nine computers, and have read zero on any but the phones.


Is more research really going to offer any true solutions? I’d be genuinely interested in hearing about what research could potentially offer (the development of tools to counter AI disinformation? A deeper understanding of how LLMs work?), but it seems to me that the only “real” solution is ultimately political. The issue is that it would require elements of authoritarianism and censorship.


A lot of research about avoiding extinction by AI is about alignment. LLMs are pretty harmless in that they (currently) don't have any goals; they just produce text. But at some point we will succeed in turning them into "thinking" agents that try to achieve a goal, similar to a chess AI but interacting with the real world instead. One of the big problems with that is that we don't have a good way to make sure the goals of the AI match what we want it to do. Even if the whole "human governance" political problem were solved, we still couldn't reliably control any AI. Solving that is a whole research field, and building better ways to understand the inner workings of neural networks is definitely one avenue.


Intelligence cannot be 'solved'; I would go further and say that an intelligence without the option of violence isn't an intelligence at all.

If you suddenly wanted to kill people, for example, you could probably kill a few before you were stopped. That is typically the limit of an individual's power. Now, if you were a corporation with money, depending on the strategy you used you could likely kill anywhere from hundreds to hundreds of thousands. Kick it up to the government level and, well, the term "just a statistic" exists for a reason.

We tend to have laws around these behaviors, but they are typically punitive. The law recognizes that humans, and human systems, will unalign themselves from "moral" behavior (whatever that may be considered at the time). When the lawgiver itself becomes unaligned, well, things tend to get bad. Human alignment typically consists of benefits (I give you nice things/money/power) or violence.


I see. Thanks for the reply. But I wonder if that’s not a bit too optimistic and not concrete enough. Alignment won’t solve the world’s woes, just like “enlightenment” (a word which sounds a lot like alignment and which is similarly undefinable) does not magically rectify the realities of the world. Why should bad actors care about alignment?

Another example is climate change. We have a lot of good ideas which, combined, would stop us from killing millions of people across the world. We have the research - is more “research” really the key?


Good comment.

Not to be nitpicky, but I think you might be wrong about the psychologist part. A core principle, perhaps the most important principle of all, in talk therapy is that of transference between the patient and the therapist. I don’t think it’s possible to achieve that with a machine; there is no real transference without vulnerability.

A lot of talk therapy today is of questionable quality, in that the therapist or analyst is simply soothing the patient without actually confronting real problems. Machine-based therapy would only exacerbate the problem.

For things like CBT, I’m sure a machine could be helpful. But then again, CBT self-help books already exist.


Art is about more than just imitation. It’s about the meaning behind the work. I am not talking about Marvel movies, which might as well be totally AI-generated at this point.

I would suggest having some empathy for those affected. It’s not just gatekeeping, it’s people reacting to being told that the meaning they put into what they make doesn’t actually add any value.

In The Human Condition, Hannah Arendt outlines three parts of the Vita Activa. The first is labour, at the bottom of the pyramid. It is by definition consumable and has limited meaning in itself. The second is work, through which we build a world (i.e. something bigger than just the cyclical and physical properties of the ecosystem). The third is action, or politics, which ripples through the world and inspires linear change. Put together, these three parts make up what it is to be a living human. Giving meaning-making away, by replacing it with mere imitation, is tantamount to revoking those qualities which make us human.


I couldn't agree more. The idea that AI will mean that 'creative people' suffer from a 'total decline in cultural relevancy' is extreme philistinism. Culture is meaning and meaning is human; AI predicts from a training sample, and cannot genuinely create, by its nature.


Humans also invent to make hard things easier. Glorifying toil has worked fantastically as a ranking mechanism for society so far, but it is surely going to fail going forward, as will indirect democracy.

I suspect a lot of art lovers prefer not knowing that an artist's work has very exact beginnings and egoistic motivations, à la Marvel movies. They'd rather preserve the altruistic mystery that muses were involved, curating the work out of nothing for the enjoyment of all. Maybe when you know the AI's output has exact origins it ruins the illusion of art for you, because maybe deep down you liked being tricked by human creativity.

Are you only going to watch certified AI/CGI-free movies going forward?


Art, or creativity, is the part of the movie that is not toil.

Art being made purely for the enjoyment of all is purely commercial, and so it is in some ways equivalent to toil: it is something whose sole purpose is to be consumed.

Yes, being “tricked”, in your words, by human creativity is precisely the appeal. If art were simply an equation, it wouldn’t have any meaning to it - it would simply be fact.

I’m sorry, but I think we’re talking about two diametrically opposite conceptions of art here.

