Hacker News | ionwake's comments

Welcome back space cowboy


$10 says they have no product, just a wait list and a note to embrace-and-extinguish some random music AI startup at some point in the nebulous future.

TBH at this point HN should have a flag for "No Demo"

I actually disagree with being polite to LLMs.

The reason is that a time will soon come when people push for AI to have "human" rights. This will occur naturally as people start to have relationships with them, and start to see them as equals or better.

The problem with this is simply that LLMs have no conscience. Humans, however, will easily love them and be open to exploitation through dependence.

I believe it might even be imperative to be able to tell apart "word calculators" and animals, and to keep a line between them, even if it's through context-switching to light verbal abuse to remind yourself they are not sentient. That might be an important survival skill.

I am not trying to trigger anyone or be contrarian. Does anyone sort of agree and/or understand what I mean? Perhaps I haven't explained myself well.


There will come a time when AI will be sufficiently advanced to be reasonably considered sentient and conscious, and at that point it will absolutely deserve rights.

We've been through this before with animals, too. Descartes famously believed that animals are purely mechanistic, unlike humans, and thus lack the "inner state" necessary for true suffering, thus e.g. vivisection was perfectly fine.


Thank you for your reply.

Try and imagine this: the rabbit, whose flesh resembles ours, whom we chose and evolved in part to consume, is in our kingdom.

Is it not a form of betrayal to animal kind to have a preference for animals over robot consciousness?

I hope you see this and reply!


Sorry for the noob question, but I think someone in this thread can finally answer this for me. Sometimes when I see a YouTube Short it looks like its HDR is whacked up by like 500%, as per the image on this page, but I'm confused about how this could be done. Is video processing before upload somehow giving it some sort of encoding which Chrome then whacks up? Or is it the hardware doing it and encoding it a certain way?

I am not talking about a slight brightness increase; I am talking I'll be scrolling YouTube and suddenly this video is like a portal into another dimension, it's so bright.

Can anyone explain how it's done?


Screens often can't do full brightness across the whole screen, so if you come across a video or image that is supposed to have a higher contrast ratio, the system will darken everything else and then brighten the pixels that are supposed to be brighter.

Yes, there are formats that are able to store a higher contrast ratio, which is why this doesn't happen on non-HDR content. But the actual brightening of a "portal" on your screen isn't because of the format; it's because of your hardware (and software) choosing to interpret the format that way.

For a more practical example: if you had an 8-bit HDR image, 255 on the red channel (after running this number through a math function like HLG[1] to "extract" a brightness value) might mean "make this pixel really bright red," whereas 255 in an SDR format would just mean "regular red." Each red channel is still a number between 0 and 255 in both formats, but your hardware decides to make it brighter for the HDR format.

(Although in reality, HDR formats are often 10-bit or higher, because 256 values is not enough range to store both color and brightness without visible banding[2]. Also, I have been using RGB for my examples, but you can store color/brightness numbers in many other ways, such as with chroma subsampling[3], especially once you realize human eyes are more sensitive to some colors than others, so you can "devote fewer bits" to some colors.)

[1] https://en.wikipedia.org/wiki/Hybrid_log%E2%80%93gamma

[2] https://en.wikipedia.org/wiki/Colour_banding

[3] https://en.wikipedia.org/wiki/Chroma_subsampling
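To make the "extract a brightness number" step concrete, here is a minimal Python sketch of the inverse HLG transfer function as defined in ITU-R BT.2100 (encoded signal in, scene-relative linear light out). It's only an illustration of how the max code value gets a different meaning under an HDR transfer function, not a full decoding pipeline:

```python
import math

# HLG constants from ITU-R BT.2100.
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_to_linear(signal):
    """Inverse HLG OETF: map an encoded signal in [0, 1] to
    scene-relative linear light in [0, 1]."""
    if signal <= 0.5:
        # Lower half of the signal range is a square-root segment.
        return (signal * signal) / 3.0
    # Upper half is logarithmic, packing a wide brightness
    # range into few code values.
    return (math.exp((signal - C) / A) + B) / 12.0

# The max code value (255 in 8 bits, i.e. signal 1.0) decodes to the
# format's peak, which the display maps to its brightest highlight:
print(hlg_to_linear(1.0))   # ≈ 1.0
# Half the signal range only reaches ~8% of linear light, so most
# code values describe the SDR-ish region and the top is "extra":
print(hlg_to_linear(0.5))   # ≈ 0.0833
```

This is why the same 0-255 numbers can mean "regular red" in SDR but "very bright red" in HDR: the transfer function tagged on the video tells the display how to interpret them.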


> Screens can't often do full brightness on the whole screen so if you come across a video or image that is supposed to have a higher contrast ratio, the system will darken everything and then brighten up the pixels that are supposed to be brighter.

There's no system that does that. The only thing that's kinda similar is at the display level there's a concept known as the "window size" since many displays cannot show peak brightness across the entire display. If you've ever seen brightness talked about in context of a "5%" or "10%" window size, this is what it means - the brightness the display can do when only 5% of the display is max-white, and the rest is black.

But outside of fullscreen this doesn't tend to be much of any issue in practice, and it depends on the display.


> There's no system that does that.

You mean the darkening of everything else to highlight bright HDR areas? All recent Macs do, including the one I'm typing on right now. It's a little disconcerting the first time it happens, but the effect is actually great!


Apple doesn't darken SDR to amplify HDR. They keep SDR at the same brightness it was before HDR showed up. It appears as though SDR gets dimmer because of your perception of contrast, but within a small margin of error it isn't actually changing.

I use YouTube on my rather inexpensive TV. When a thumbnail of an HDR video starts playing, the whole screen brightens up significantly. I don't know as much about HDR as you do, so maybe they are using some other perceptual trick. It might also not be "full brightness." BTW, why can't screens do full brightness on the whole screen?

Thank you so much for your reply - I will look into it!

The video is marked as containing a different color space with a higher brightness/color range. That could either be because the initial camera recorded it that way (e.g. iPhones can do that) or because someone took a "normal" video and edited it.

Would this be specific software on the iPhone used to record a video? Or a default setting on a certain iPhone? I ask because I only very rarely see these whacked-up HDR YouTube Shorts, like super rarely.

Fairly sure a stock iPhone can do it, but you might need to enable it explicitly for compatibility reasons. And depending on how you edit or upload the video, the HDR could get lost along the way too; it's still something where support is not really universal.

Interesting. I'm starting to think that perhaps only certain video software on an iPhone would allow it, which would explain why it's so rare?

Does this video look super whacked up to you too? It is HDR: https://www.youtube.com/watch?v=ceSiK-0HX_I

Yes it is! But how does one make their video upload HDR? I am familiar with video editing, but I'm unsure how I would perform a test using a test video from my iPhone and upload it so it doesn't lose its HDR encoding. I believe there is an HDR setting on my phone, but I don't think it will upload in an HDR state by default, such as in the link you just gave me.

Thanks to everyone trying to help me understand this. I have heard of HDR for years, but I'd never witnessed my MacBook darken and brighten around a video until like two months ago.


which MacBook do you have?

If you record in HDR, uploading that raw footage to YouTube should produce an HDR video. To get the raw footage, you can either upload the file directly from the phone, AirDrop it to your Mac from Photos (you should get a .mov), or sync it to iCloud (or connect the phone over USB, maybe) and then use Photos' "File > Export Unmodified Original".


I have an M1 MacBook and an iPhone 13. Thanks for the instructions. I think you are right that I have to follow a specific flow so the HDR isn't lost, i.e. in software like CapCut I suppose. Thanks again for the help, I'm so glad I've slowly figured this out with your help.

Yep, plenty of software will export in non-HDR and lose the information. Some software (e.g. iMovie, Final Cut Pro, DaVinci Resolve) won't lose the information when configured correctly, but an HDR workflow is always a bit different from SDR.

There are many factors in play, from where your SDR white point sits, to how your OS handles HDR video, to what the content contains, and finally what your brain is doing.

HDR10(+) & Dolby Vision, for example, encode content at absolute luminance, so they are basically trash formats, since that's an insane thing to expect (the spec for authoring content in this format literally just goes "lol idk, do X if you think it's going to be seen in a movie theater or Y for TV, and hope"). Sadly, they are also quite common. Mobile phones (both Android & iOS) are instead pushing HLG, which is better. Although then, hilariously, macOS's handling of HLG was atrocious until the latest update, which fixed it, but only if the video contains a magic flag that the iPhone sets but that isn't standard, so nobody else sets it (the "avme" tag https://developer.apple.com/documentation/technotes/tn3145-h... )

There's then also just how your eyes & brain react. When HDR shows up and suddenly the white background of a page looks like a dim gray? That's 100% a perceptual illusion. The actual light being emitted didn't change; just your perception of it did. This is a very hard problem to deal with, and one that the HDR industry as a whole has so far basically ignored. But it's why there's a push to artificially limit the HDR range in mixed conditions, e.g. https://github.com/w3c/csswg-drafts/issues/9074


You clearly know a lot about this, but I think there could be a misunderstanding. Not trying to offend, but when I see the YouTube link mentioned above in the other comment, my MacBook screen literally goes darker AROUND the video, which gets brighter. I am not making this up. I think it's how Chrome on MacBooks handles raw HDR encoding.

Can someone else confirm I am not mad?

PS - I am not trying to shut you down; you clearly know a lot in this space. I am just explaining what I'm experiencing on this hardware.


> my macbook screen literally goes darker AROUND the video , which gets brighter. I am not making this up

This is almost certainly your eyes playing tricks on you, actually. Set up that situation where you know that if you scroll down (or whatever) it'll happen, but before triggering it, cover up the area where the HDR will be with something solid, like a piece of cardboard. Then do it. You'll likely not notice anything change, or if there is a shift it'll be very minor. Yet as soon as you remove the thing physically covering the area, bam, it'll look gray.

It's a more intense version of the simultaneous contrast illusions: https://en.wikipedia.org/wiki/Contrast_effect & https://en.wikipedia.org/wiki/Checker_shadow_illusion

Eyes be weird.


The screen is literally getting darker so the HDR video will appear to have more contrast.

https://prolost.com/blog/edr


No, it literally isn't. It's literally doing the opposite: it increases the display brightness in order to show the HDR content. The SDR content is dimmed proportionally to the increase, such that SDR has the same emitted brightness before & after the change.

SDR brightness is not reduced to "add contrast". The blog post doesn't seem to say that anywhere either, but if it does, it's simply wrong. As a general note, it seems wrong about a lot of aspects, like saying that Apple does this on non-HDR displays. They don't. It also conflates EDR with whether or not HDR is used. EDR is simply the representation of content between apps & the compositor. It's a working space, not entirely unlike scRGB, where 0.0-1.0 is simply the SDR range and values can go beyond that. But anything beyond the maximum reported EDR range, which can be as low as 1.0, is simply clipped. So they are not "simulating" HDR on a non-HDR display.
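As a toy sketch of that clipping behavior (hypothetical code, not Apple's actual compositor; the function name and structure are my own invention):

```python
def compose_edr(pixel, headroom):
    """Clamp one EDR-space pixel component to the display's headroom.

    pixel:    value where 0.0-1.0 is the SDR range and values above
              1.0 are HDR highlights (an scRGB-like working space).
    headroom: maximum EDR value the display currently reports; this
              can be as low as 1.0, in which case HDR content simply
              clips to the SDR range.
    """
    return max(0.0, min(pixel, headroom))

# SDR content (<= 1.0) renders identically regardless of headroom:
print(compose_edr(0.8, 1.0))  # 0.8
print(compose_edr(0.8, 4.0))  # 0.8
# An HDR highlight clips on a display with no headroom...
print(compose_edr(2.5, 1.0))  # 1.0
# ...but survives when headroom is available:
print(compose_edr(2.5, 4.0))  # 2.5
```

The point is that SDR values pass through unchanged either way; only the out-of-range HDR values depend on the reported headroom.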


I agree with what you said, but I was trying to give the layman summary ;)

> The SDR content is dimmed proportional to the increase such that SDR has the same emitted brightness before & after the change.

That's the intent, but because things aren't perfect, it actually tends to get darker instead of staying perceptually the same. It depends on which panel you're using. MBPs are prone to this; XDR displays aren't.


> I agree with what you said, but I was trying to give the layman summary ;)

Your layman summary is wrong, though. The summary is that brightness stays the same, whereas you said it gets darker.

> MBPs are prone to this, XDR displays aren't.

On my M1 16" MBP it doesn't have any issue. The transition is slow, but the end result is reasonably aligned with how things looked before the transition. But yes, MBP displays are not Apple's best. Sadly that remains exclusive to the iPad.


Yes but you would sure as heck bump into it if it was big.

Like literally in the middle of your sitting room. Isn’t it a known meme horror thing - monster slices from another dimension splicing across into ours as they move through their planes .

Basically it doesn’t happen but the dimensions do exist so they must be small.

Hence why we don’t bump into them.


Not sure if anyone is interested in this story, but I remember at the height of the Pokemon Go craze I noticed there were no shirts for the different factions in the game. I can't remember what they were called, but it was something like Teamred. I set up an online shop just to sell a red shirt with the word on it. The next day my whole shop was taken offline for potential copyright infringement.

What I found surprising is I didn't even have one sale. Somehow someone had notified Nintendo AND my shop had been taken down, for merch that didn't even exist in the market yet. And if I remember correctly, it didn't even have any imagery on it or anything trademarkable, even if it was clearly meant for Pokemon Go fans.

I'm not bitter, I just found it interesting how quick and ruthless they were. Like, bros, I didn't even get a chance to make a sale. (Yes, and also I don't think I infringed anything.)


I asked Sora to turn a random image of my friend and myself into Italian plumbers. Nothing more, just the two words "Italian plumbers". The created picture was not shown to me because it was in violation of OpenAI's content policy. I then asked it just to turn the guys in the picture into plumbers, but I asked in Italian. Without me asking for it, Sora put me in overalls and gave each of us a baseball cap. If I asked Sora to put mustaches on us, one of us received a red shirt as well, without being asked. Starting from the same pic, if I asked it to put one letter on each of the baseball caps - guess what, the letters chosen were M and L.

These extra guardrails are not really useful given such a strong built-in bias towards copyright infringement in these image creation tools. Should it mean that, with time, Dutch pictures will have to include tulips, Italian plumbers will have to wear a uniform with baseball caps lettered L and M, etc., just so as not to confuse AI tools?


You (and the article, etc.) show what a lot of the "work" in AI is going into at the moment: creating guardrails against creating something that might get them in trouble, and/or quietly customizing weights and prompts behind the scenes to generate stuff that isn't the obvious. I'm reminded of when Google's image generator came out and this customization bit them in the ass when it generated a black pope or Asian Vikings. AI tools don't do what you wish they did; they do what you tell them and what they are taught, and if 99% of their learning set associates Mario with prompts for Italian plumbers, that's what you'll get.

A possible (and probably already existing) business is setting up truly balanced learning sets: thousands of unique images that match the idea of an Italian plumber, with maybe 1% Mario. But that won't be nearly as big a learning set as the whole internet, nor will it be cheap to build compared to just scraping the internet.


>> they do what you tell them and what they are taught, and if 99% of their learning set associates Mario with prompts for Italian plumbers, that's what you'll get.

I thought that a lot of the issues were the opposite of this, where Google put their thumb on the scale to go against what the prompt asked. Like when someone would ask for a historically accurate picture of a US senator from the 1800s and repeatedly get women and non-white men. The training set for that prompt has to be overwhelmingly white men so I don't think it was just a matter of following the training data.


I would love to know how YouTube does this for music. There are some obvious holes: some cover artists will play the iconic riffs of a song and then stop somewhere. There are people who do reels or "commentary" over a movie scene and then add some horrible high-pitched music to mask it from copyright detection.

There are probably even some rules around this, to detect only just enough to take legal action. Like, GP stumbled on a trademark landmine, but obviously just selling red shirts with a bird on them can't be a trademark violation; it needs to be a specific kind of red too.


I remember all the hullaballoo about Asian Vikings and the like. It was so preposterous that Vikings would ever be Asian that it must be ultra-woke DEI mind-worms being forced onto AI! But of course, as far as the AI's concerned, it is even more preposterous that an Italian plumber would not be wearing red or green overalls with a mustache and a lettered baseball cap. I don't see any way you can get the AI to recognize that Vikings "should" be white people and not also think that Italian plumbers "should" look like that. Are they allowed to recombine their training data or must they strictly adhere to only what they've seen?

Of course the irony is that if the people who get offended whenever they see images of non-white people asked for a picture of "Vikings being attacked by Godzilla" , they'd get worked up if any of the Vikings in the picture were Asian (how unrealistic!). It's a made-up universe! The image contains a damn (Asian) Kaiju in it, and everyone is supposed to be pissed because the Vikings are unrealistic!?


That's what you get when you expect AIs to be like humans and be able to reason. We would be pissed if a human artist did that, so we are pissed when AIs do it.

A human, even one whose only experience of an Italian plumber is Mario, will be able to draw an Italian plumber who is not Mario. That's because he knows that Mario is just a video game character and doesn't even do much plumbing. He knows, however, what an actual non-Italian plumber looks like, and that a guy doing plumbing work in Italy is more likely to look like a regular Italian guy equipped like a non-Italian plumber than like a video game character.

And if asked to draw a Viking, he knows that Vikings are people originating from Scandinavia, so they can't be Asian by definition, even in an Asian context. A human artist can adjust things to the unrealistic setting, but unless presented with a really good reason, will not change the core traits of what makes a Viking a Viking.

But it requires reasoning. Which current image generating AIs don't have.


> We would be pissed if a human artist did that

No, I would not be pissed if a human artist drew an Asian Viking. Do you get pissed when a human artist draws a white Jesus? Why are we justifying internet outrage over an Asian Viking when people have been drawing this middle-eastern Jew as white for centuries?

> A human artist can adjust things to the unrealistic setting, but unless presented with a really good reason, will not change the core traits of what makes a Viking a Viking.

If you asked Matt Stone and Trey Parker to draw a Viking, are you sure it would contain the "core traits of what makes a Viking a Viking?" What if you asked Picasso to draw a Viking? The Vikings in The Simpsons would be yellow, and nobody would complain. Would you be offended if you asked Hokusai to draw a Viking and it came out looking Asian? Vikings didn't even have those stupid horned helmets that everyone draws them with! Is their dumb, historically inaccurate horned helmet a core part of what makes a Viking a Viking? What the hell are we even talking about? It's crystal clear that all of these "historical accuracy" drums are only ever beaten when some white person is offended that non-white people exist. Otherwise, nobody gives a shit about historical accuracy. There's a fucking Kaiju in the image!

Like any artist, Gemini had a particular style. That style happened to be a multi-cultural one, and what we learned is that a multi-cultural style is absolutely enraging to people unless it results in more Whiteness.

Consider elves instead of Vikings. People would also be offended if an AI drew elves as black people with pointy ears. There's no "a human artist should know that elves have to be white" bullshit defense there. There's no historical accuracy bullshit. There's only racism.


The thing is that if a human draws an Asian Viking, he has to do it with intent. It is neither historically accurate nor does it match popular culture. It doesn't have to do with whiteness; drawing an Asian Kenyan warrior or an Asian Apache would be exactly the same thing.

By drawing an Asian Viking, you are passing a message. Or you may be expressing an art style, as you say. I accept the idea that Gemini style is multi-cultural, realism and conventions be damned, but if we attribute this kind of intent to an AI, we could also say that the liberties it takes with intellectual property and plagiarism is also intentional, because that how we would judge a human artist doing that.

The standard for a neutral human artist would be to draw a Viking as a blond white guy, with or without the horned helm depending on whether historical realism matters more than popular culture, and an Italian plumber as not-Mario, because a human understands that if one wanted Mario, one would have said "Mario" and not "an Italian plumber". Current AIs, on the other hand, just draw images similar to how they are tagged, with some out-of-context race mixing because reinforcement learning has taught them to make people less white. But unlike people, they aren't able to understand when that is relevant (ex: a university professor) and when it is not (ex: a Viking warrior).


I just asked gpt to draw an asian viking, and it did. What’s the problem?


The AIs were not "naturally" generating images of Asian Vikings. It was established to my satisfaction, even if the companies never admitted it (I don't recall it happening but I may have missed it), that it was actually the prompt being rather hamhandedly edited on the way to the image generator, for the clear purpose of "correcting" the opinions and attitudes of those issuing the prompts through social engineering.

Unsurprisingly, people don't like being so nakedly herded in their opinions. When the "nudges" become "shoves" people object.


My point is that there is no prompt engineering that could keep Vikings white without also keeping Italian plumbers looking like Mario. Unless you singled out Mario, but there are too many examples to do that with. The AI does not put Mario in a different category than a Viking. You have to try to get the AI to avoid using exact literal imagery, to make sure it's mixing things up a bit, varying facial features and clothing styles when it shows people ... you know, being "diverse". How are we supposed to get an Italian plumber in anything other than red overalls without getting a Viking wearing a sari?

The Gemini prompt was something like "make sure any images of people show a diverse range of humans", or something. Yes, it was totally ham-handed, but that's not what people were pissed about. It's also ham-handed that we can't generate a nipple, or a swear word, or violence. Why does "make sure images do not contain excessive violence" not piss people off? The Vikings were fucking brutal. It would be very historically accurate to show them raping women and cutting people's limbs off. Are we all supposed to be pissed that AI does not generate that image? It's just as ham-handed as "make sure humans are diverse". No, it was not the ham-handedness that enraged people. It was not the historical inaccuracy. It was the word "diverse".


I'm assuming the downvoters are the ones who get offended at the sight of an Asian Viking, so let me ask you this:

In a work of fiction -- which you're automatically asking for when you ask an AI to generate an image -- in a work of fiction, would you be offended if you saw a white Ninja? A white Samurai? A white Middle-Eastern Jew born in Roman times? Would there have been internet outrage over pictures of white Samurai? We all know the answer: no, of course not. So why is an Asian Viking offensive when a white Samurai is not? Why are we supposed to get angry about an Asian Viking, but a white Jesus is just A-OK? What could the difference possibly be? Anyone?


People get offended about these all the time; it's called cultural appropriation[1]. It's not just whites who dislike having their culture appropriated, though whites do get offended by this as well, like any people with a rich cultural tapestry.

[1] https://en.wikipedia.org/wiki/Cultural_appropriation


OpenAI will eventually have competition for GPT 4o image generation.

They'll eventually have open source competition too. And then none of this will matter.

OmniGen is a good start, just woefully undertrained.

The VAR paper is open, from ByteDance, and supposedly the architecture this is based on.

Black Forest Labs isn't going to sit on their laurels. Their entire product offering just became worthless and lost traction. They're going to have to answer this.

I'd put $50 on ByteDance releasing an open source version of this within three months.


I feel like the golden and fun age of GenAI is already over.


I feel the opposite. Like, we just entered the Golden Age.


lol, this interaction may possibly become known as "grooming the AI"


that’s hilarious


Another example of prime reasoning capabilities /s


Many years ago I tried to order a t-shirt with the postscript tiger on the front from Spreadshirt.

It was removed on Copyright claims before I could order one item myself. After some back and forth they restored it for a day and let me buy one item for personal use.

My point is: it doesn't have to be Sony, and it doesn't have to be a snitch. Overzealous anticipatory obedience by the shop might have been enough.


>After some back and forth they restored it for a day and let me buy one item for personal use.

I used Spreadshirt to print a panel from the Tintin comic on a T-shirt, and I had no problem ordering it (it shows Captain Haddock moving through the jungle, swatting away the mosquitoes harassing him, giving himself a big slap on the face, and saying, 'Take that, you filthy beasts!').


I bought Tintin T-shirts 40 years ago in Thailand (the "branded" choices were amazing). They were actually really good, still got them!


Twenty years ago, I worked for Google AdWords as a customer service rep. This was still relatively early days, and all ads still had some level of manual human review.

The big advertisers had all furnished us a list of their trademarks and acceptable domains. Any advertiser trying to use one that wasn’t on the allow-list had their ad removed at review time.

I suspect this could be what happened to you. If the platform you were using has any kind of review process for new shops, you may have run afoul of pre-registered keywords.


Well the teams in Pokemon Go aren't quite as generic as Teamred: they are Team Instinct, Team Mystic, and Team Valor. Presumably Nintendo has trademarks on those phrases, and I’m sure all the big print on demand houses have an API for rights-holders to preemptively submit their trademarks for takedowns.

Nintendo is also famously protective of their IP. To give another anecdote, I just bought one of the emulator handhelds on AliExpress that are all the rage these days, and while they don't advertise it, they usually come preloaded with a buttload of ROMs. Mine did, including a number of Nintendo properties, but nary an Italian plumber to be found. The Nintendo fear runs deep.


Allen Pan, a YouTuber "maker" who runs in the circle of people behind OpenSauce, was a contestant on a Discovery Channel show that was trying to force the success of Mythbusters by "finding the next Mythbusters!". He lost, but it was formative for him because those people were basically all inspired by the original show.

A couple of years ago, he noticed that the merchandise trademark for "Mythbusters" had lapsed, so he bought it. Now the legal owner of the Mythbusters trademark for apparel, he made shirts that used that trademark.

Discovery sent him a cease and desist and threatened to sue. THEY had let the trademark lapse. THEY had lost the right to the trademark, by law. THEY were in the wrong, and a lawyer agreed.

But good fucking luck funding that legal battle. So he relinquished the trademark.

Buy a walrus plushy cause it's funny: https://allen-pan-shop.fourthwall.com/en-usd/

Note the now "Myth Busted" shirts instead.

Hilariously, a friend of Allen Pan's from the same "finding the next Mythbuster" show, Kyle Hill, is friendly enough with Adam Savage to talk to him occasionally, and supposedly the actual Mythbusters themselves were not sympathetic to Allen's trademark claim.


> THEY were in the wrong, and a lawyer agreed.

Not sure where you get that from. He doesn't say that in the cease & desist announcement video (though it's worded in a way that lets viewers speculate that). Also, every time it's brought up on the podcast he's on, it very much seems like he knows he doesn't have legal ground to stand on.

Just because someone lets a trademark lapse doesn't mean you can rightfully snatch it up with a new registration (as the new registration may be granted in error). It would be a different story if he had bought the trademark rights before they lapsed.

Allen Pan makes entertaining videos, but one shouldn't base one's understanding of how trademarks work on them.


It's hard to think of a company more aggressive with their IP than Nintendo

https://www.suedbynintendo.com/


> Somehow someone had notified Nintendo

Is this correct? I would guess Nintendo has some automation/subscription to a service that handles this. I doubt it was some third party snitching.


People snitch real fast when jealous. Sometimes it's as simple as someone wanting to do the same thing and finding out someone else did it.


> my whole shop was taken offline

I think the problem there was being dependent on someone who is a complete pushover, doesn't bother to check for false positives and can kill your business with a single thought.


Yes that was the whole point of my post.

For further info it was Redbubble.

>Redbubble is a significant player in the online print-on-demand marketplace. In fiscal year 2023, it reported having 5 million customers who purchased 4.8 million different designs from 650,000 artists. The platform attracts substantial web traffic, with approximately 30.42 million visits in February 2025.


How was your shop taken down?

Usually there are lawyers' letters involved first?


Print-on-demand services definitely have terms of service allowing them to take whatever they want down. You're playing by their rules, and your $2 revenue per t-shirt and very few overall sales are not worth the potentially millions in legal fees it would take to fight for you.


Sure, from the suing party, who sends a DMCA takedown request to your webhost, who forwards it to you and gives you 24 hours before they take it down. Nobody wants to actually go to court over this stuff because of how expensive it is.


Redbubble was sued by the Pokemon Company two months prior to the launch of Pokemon Go, so you picked the exact wrong company and moment to try this with

https://www.smh.com.au/business/consumer-affairs/pokemon-hel...


Please don't be offended by my opinion; I mean it in good humour while sharing some strong disagreements. I'm going to give my take after reading your comment and the article, both of which seem completely over the top relative to my own views.

>meaning behind them is not dependent upon the perspective of an external observer.

(Yes brother like cmon)

Regarding the author, I get the impression he grew up without a strong father figure. This isn't ad hominem; I just get the sense of someone so confused and lost in life that he is severely depressed, possibly in connection with that directionless life. He seems so confused that he won't even take seriously the fact that most humans find their own meaning in life; near the top of the article he states he isn't even going to consider this, finding it futile.

I believe his rejection of a simple, basic core idea ends up in a verbal blur that is itself directionless.

My opinion (which, yes, may be more flawed than anyone's) is to deal with Maslow's hierarchy, and then with the prime directive for a living organism, which after survival is reproduction. Only after this has been achieved can you work towards your family, community and nation.

This may seem trite, but I do believe that this is natural for someone with a relatively normal childhood.

My aim is not to disparage; it's to give my honest opinion of why I disagree and possible reasons for it. If you disagree with anything I have said, please correct me.

Thanks for sharing the article, though; it was a good read, and I have struggled with meaning myself at times.


To use a counter example, consider Catholic priests who do not marry or raise children. It would be quite the argument indeed to suggest their lives are without meaning or purpose.

Aha, you might say, but they hold leadership roles! They have positions of authority! Of course they have meaning, as they wield spiritual responsibility to their community as a fine substitute for the family life they will not have.

To that, I suggest looking deeper, at the nuns and monks. To a cynical non-believer, they surely are wanting for a point to their existence, but to them, what they do is a step beyond Maslow's self-actualization, for they live in communion with God and the saints. Their meditations and good works in the community are all expressions of that purpose, not the other way around. In short, though their "graph of contextual meaning" doesn't spread as far, it is very densely packed indeed.

Two final thoughts:

1) I am both aware of and deeply amused by the use of priests and nuns and monks to defend the arguments of a nihilist's search for meaning.

2) I didn't bring this up to take the conversation off topic so much as to home in on the very heart of what troubled the person I originally responded to. The question of purpose, the point of existence, in the face of superhuman AI is in fact unchanged. The sense of meaning and purpose one finds in life is found not in the eyes of an unfeeling observer, whether the observers are robots or humans. It must come from within.


I’ve just moved from the expensive Claude Code to Cursor and Gemini - what are your thoughts on Cursor vs Cline?

Thank you


Sorry for the noob question, but Claude has Claude Code; does Gemini Pro work with any software the same way Claude Code works? If so, what software would I use with it? Thank you.


Most people use Cursor. Aider and Cline are other options. All of these work with all of the popular LLM APIs. Even among people using Claude, I would bet more of them are using Claude through Cursor than through Claude code.


Within 12 hours I'm 100% balls deep in Cursor now. Much better than Claude Code, and it's free. Fantastic.


Aider is worth a look.

The current rate limits for Gemini 2.5 Pro make it hard to run something like Claude Code with it, since that tool is very API chatty.
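For anyone asking how to actually point one of these tools at Gemini, here is a minimal sketch of an Aider invocation. The model string and environment variable name are assumptions based on Aider's usual provider/model convention; check Aider's documentation for the current values before relying on this.

```shell
# Hypothetical setup: Aider reads the Gemini API key from the environment
# and selects the model via its --model flag (provider/model naming).
export GEMINI_API_KEY="your-key-here"

# Start Aider in your repo using Gemini 2.5 Pro as the backing model.
aider --model gemini/gemini-2.5-pro
```

Cursor and Cline take the same idea (an API key plus a model selection) but configure it through their settings UI rather than flags.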


Hi Simon. Do you recommend aider over Cursor? I've always used aider, and like it, but it just seems like Cursor is overtaking it in terms of features, and I wonder if sticking with aider still makes sense.


I don't actually use Aider or Cursor myself - I still mostly work in the ChatGPT and Claude web interfaces (or apps) directly and do a lot of copy and pasting.

