Hacker News | zvitiate's comments

There's a huge assumption in your comment -- that you know how insurance works. "Most" probably aren't working in sales and marketing; I'd heavily dispute anything above 50% and I feel like 33% might be pushing it? I don't want to get overconfident here, but this claim feels off-base.

Insurance isn't like a widget. People have actual legal rights that insurers must service. That involves processing clerks, adjusters, examiners, underwriters, etc., which in turn requires actual humans, because AI with the pinpoint accuracy needed for these legally binding, high-stakes decisions isn't here yet.

E.g., issuing and continuing disability policies: Sifting through medical records, calling and emailing claimants and external doctors, constant follow-ups about their life and status. Sure, automate parts of it, but what happens when your AI:

a. incorrectly approves someone, then you need to kick them off the policy later?

b. incorrectly denies someone initial or continuing coverage?

Both scenarios almost guarantee legal action—multiple appeals, attorneys getting involved—especially when it's a denial of ongoing benefits.

And that's just scratching the surface. I get that many companies are bloated, and nobody loves insurance companies. No doubt, smarter regulations could probably trim headcount. But the idea that you could insure a billion people with just 100, or even 1000 (10x!), employees is just silly.


Yup. My favorite genre by FAR is baroque. High-quality recordings aren't as widely available as you'd expect, and no one's really pumping out new baroque. V4.5 is noticeably better, even if the model shows the real "plagiaristic" aspect.

Still, I’m excited about the product. The composer could probably use some chain of thought if it doesn’t already, planning larger sequences and how they relate to each other. Suno is also probably the most ripe for a functional neurosymbolic model. C.P.E. Bach wrote an algorithm for counterpoint hundreds of years ago!

https://www.reddit.com/r/classicalmusic/comments/4qul1b/crea... (Note: the original site has been taken over, but you can access it via the Wayback Machine. Unfortunately I couldn’t find a save where the generation demo works…but I swear it did! I used it at the time!)


I've mentioned it before on HN, but Sid Meier worked on an application called (appropriately enough) CPU Bach for the 3DO that would algorithmically generate endless contrapuntal music all the way back in 1994.

https://en.wikipedia.org/wiki/C.P.U._Bach


In a similar vein, there is https://aminet.net/package/mus/misc/AlgoMusic2_4u.lha, from ~96 or so.


Ohhh this looks very cool. Thank you for sharing! Will dive into this over the weekend.


It's almost the weekend! But I'm pretty sure those were shared merely as historical anecdotes, and that Suno 4.5 is the bleeding edge here...

That’s what GPT-5 was supposed to be (instead of a new base or reasoning model), the last time Sam updated his plans, I thought. Did those change again?


What if you were in an environment where you had to play Minecraft for, say, an hour? Do you think your child brain would've eventually tried enough things (or had your finger slip and stay on the mouse a little longer), noticed that hitting a block caused an animation (maybe even connected it with the fact that your cursor highlights individual blocks with a black box), decided to explore that further, and eventually mined a block? Your example doesn't speak to this situation at all.


I think learning to hold a button down in itself isn't too hard for a human or robot that's been interacting with the physical world for a while and has learned all kinds of skills in that environment.

But for an algorithm learning from scratch in Minecraft, it's more like having to guess the cheat code for a helicopter in GTA: it's not something you'd stumble upon unless you have prior knowledge or experience.

Obviously, pretraining world models for common-sense knowledge is another important research frontier, but that's for another paper.
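To make the "guess the cheat code" point concrete, here's a toy back-of-the-envelope sketch (my own illustration, not from the paper; the action count and sequence length are made-up numbers): under uniform random exploration, the chance of emitting one specific multi-step action sequence, like holding attack on the same block for enough consecutive ticks, decays exponentially with the sequence length.

```python
def hit_probability(num_actions: int, seq_len: int, num_trials: int) -> float:
    """P(at least one of num_trials random rollouts emits the exact
    target sequence of seq_len actions, choosing uniformly at random)."""
    p_single = (1 / num_actions) ** seq_len
    return 1 - (1 - p_single) ** num_trials

# Hypothetical numbers: 10 primitive actions, and breaking a block
# takes 20 consecutive "attack" ticks. Even a million random rollouts
# are effectively guaranteed never to stumble onto it.
p = hit_probability(num_actions=10, seq_len=20, num_trials=1_000_000)
print(p)  # vanishingly small

# Whereas a one-step reward (seq_len=1) is found almost immediately:
print(hit_probability(num_actions=10, seq_len=1, num_trials=100))
```

That exponential gap is exactly why prior knowledge (from pretraining or embodied experience) matters so much more than raw exploration here.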


No, sooner lol. We'll have aging cures and brain uploading by late 2028. Dyson Swarms will be "emerging tech".


There's a lot to potentially unpack here, but idk, the idea that whether humanity enters hell (extermination) or heaven (brain uploading; an aging cure) comes down to whether or not we listen to AI safety researchers for a few months makes me question whether it's really worth unpacking.


Maybe people should just not listen to AI safety researchers for a few months? Maybe they’re qualified to talk about inference and model weights and natural language processing, but not particularly knowledgeable about economics, biology, psychology, or… pretty much every other field of study?

The hubris is strong with some people, and a certain oligarch with a god complex is acting out where that can lead right now.


It's charitable of you to think that they might be qualified to talk about inference and model weights and such. They are AI safety researchers, not AI researchers. Basically, a bunch of doom bloggers, jerking each other in a circle, a few of whom were tolerated at one of the major labs for a few years, to do their jerking on company time.


If we don't do it, someone else will.


That's obviously not true. Before OpenAI blew the field open, multiple labs -- e.g. Google -- were intentionally holding back their research from the public eye because they thought the world was not ready. Investors were not pouring billions into capabilities. China did not particularly care to focus on this one research area, among many, that the US is still solidly ahead in.

The only reason timelines are as short as they are is because of people at OpenAI and thereafter Anthropic deciding that "they had no choice". They had a choice, and they took the one which has chopped at the very least years off of the time we would otherwise have had to handle all of this. I can barely begin to describe the magnitude of the crime that they have committed -- and so I suggest that you consider that before propagating the same destructive lies that led us here in the first place.


The simplicity of the statement "If we don't do it, someone else will," and the thinking behind it, means that eventually someone will do just that unless prevented by some regulatory function.

Simply put, with the ever-increasing hardware speeds we were already producing for other purposes, this day would have come sooner or later. We're talking about a difference of only a year or two, really.


But every time, it doesn't have to happen yet. And when you're talking about the potential deaths of millions, or billions, why be the one who spawns the seed of destruction in their own home country? Why not give human brotherhood a chance? People have, and do, hold back. You notice the times they don't, and the few who don't -- you forget the many, many more who do refrain from doing what's wrong.

"We have to nuke the Russians, if we don't do it first, they will"

"We have to clone humans, if we don't do it, someone else will"

"We have to annex Antarctica, if we don't do it, someone else will"


Cloning? Bioweapons? Ever larger nuclear stockpiles? The world has collectively agreed not to do something more than once. AI would be easier to control than any of the above. GPUs can't be dug out of the ground.


Which? Exterminate humanity or cure aging?


Yes


The thing whose outcome can go either way.


I honestly can't tell what you're trying to say here. I'd argue there's some pretty significant barriers to each.


I’m okay if someone else unpacks it.


I’m one of them. I love Kagi, although in the first week or two I used a ton of !g. Now I only really bang for local areas, conversions, or shopping (maybe).

If they can sustain themselves, maybe they can take off. Search in a GenAI world is hard, and Google has other focuses competing for its talent and inference chips too.

I hope their days aren’t numbered!

I think it needs some UI improvements. It’s ugly, and I find it can hinder actual use.

More usability improvements on features, too. There’s a lot I’m still not leveraging because I haven’t bothered learning it. Maybe they could build an LLM tool to help with this?

And I don’t care how, but make it easy to set as the default on all browsers, mobile mainly. Maybe they fixed this recently, but when I was swapping browsers a while back, it was annoying. If they can’t fix this, they probably won’t make it.

Just some top-of-mind thoughts as a high-usage, 95%+ non-coding user.


If we could motivate the monkeys sufficiently with bananas, we'd probably improve those odds substantially.


[Final Answer]

Language models like Claude are programmed directly by humans.

