Hacker News | datahack's comments

I’ve been listening to the audiobooks of the expanse via our local library. Absolutely fantastic series — you are spot on!

Also, remember to support your local library.


I watched and read the entire series. Much of it is boring and poorly written, both stylistically and in terms of character development. It's famous but doesn't live up to many sci-fi masterpieces, IMO.


The fascinating thing to me about _The Expanse_ is the disparity between the novels and the TV show. It's the same content, but in a different medium and environment.

I would call the novels well-executed, enjoyable and very readable action adventures, using well-worn tropes. There is nothing ground-breaking in them. It's not what's currently at the edges of the genre in written form: it's not Greg Egan, it ain't Ted Chiang or Adrian Tchaikovsky. M. John Harrison does not make an appearance. It's not even Stephen Baxter, Alastair Reynolds or Iain M. Banks.

The TV show, however, is quite something; it is one of the flagship sci-fi TV series. And it does indicate that written and visual sci-fi might be at different stages of development, with the TV version lagging by decades.


Strong disagree. There are very few space operas that get both world building and character building so right. They are usually either great epic stories or amazing character studies, but rarely both.


What would you recommend?


Alastair Reynolds' "Revelation Space" series was really good: https://en.wikipedia.org/wiki/Revelation_Space_series

Also, both the "Children of Time" and "Shadows of the Apt" series by Adrian Tchaikovsky are excellent: https://en.wikipedia.org/wiki/Adrian_Tchaikovsky#Bibliograph...


Scifi series:

Dune (the first 3 books)

Vinge's Deepness series

Dark Forest (3 Body Problem…)


It's great!

Have you read / listened to The Sun Eater Series?


That’s the kind of unkind and rather toxic comment I would expect on Reddit, not HN.

Unimpressed.


Do you think I wanted to impress you or anyone really? GP was beating a dead horse so I said something. Somehow you ended up caring. I’m almost certain there are better uses of your time. Certainly has no influence on my opinion. Honestly people who hold Hacker News like it’s sacred are exhausting. Your opinions and mine need not mix, just use your thumb to move on?


This website is worse than Reddit. Lower your expectations.


That’s not the standard I’ve always had, and I’ve been here from the early days.

You get the community we settle for.


> Lower your expectations.

No. And you'll find people more open to your wishes if you'd avoid the bossy imperative.


I have a startup idea for you my friend…


Just want to point out that this was all possible because of the hard work of people at the ICIJ. They do amazing work (same group that did the Panama Papers, and one of the last real independent investigative news organizations in the US) and deserve your support!

More info here: https://www.icij.org/

As an aside, I would also ask this question: why not democratize this and make billions using the same loopholes so that everyone gains access or they are forced to fix it? Surely it’s a good startup opportunity.


There’s a fun documentary that explores this concept:

The Town That Took On The Tax Man https://youtu.be/ipV_GU7YaQg

It’s about a Welsh town that set out to do just this. Recommended watch.


The way I understand these schemes, they require a minimum balance way above what an average person has to be effective, but I'm not sure. Would love a professional tax engineer's/CPA's opinion.


There was a documentary on Dutch TV a couple of years ago about 'DIY tax haven management' for common people. Can't remember which public channel it was on, or what the name was. Might have been part of the "Tegenlicht" series.

Update: Maybe it was the program Rambam where the makers set up their own tax haven based on the methods of the big corporations. Information in Dutch:

https://pers.bnnvara.nl/rambam-ontduikt-belasting-met-medewe...


Meanwhile the Netherlands is taxing the common people like crazy, and politicians are talking about raising taxes even more.


The Netherlands also has the number two quality of life in the world so maybe don’t worry about it so much?

https://www.numbeo.com/quality-of-life/rankings_by_country.j...


It's weird when Americans daydream of giving tax breaks to billionaires because they might be rich one day; it's another thing for people who ostensibly have the best social system in the world to look at America and say "we should do that".


There are plans for making mortgages more expensive by removing tax deductions and for taxing inheritance a whopping 75%. That is hardly giving tax breaks to billionaires; it's more like screwing over anyone who isn't dirt poor. Oh, and I'm already paying pretty much half my income in various taxes.

Anyone who has a significant amount of money can avoid these taxes, as always.


Americans also pay half or more of their income in various taxes/premiums. Again, I'm not sure what greener grass you're looking for.


[flagged]


The problem with inheritance tax is that the person pays taxes all through their lifetime and then when their significant others inherit that wealth (which already has been taxed once at least) it gets taxed again. The issue isn't the amount, the issue is the principle of it.

I suspect, BTW, that the very rich won't ever pay these taxes, as there are always ways to restructure the wealth or simply move it elsewhere. I know this is done in the UK. So what it does is punish the middle classes the most.


> The problem with inheritance tax is that the person pays taxes all through their lifetime and then when their significant others inherit that wealth (which already has been taxed once at least) it gets taxed again.

Yes, because that's a transfer to different people. That's not a problem.

The problem is that it's not just treated as income to the recipients—which it manifestly is—with the income tax then being modified to include both advance recognition and windfall spreading options to allow taxpayers to deal with irregular income in a fair basis with more regular income.

This is also the problem with capital gains tax. And it's not the people who have the kind of income that avoids regular income taxation that are getting screwed by that.


> Yes, because that's a transfer to different people. That's not a problem

Sorry, I don't agree. The tax has already been paid when the person was alive. There shouldn't be an additional tax on top because it is given to others after they died. Which is what is happening.

> The problem is that it's not just treated as income to the recipients—which it manifestly is—with the income tax then being modified to include both advance recognition and windfall spreading options to allow taxpayers to deal with irregular income in a fair basis with more regular income.

The problem wouldn't exist if the tax was abolished.

> This is also the problem with capital gains tax. And it's not the people who have the kind of income that avoids regular income taxation that are getting screwed by that.

Again another case of a problem that wouldn't exist if the tax (capital gains) was abolished.


Tax is a means to an end (paying for civil services). Whether or not something is taxed twice is not inherently wrong, it's just a choice on how we choose to pursue our needs in a way that is effective and equitable.

I mean, reductively, saying something can't be taxed twice doesn't make any sense because all taxes work like that. A company sells products, those sales (and/or value add) are taxed. That money is paid as income, then that income is taxed. That income is spent on goods or services, where the sale (and/or value add) is taxed. Ad infinitum.

A reasonable tax on inheritance, growing with wealth, makes sense in a society that has no effective wealth caps. Otherwise the "haves" accumulate wealth, which accumulates wealth, which accumulates wealth. By imposing a tax on wealth that is not earned, but entirely dependent on the circumstances of one's birth, you create a redistribution scheme that's... Quite fair?

No living person has their labor stolen, some redistribution is achieved, but the heir still receives a significant benefit.


> Tax is a means to an end (paying for civil services). Whether or not something is taxed twice is not inherently wrong, it's just a choice on how we choose to pursue our needs in a way that is effective and equitable.

Well, in the UK, the civil services are crap, the police don't do anything, the NHS waiting times are extensive (my mother has been waiting over 2 years for knee surgery), the roads are full of potholes, and we have more admirals than warships.

So the money doesn't seem to be used effectively. I don't know what you mean by equitable.

> I mean, reductively, saying something can't be taxed twice doesn't make any sense because all taxes work like that. A company sells products, those sales (and/or value add) are taxed. That money is paid as income, then that income is taxed. That income is spent on goods or services, where the sale (and/or value add) is taxed. Ad infinitum.

It's almost like the tax man takes at every opportunity. Pointing out that they tax you many times isn't a justification for more taxes.

> A reasonable tax on inheritance, growing with wealth, makes sense in a society that has no effective wealth caps. Otherwise the "haves" accumulate wealth, which accumulates wealth, which accumulates wealth.

I don't think it is moral or fair to tax beneficiaries of inheritance. It is essentially a gift from the deceased to the beneficiaries.

That's the entire point of building up an inheritance for your family/beneficiaries: you hope to leave your children better off. I don't know what is fundamentally wrong with building up wealth generationally.

> By imposing a tax on wealth that is not earned, but entirely dependent on the circumstances of one's birth, you create a redistribution scheme that's... Quite fair?

No it isn't fair. The wealth was earned at some point in time, presumably legally. I don't understand why it matters that the person receiving it may have done nothing more than been a family member, family friend or even someone/some organisation that the deceased thought was deserving? When they were alive it was their choice who would receive upon death.


> So the money doesn't seem to be used effectively.

That's not what we're discussing. Nor is it even the country we're discussing? The Netherlands has the second highest quality of life in the world.

> It's almost like the tax man takes at every opportunity.

Taxes are a requirement of any functional nation. This just sounds like you have no intention of having a real discussion on tax policy.

> It is essentially a gift from the deceased to the beneficiaries.

Yes, that is what inheritance is. And gifts are taxed. At a higher rate than inheritance!

> I don't know what is fundamentally wrong with building up wealth generationally.

Oh, please, don't straw man me.

Nothing is wrong with generational wealth. Looking at the US, you can see how important it is for social mobility, directly affecting the outcomes of minority communities for decades to even centuries. And through systems like the private healthcare and nursing industries, how it's being targeted to extract every last cent out of American citizens before they die and can hand it off to their loved ones.

But are you seriously pretending you don't know what's wrong with a forever growing wealth inequality? Because inheritance taxes only meaningfully apply to the wealthy. We aren't talking working class folk here.

Are we supposed to wonder how they will ever survive on a mere €820.000 that they did nothing to earn? Despite the fact that being raised by someone with that kind of wealth statistically implies they'll also be inheriting things like property, and that they will have a more stable upbringing, with a better education and opportunities their working-class peers would never get.


Most of the money circulating in the economy has been taxed many many times. The money your employer pays you has been taxed, the money that employer got from its customers was taxed, the money those customers used was their salary, which was taxed. That is such a stupid argument to make. It would mean that we should only have one tax at the root of money itself.


Paying for stability is not a problem.


What? I'm talking about the Netherlands, as is extremely obvious from the context. Read the thread you are replying to...

And I said politicians are talking about changing the laws. There are upcoming elections. Why are you talking about the current laws? Completely irrelevant. I very clearly said:

> There are plans for making mortgages more expensive by removing tax deductions and for taxing inheritance a whopping 75%

You could just google it, but I'll do it for you: https://www.metronieuws.nl/geld-carriere/belasting/2025/07/7...


I described the law and linked to a government website of the Netherlands.

> And I said politicians are talking about changing the laws.

No, you said "there are plans". What you did not say is that one party leader made an insane comment in a campaign cycle that was met with so much backlash it took less than a day to radically change to "with 0% under €500.000", and that it has still been universally mocked by opinion makers, politicians, and media.


Buddy, I'm Dutch. I am following local politics, and clearly, you're not.

> I described the law and linked to a government website of the Netherlands.

No shit? How is the current law relevant to plans to change it?

> What you did not say is that one party leader made an insane comment in a campaign cycle that was met with so much backlash it took less than a day to radically change to "with 0% under €500.000", and that it has still been universally mocked by opinion makers, politicians, and media.

Yeah, so he's planning it. For now they seem to have changed their campaign surrounding the subject but there's no saying what they'll do if elected.


Buddy, that's not really what "there are plans to" means. Unless you count MJT talking about Jewish space lasers as the US government having "plans to" investigate Jewish space lasers. In which case, you've got a tabloid-level definition.

The quote of the current law is used to emphasize just how ridiculous what you're saying sounds.


Do you not understand that politicians can change laws?


This is exactly what I mean in my first comment, you clearly aren't here in good faith. Or will you say, straight-faced, that you genuinely believe that the government is actually going to be implementing this?


I am saying straight faced that there are politicians with plans to implement this. Which is what I said before, and it's factually true.

Whether they'll succeed is another matter. I hope they don't.


> Might have been part of the "Tegenlicht" series.

Consistently the best thing on TV.


No. Anyone can register an LLC in any of these places. They have minimal filing requirements going forward, too.

You may be required to have a local agent, and they will add their address and names as the nominee shareholders so you remain anonymous. Then, with an LLC, the company can open bank accounts and you can move money. Any money made offshore isn't taxed locally.

No different to Delaware.


There are 2 major reasons why people choose to use layers of corporations in other countries: tax minimization (in their domestic country) and obscuring the asset-owner relationship.

The latter is used by corrupt politicians and oligarchs (extremely wealthy people who have massive influence on policy/politics), and to stifle civil investigations (divorce) and criminal investigations (political corruption, sanctions evasion, fencing for thieves), and as a convenient vehicle for transactions or large assets so governments/oversight can't easily track them.

There is a minimum overhead required (you need at least a part time CPA and attorney to give you the strategy, more if they actually implement it), but I don’t think it requires you be ultra wealthy. The problem is that most law-abiding, non-sociopath people don’t benefit much from avoiding the law.


> same group that did the Panama Papers

Isn't the ICIJ "just" a network of people already doing investigative journalism who work on this stuff anyway? As in, it's just a place where investigative journalists can meet/discuss investigations that cross borders? My impression was that they were just normal journalists, and that groups then formed and disbanded based on interest within this network.

I certainly love their work, and I think a network like that is very important (we should probably have something similar for software developers/IT people across Europe); calling them a group seems wrong given my understanding, though.


It's not one or the other, it's both. They both do original reporting on their own and act as a loose network of smaller investigative journalism organizations. It truly depends on the task at hand, but it's usually very useful to get some of the local investigative journalists involved, as they're the ones that both understand the language and are able to put the leaked data into context. Usually ICIJ is the one that publishes the English version of the story and their local partner(s) publish the same thing in the local language.

Actually, I'd say there's three such networks: ICIJ, GIJN and OCCRP. They're not really competitors, each serving slightly different purposes, and there's plenty of collaboration and overlap between their members.

Source: I'm not in that world anymore, but I knew about Panama Papers long before it was public and have my name in the credits in some of the collaborations with ICIJ.


True, but they also tend to be papers that can't consistently rely on advertising, precisely because of their investigations (publishing the deeds of corporate billionaires is a great way to not get ads placed for whatever the billionaire sells.)

So, at least subscribe to one of the papers!

(I know, sorry HN, I asked people to pay for a service that could be financed by ads to gather more data - aka more food for the algo. Sorry.)


> we should probably have something similar for software developers/IT people across Europe

what do you mean by that? support by doing data journalism? more like hacktivism?


I'm thinking something much more boring than that. Basically I think we could have a conference where a participant could share some problem they are having that they believe generalizes across other participants (maybe some regulatory reporting requirement, or maybe some question about implementing AI in sharepoint or whatever) and the other participants could then sign up for further discussion and participation in creating some software systems that would solve the problem.

I'm thinking of something akin to FRONTEX JADs or that conference the EU has where lawyers show up to discuss cases they need EU assistance with.

Concretely: my company recently did a migration into Azure. If we were at a meeting like that and somebody said they were planning to do the same, we would have a bunch of experience to share with them.

I imagine that could maybe help foster some shared European understanding about what our big tech problems are. Maybe we could even let the solutions end up as open source or something.


I guess ‘democratize’ means something different to you than me. Most Americans don’t have any substantial savings whatsoever.


> Most Americans don’t have any substantial savings whatsoever.

The median household net worth is ~200k.


How much of that is their house?


Generally around 90% or so.


> Surely it’s a good startup opportunity.

I don't think there are many VCs willing to invest into this.


To be honest, making a business out of this might be the best way to convince regulators to close the loopholes. You could even devote N% of the revenue to closing the loopholes. There’s DEFINITELY a large moral hazard though, it would be very easy to lose your soul, or to be kicked out by the board in favor of a more malleable leader :/

Edit: after reading the article, I realize most of the "loopholes" to be changed are in disparate countries, not the _source_ country. This makes the whole idea less attractive. I suppose you could potentially still get rid of the anonymous-representative option by which people conceal their connection to different assets.


Technically, if the VC and the startup are located in the blacklisted country, I don't see why not. Basically, we would trigger a DNS witch hunt like the one against torrent sites, but it would still be possible. I assume the "elites" would not be too happy and would do everything in their power to stop it?


Don't forget the hard work of Congress nicely threading in the loopholes, and the Supreme Court!


100% how our family operates.

Advertising intensely to us is the absolute best way to lose us as a customer.


There is a tremendous lack of understanding between Gen X and millennial teachers, the way they see and use AI, and how younger people are using it.

Kids use AI like an operating system, seamlessly integrated into their workflows, their thinking, their lives. It's not a tool they pick up and put down; it's the environment they navigate, as natural as air. To them, AI isn't cheating—it's just how you get things done in a world that's always been wired, always been instant. They do not make major life decisions without consulting their systems. They use them like therapists. It's far more than a Google replacement or a writing tool already.

This author's fixation on "desirable difficulty" feels like a sermon from a bygone era, steeped in romanticized notions of struggle as the only path to growth. It's yet another "you can't use a calculator because you won't always have one" — the same tired dogma that once insisted pen-and-paper arithmetic was the pinnacle of intellectual rigor (even after calculators arrived, and they have in fact been with us every day since).

The Butlerian Jihad metaphor is clever but deeply misguided: casting AI as some profane mimicry of the human mind ignores how it's already reshaping cognition, not replacing it.

The author laments students bypassing the grind of traditional learning, but what if that grind isn’t the sacred rite they think it is? What if “desirable difficulty” is just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers?

The reality is, AI’s not going away, and clutching pearls about its “grotesque” nature won’t change that. Full stop.

Students aren’t “cheating” when they use it… they’re adapting to a world where information is abundant and synthesis is king. The author’s horror at AI-generated essays misses the point: the problem isn’t the tech, it’s the assignments (and maybe your entire approach).

If a chatbot can ace your rhetorical analysis, maybe the task itself is outdated, testing rote skills instead of real creativity or critical thinking.

Why are we still grading students on formulaic outputs when AI can do that faster?

The classroom should be a lab for experimentation, not a shrine to 19th-century pedagogy, which it most definitely is. I was recently lectured by a teacher about how he tries to make every one of his students a mathematician, and he became enraged when I gently asked him how he's dealing with the disruption to mathematicians as a profession that AI systems are currently causing. There is an adversarial response underneath a lot of teachers' thin veneers of "dealing with the problem of AI" that is just wrong, and such a cope.

That obvious projection leads directly to this "adversarial" grading dynamic. The author's chasing a ghost, trying to police AI use with Google Docs surveillance or handwritten assignments. That's not teaching. What it is is standing in the way of civilizational progress because it doesn't fit your ideas. I know there are a lot of passionate teachers out there, and some even get it, but most definitely do not.

Kids will find workarounds, just like they always have, because they’re not the problem; the system is. If students feel compelled to “cheat” with AI, it’s because the stakes (GPAs, scholarships, future prospects) are so punishingly high that efficiency becomes survival.

Instead of vilifying them, why not redesign assessments to reward originality, process, and collaboration over polished products? AI could be a partner in that, not an enemy.

The author's call for a return to pen and paper feels like surrender dressed up as principle, and it's ridiculously out of touch.

It’s not about fostering “humanity” in the classroom; it’s about clinging to a nostalgic ideal of education that never served everyone equally anyway.

Meanwhile, students are already living in the future, where AI is as foundational as electricity.

The real challenge isn’t banning the “likeness bots” but teaching kids how to wield them critically, ethically, and creatively.

Change isn’t coming. It is already here. Resisting it won’t make us more human; it’ll just leave us behind.

Edit: sorry for so many edits. Many typos.


ChatGPT is only 2.5 years old. How are kids using AI like it's always been around? I really hope they aren't making major life decisions by consulting chatbots from big tech companies instead of their relatives, teachers and friends. I'm old enough to recall when social media was viewed as this incredibly positive tech for humanity. How things have changed. One wonders how we'll view the impact of AIs in a few years.


I teach Enterprise Architecture at the graduate level. I would absolutely not mind people using AI as an OS or an information source or a therapist. I would not mind them looking things up in an encyclopedia, so why mind them using AI?

What I do mind is:

- the incredibly generic slop AI generates ("let's improve communication, make a better strategy, improve culture")

- the unwavering belief in AI. I tell my students why using AI will not give them a good grade. They get a case solved by all major LLMs, graded, with thorough feedback and a bad grade. I tell them that literally writing anything at all as the answer would not get a much worse grade. And still they go and use AI and get bad grades.

- the incredible intellectual laziness it seems to foster. I criticize TOGAF in my course (let's not get into that) and explicitly state it to be outside of the course material. Repeatedly, in writing and verbally. And what do the students do? They ask an LLM, which inevitably starts referring to TOGAF. And the answer is copied into the case analysis without even an attempt to actually utilize TOGAF or to justify the choice made.

My students actually get worse grades and are worse off in terms of being able to solve actual real-life problems, because they use AI. Getting a degree should increase their intellectual capabilities, but people actively choose not to let it, thus wasting their time. And that's what I'm not OK with.


How do you test "real creativity" and "critical thinking" in a way that is both scalable and reliably tells apart those who get it and those who don't?


It's interesting to note that your comment and my comment ended up right at the end, having been downvoted, with no downvoters commenting on why they disagree with you, or my, points.

I assume it's because many of the commenters of this post are skewed towards academia, and perhaps view the disruption by AI to the traditional methods of grading student work as a challenge to their profession.

As we have seen many times throughout history, when disruptive forces of technical or demographic changes or a new set of market forces occurs, incumbents often struggle to adapt to the new situation.

Established traditional education is a massive ship to turn around.

Your comments contain much food for thought and deserve to be debated. I agree with you that educators should not be branding students as cheaters. Using AI in an educational context is a rational and natural thing to do, especially for younger students.

> ... AI as some profane mimicry of the human mind ignores how it’s already reshaping cognition, not replacing it.

- Yes, this is such an important point and it's why we need enlightened policy making leading to meaningful education reform.

I do disagree with you about incorporating more pen and paper activities - I think this would provide balance and some important key skills.

No doubt AI is challenging to many areas of society, especially education. I'm not saying it's a wonderful thing that we don't need to worry about, but we do need to think deeply about its impacts and how we can harness its positive strengths and radically improve teaching and learning outcomes. It's not about locking students in exam rooms with high tech surveillance.

With AI it's disappointing that the prevalent opinions of many educators are seemingly stuck and struggling to adapt.

Meanwhile society will move on.

Edit: good to see you got a response!


Decades of research into learning shows that "desirable difficulty" is not, as you put it, "just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers." Rather, difficulty means you are encountering things you do not already understand. If you are not facing difficulties then your time is being wasted. The issue is that AI allows people to avoid facing difficulties and thus allows them to waste their time.

You think we will make progress by learning to use AI in certain ways, and that assignments can be crafted to inculcate this. But a moment's acquaintance with people who use AI will show you that there is a huge divide between some uses of AI and others, and that some people use AI in ways which are not creative and so on. Ideally this would prompt you to reflect on what characteristics of people incline them towards using AI in certain ways, and what we can do to promote the characteristics that incline people to use AI in productive and interesting ways, etc. The end result of such an inquiry will be something like what the author of this piece has arrived at, unfortunately. Any assignment you think is immune to lazy AI use is probably not. The only real solution is the adversarial approach the author adopts.


Office is so 20th century.


While I enjoyed the article, it’s just another in a line of the same article with different flavors and authors that all have the same fundamental error.

The prevailing counterargument against AI consistently hinges on the present state of AI rather than the trajectory of its development: but what matters is not the capabilities of AI as they exist today, but the astonishing velocity at which those capabilities are evolving.


We've been through this song and dance before. AI researchers make legitimately impressive breakthroughs in specific tasks, people extrapolate linear growth, the air comes out of the balloon after a couple years when it turns out we couldn't just throw progressively larger models at the problem to emulate human cognition.

I'm surprised that tech workers who should be the most skeptical about this kind of stuff end up being the most breathlessly hyperbolic. Everyone is so eager to get rich off the trend they discard any skepticism.


This is confusing. We've never had a ChatGPT-like innovation before to compare to. Yes, there have been AI hype cycles for decades, but the difference is that we now have permanent, invaluable, society-changing tools out of the current AI cycle, combined with hundreds of billions of dollars being thrown at it, a level of investment we've never seen before. Unless you're on the bleeding edge of AI research yourself, or one of the people investing billions of dollars, it is really unclear to me how anyone can have confidence about where AI is not going.


Because the hype will always outdistance the utility, on average.

Yes, you'll get peaks where innovation takes everyone by surprise.

Then the salesbots will pivot, catch up, and ingest the innovation into the pitch machine as per usual.

So yes, there is genuine innovation and surprise. That's not what is being discussed. It's the hype that inevitably overwhelms the innovation, and also inevitably pollutes the pool with increasing noise. That's just human nature, trying to make a quick buck from the new-hotness.


I don't agree with this.

There's a big difference between something that benefits productivity versus something that benefits humanity.

I think a good test of whether it has genuinely changed society is to imagine all gen AI disappearing overnight. I would argue that nothing would really fundamentally change.

Contrast that with the sudden disappearance of the internet, or the combustion engine.


Work doesn't benefit humanity, work is the chains that keep us living the same day over and over til we die.

Your idea of benefit to humanity clearly doesn't involve the end of work, mine does.

AI can end work for most of us, but that has to be what we want. We can't keep limiting it because of stupid reasons and then expect it to have all the answers as if it weren't limited; that's silly.

If AI disappeared tonight so too would the future where nobody works in a call center or doing data entry or making button graphics to client exact specifications for a website nobody will ever see.

This is the Old World we live in right now - I don't want it to stay.


Work is what gives us purpose and meaning. Or do you want to live in WALL-E world?

There is no long-term happiness without struggle and mastery.

It sounds like what you want is an end to menial labor that is treated poorly. Why confuse that with work?


> I would argue that nothing would really fundamentally change.

I argue that there would be a huge collective sigh of relief from a large number of people. Not everybody, maybe not even a majority, but a large number nonetheless.

So I think it has changed society -- but perhaps not for the better overall.


It will take time, though; if the internet had completely disappeared in the mid '90s, nothing would have fundamentally changed.


Wow. Just the fact that the Internet existed at the library was enough for me to know, as a child, that I could learn anything - once we got Internet access in '95 and a Win 95 PC, everything changed for me. I was quite at home in the online world by Win 98.

My entire worldview and daily life habits would have changed.

You must be older than me.


I don't mean it would have no impact; it's just that we hadn't reorganized society around it yet.


Two things can both be true. I keep arguing both sides because:

1. Unless you're aware of near-term limits, you think AI is going to the stars next year.

2. Architectures change. The only thing that doesn't change is that we generally push on; temporary limits are usually overcome, and there's a lot riding on this. It's not a smart move to bet against progress over the medium term. This is also where the real benefits and risks lie.

Is AI in general more like going to space, or string theory? One is hard but doable. The other is a tar pit for money and talent. We are all currently placing our bets.


Point 2 is the thing that I think is most important to point out:

"architectures change"

Sure, that's a fact. Let me apply this to other fields:

"There could be a battery breakthrough that gives electric cars a 2,000-mile range." "Researchers could discover a new way to build nanorobots that attack cancer directly and effectively cure all versions of it." "We could invent a new sort of aviation engine that is 1,000x more fuel efficient than the current generation."

I mean, yeah, sure. I guess.

The current hype is built on LLMs - being charitable, "LLMs built with the current architecture." There are other things in the works, but most of the current generation of AI hype rests on a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try to corral them into behaving as we hope. It is much more realistic to expect that we are in a period of diminishing returns on these approaches than to believe we'll continue to see earth-shattering growth. Nothing has appeared that had the initial "wow" factor of the early versions of Suno, or GPT, or DALL-E, or Sora, or whatever else.

This is clearly and plainly a tech bubble. It's so transparently one, it's hard to understand how folks aren't seeing it. All these tools have been in the mainstream for a pretty substantial period of time (relatively), and the honest truth is they're just not moving the needle in many industries. Their most frequent practical application has been summarization, editing, and rewriting, which is a neat little parlor trick - but all the same, it's indicative of the fact that they largely model language, so that's primarily what they're good at.

You can bet on something entirely new being discovered... but what? There just isn't anything in the real world inching closer to the general AI hype we're all hearing about. I'm sure folks are cooking on things, but that doesn't mean they're near production-ready. Saying "this isn't a bubble because one day someone might invent something that's actually good" is kind of giving away the game - the current generation isn't that good, and we can't point to the thing that's going to overtake it.


> most of the current generation of AI hype rests on a limited number of algorithms and approaches, mixed and matched in different ways, with other features bolted on to try to corral them into behaving as we hope. It is much more realistic to expect that we are in a period of diminishing returns on these approaches than to believe we'll continue to see earth-shattering growth.

100% agree, but I think those who disagree with that are failing on point 1. I absolutely think we'll need something different, but I'm also sure that there's a solid chance we get there, with a lot of bracketing around "eventually".

When something has been done once before, we have a directional map and we can often copy fairly quickly. See OpenAI to Claude.

We know animals are smarter than LLMs in the important, learning day-to-day ways, so we have a directional compass. We know the fundamentals are relatively simple, because randomness found them before we did. We know it's possible; we're just figuring out whether it's possible with anything like the hardware we have now.

We don’t know if a battery like that is possible - there are no comparisons to make, no steer that says “it’s there, keep looking”.

This is also the time in history with the most compute capacity coming online and the most people trying to solve it. Superpowers, hyperscalers, all the universities; many people in areas as diverse as neuro and psych who wouldn't have looked at the domain 5 years ago are now very motivated to be relevant, to study or build in related areas. We've tasted success. So my opinion is based on our having made progress, the emerging understanding of what it means for individuals and countries in terms of the competitive landscape, and the desire to be a part of shaping that future rather than having it happen to us. ~Everyone is very motivated.

Betting against that just seems like a risky choice. Honestly, what would you bet, over what timeframe? How strongly would you say you’re certain of your position? I’m not challenging you, I just think it’s a good frame for grounding opinions. In the end, we really are making those bets.

My bands are pretty wide. I can make a case for 5 years to AGI, or 100 years. Top of my head without thinking, I’d put a small amount on 5 years, all my money on within 100, 50% or more within 20/30.


In 100 years the air may not be breathable, much less have enough CO2 carrying capacity for silicon-based AGI.


So I’d lose the bet? I’m not sure I’d be any worse off than those winning it!


The bet itself would make Earth less hospitable on a long shot. It's like shredding a winning lottery ticket in the hopes the shreds will win an even bigger lottery someday in the future.


There is another level to AI and how we fundamentally structure them that nobody is doing yet, to my knowledge. That next round of innovation is fundamentally different from the innovation that is the focus now - nobody is looking to the next stage because this one hasn't achieved what we expected - because it won't.

I suspect that future iterations of AI will do much better, though.


> There is another level to AI and how we fundamentally structure them that nobody is doing yet to my knowledge

And that is...?


Another reply, different thought. I'd be keen to see what e.g. Carmack is up to - someone outside of the usual suspects. There is a fashion to everything, and right now LLMs are a distraction on an S-curve. The map is not the territory, and language is a map.


One problem is that people assume the end goal is to create a human-cognition-capable AI. I think it's pretty obvious by this point that that's not going to happen. But there is no need for that at all to still cause a huge disruption; let's say most current roles that benefit from AI (copilot, writing, throwaway clipart, repetitive tasks, summarizing, looking up stuff, etc.) lead not even to job losses but to fewer future jobs created - what does that mean for the incoming juniors? What does that mean for the people looking for that kind of work? It's not obvious at all how big a problem that will create.


> human-cognition-capable AI. I think it's pretty obvious by this point that that's not going to happen

It's obvious to some people but that's not what many investors and company operators are saying. I think the prevailing message in the valley is "AGI is going to happen" for different values of when, not if. So I think you'd be forgiven for taking them at face value.


Just like nuclear fusion, right? "When" will always be some time after the next fundraising round.


Right, but the breathless technobabble about the future of our AI-driven world crowds out actual consideration of these important topics.


It's like con artists and management consultants. They are the most susceptible because they drink the koolaid.


I think the mistake is that the media extrapolates linear growth, but in practice it is a wobbly path. And this wobbly path allows anyone to create whatever narrative they want.

It reminds me of seeing headlines last week that NVDA was down after investors lost faith following the last earnings. Then you look at the graph, and NVDA is only about 10% off its all-time high and still in and out of being the most valuable company in the world.

Advancement is never linear. But I believe AI trends will continue up and to the right, and even in 20 years, when AI can do remarkably advanced things that we can barely comprehend, there will be internet commentary about how it's all just hype.


You articulated my view perfectly. I just don't get the buy-in from people who should know better than to trust VC-funded talking heads.


>> I'm surprised that tech workers ... end up being the most breathlessly hyperbolic.

We're not.


> people extrapolate linear growth

You mean exponential! No one gets out of bed for linear these days.


This comment summarizes my thoughts in the best way.


There's a reason why so many of the people on the crypto grift in 2020-2022 have jumped to the AI grift. Same logic of "revolution is just around the corner", with the added mix of AGI millenarianism which hits a lot of nerds' soft spots.


> The prevailing counterargument against AI consistently hinges on the present state of AI rather than the trajectory of its development: but what matters is not the capabilities of AI as they exist today, but the astonishing velocity at which those capabilities are evolving.

No, the prevailing counterargument is that the prevailing argument in favor of AI taking over everything assumes the acceleration will remain approximately constant, when in fact we don't know that it will, and we have every reason to believe it won't.

No technology in history has ever maintained an exponential growth curve for very long. Every innovation has followed the same pattern:

* There's a major scientific breakthrough which redefines what is possible.

* That breakthrough leads to a rapid increase in technology along a certain axis.

* We eventually see a plateau where we reach the limits of this new paradigm and begin to adapt to the new normal.

AI hypists always talk as though we should extrapolate the last 2 years' growth curve out to 10 years and come to the conclusion that General Intelligence is inevitable, but to do so would be to assume that this particular technological curve will behave very differently than all previous curves.

Instead, what I and many others argue is that we are already starting to see the plateau. We are now in the phase where we've hit the limits of what these models are capable of and we're moving on to adapting them to a variety of use cases. This will bring more change, but it will be slower and not as seismic as the hype would lead you to believe, because we've already gotten off the exponential train.
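The curve-shape point can be made concrete with a toy numerical sketch: a logistic (S-shaped) curve is nearly indistinguishable from a pure exponential in its early phase, which is why extrapolating from the early data misleads. The parameters below are illustrative only, not a model of AI progress:

```python
import math

# Toy comparison: pure exponential growth vs. a logistic (S-curve)
# with the same initial value x0 and intrinsic growth rate r.
# K is the logistic curve's carrying capacity (the plateau).
K, r, x0 = 1000.0, 0.5, 1.0

def exponential(t):
    # Unbounded exponential: x(t) = x0 * e^(r*t)
    return x0 * math.exp(r * t)

def logistic(t):
    # Standard logistic solution: x(t) = K / (1 + (K/x0 - 1) * e^(-r*t))
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Early on the two are almost identical; later the logistic saturates at K
# while the exponential keeps climbing.
for t in (0, 2, 4, 8, 12, 16):
    print(f"t={t:2d}  exp={exponential(t):10.1f}  logistic={logistic(t):8.1f}")
```

Run it and the two columns track each other closely for small t, then diverge sharply as the logistic flattens toward K; an observer sampling only the early points has no way to tell which curve they are on.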


AI hypists come to the conclusion that general intelligence is inevitable because they know the brain exists and they are materialists. Anyone who checks those two boxes will conclude that an artificial brain is possible, and therefore AGI is as well. With the amount of money being spent, it's only a matter of when.


> With the amount of money being spent, it's only a matter of when

Yes, but there's no strong reason to believe that "when" is "within fewer than 1000 years". What's frustrating about the hype is not that people think the brain is replicable, it's that they think that this single breakthrough will be the thing that replicates it.


Moore's law is still going as far as I'm aware - there may have been a clarification of sorts recently, but it's kept up exponentially rather well despite everyone knowing that it can't do that.


Moore's law would improve the speed of LLMs and allow them to grow in size, but in recent weeks [0] it's become apparent that we're hitting the limit of "just make them even bigger" as a viable strategy for improving the intelligence of LLMs.

I'm excited for these things to get even cheaper, and that will enable more use cases, but we're not going to see the world-changing prophesies of some of AI's evangelists in this thread come true by dint of cheaper current-gen models.

[0] https://news.ycombinator.com/item?id=42125888


But we don't know if AI development is following an exponential or sigmoid curve (actually we do kind of know, now, but that's beside the point for this post).

A wise institution will make decisions based on current capabilities, not a prognostication.


If investors didn't invest based on expected future performance, the share market would look completely different than it actually does today. So, I can't understand how anyone can claim that.


All S-curves look exponential at some point.

It was unclear if the current wave of AI would be an exponential, or for how long, or if it would end up just being another S-curve. The potential upside hooked a lot of people into action on the VC-maths of "it doesn't matter if it's unlikely, because the upside is just too good".

It is now becoming clear, however, that we aren't getting AGI. What we have now is fundamentally what we're likely to have in 5-10 years' time. I do think we'll find better uses, figure our shit out, and have much more effective products in that time - I think we're entering the "LLM era" in much the same way as the 2010s were the smartphone era that redefined a lot of things. But still, in the same way, a phone of ~2010 isn't materially different from a phone of ~2020; they're still just internet-connected, location-aware interfaces to content and services.


But you could also say: the prevailing argument for AI consistently hinges on the (imagined, projected based on naive assumptions) trajectory of AI rather than the present state.

> the astonishing velocity at which those capabilities are evolving.

This is what is repeated ad nauseam by AI companies desperate for investment and hype. Those who’ve been in the game since before this millennium tend not to be so impressed — recent gains have mostly been due to increased volume of computation and data with only a few real theoretical breakthroughs.

Laymen (myself included) were indeed astonished by ChatGPT, but it’s quite clear that those in the know saw it coming. Remember that those who say otherwise might have reasons (other than an earnest interest in the truth) for doing so.


I honestly believe this specific case is a Pareto situation where the first 80% came at breakneck speeds, and the final 20% just won't come in a satisfactory way. And the uncanny valley effect demands a percentage that's extremely close to 100% before it has any use. Neural networks are great at approximations, but an approximate person is just a nightmare.


What is your time horizon? We're already at a date where people were saying these jobs would be gone. The people most optimistic about the trajectory of this technology were clearly wrong.

If you tell me AI newscasters will be fully functional in 10 or 15 years, I'll believe it. But that far in the future, I'd also believe news will be totally transformed due to some other technology we aren't thinking about yet.


Who gives a shit about AI newscasters?

AI allows us to see everything we track the data of right now - and to see it in a useful way and in real time. It also means all the tedious and repetitive tasks done by everyone no longer need to be done by anyone - creating a static webpage, graphics for a mobile app, a mobile app itself, game development - all of those are easier to do than they ever have been.

AI isn't for millennials or even Gen Z - it's for Gen Alpha; they will be the first to truly understand what AI is and to use it as it will be used forever after. Until they adopt it, none of this really matters.


You're declaring you don't give a shit about the current topic. Why are you here?


The prevailing argument in favor of investing in AI is its potential.

The prevailing argument against using AI is its current lack of capability.

Those things are inherently in tension. Think of it as hiring a new employee straight out of undergrad: you are hiring them largely based on the employee they will become, with increasing expectations over time balanced against increasing variability in outcomes over time. However, if one year in that employee continues to suck at their current job, their long-term potential doesn't really matter. More so, the long-term potential is no longer credibly evidenced, given the inability to progress at the current job.

This is an investment gone bad in the current state of things. It doesn't matter what might happen; it matters what did. The investment was made based on the perception of astonishing velocity, and it seems we may need to recalibrate our speedometers.


It's getting old but there's an xkcd for your kind of reasoning:

https://xkcd.com/605/


Isn't this essentially the same argument as "there are only 10 covid cases in this area, nothing to worry about"?

It's really missing the point: the point is whether or not exponential growth is happening. It doesn't with husbands; it does with covid; time will tell with AI.


No, because as you rightly point out we know exponential growth is very possible with Covid but we don't know if that will happen with AI.

In fact, the only evidence we have for exponential growth in AI is the word of the people selling it to us.


The graph depicts linear growth, not exponential. And neither is the growth of AI.


Transformers have been around for 7 years, ChatGPT for 2. This isn't the first few samples of what could be exponential growth. These are several quarters of overpromise and underdelivery. The chatbot is cool, and it outperforms what awful product search has become. But is it enough to support a 3.5-trillion-dollar parts supplier?


It amazes me how excited people are to see their livelihoods destroyed. I'm retired, but people designing AI in their 20s will be unemployed in a decade. Good luck, dudes and dudettes; you're fucking yourselves.


Bluesky feels like a cocktail party where half the guests are knitting sweaters for their cats, the other half are debating how to save democracy, and I’m just standing there wondering how I got invited.

It’s like scrolling through a group chat where everyone forgot the topic but kept texting anyway.

Honestly, it’s impressive how they’ve managed to create a platform that feels simultaneously too niche and too random.

Idk, I spent a few hours over a few days trying to find something cool about it, and couldn’t.


The astro pictures are neat. Everything else ... less so.


Not a movie I would not see.


I feel tragically unsupported by a member of the HN family


We still love you.


Wait, I just noticed the double negative. I choose to believe that means you support my life quest explicitly.

