Hacker News

When I see pro-AI promoters giving weird analogies that do not fit, I get the feeling that they are either management class (aka MBA types) or not serious professionals.

One key point we must first understand: coding is NOT software engineering, or even programming! Writing code is the last bit, a minimal fraction of the job description (unless you are an indie dev or working for a consulting firm). The core tasks are untangling the numerous vague requirements, understanding the domain, figuring out the best approaches, performing various tests and checks, validating ideas, figuring out a cost-effective solution, preparing a rough architecture, deciding on an actual set of tech, and aligning a horde of people so that everyone is on the same page. Only then does the coding start.

My IDE has been able to read my mind via auto-suggestion for many years, and patterns/frameworks exist to reduce the amount of code I need to write. The issue is that, with these AI models, I just need to abuse my fingers slightly less. The other core duties are not yet solved and remain the same archaic procedures everywhere, in any and all serious roles.

And speaking of consulting firms, they are also clever and often have several implementations of the same stuff, which they can modify a bit and sell for big money.

So in the end, the people who jump into the pit because they are afraid of the juju mask are the prime target of the juju mask. For the rest of us, life goes on, with minor bumps when the MBA comes to the desk and asks whether it is possible to lay off a few people to jack up the stock price this quarter yet… while subscribing to that new agentic engineer product suite for double the fees of what the laid-off people actually cost, because their best friend at the golf club said the price will eventually become reasonable but the benefits are immediate.



Apparently there's a quote attributed to Bill Gates: "people overestimate what they can do in one year and underestimate what they can do in 10 years."

People overestimate the changes that will happen within a couple of years, and totally underestimate the changes that will happen over decades.

Perhaps it's a consequence of change having some kind of exponential behavior. The first couple of years might not feel like anything in absolute terms, but give it 10 or 20 years and you'll see a huge change in retrospect.

IMHO, no one needs to panic now. Changes happen fast these days, but I don't think things are going to change drastically in the next ~2-3 years. The world is probably going to look very different in 20 years, though, and in retrospect that will be attributed to seeds planted in these couple of years.

In short I think both camps are right, except on different timescales.


I agree, but I think the change will come within 5 to 10 years.

Anecdotally, the amount of hype and interest has been growing exponentially. This will push progress to a maximal pace. The next 10 years will be significantly faster than the last 10.


I agree with your first part, but I think you are vastly underestimating how much writing code is a part of programming. I also think that in these discussions people, ironically, really overestimate how much “untangling requirements” is part of the day-to-day for the majority of programmers. There is obviously some of that, but unless you are talking about consultants who interact directly with customers, a lot of it is done at the product or project management level. You’d be surprised how much programming there is in programming.


Erm, no: coding happens at the genesis of the product. Improvement, adjustments, and maintenance are 99% of its lifetime.

If you are a professional, please tell me how much new code you write versus how much other stuff you do (meetings, alignment, feature planning, system design, benchmarks, bug fixes, releases). For me, the coding:non-coding ratio is around 10:90 in an average week. Some weeks, the only code I write is suggestions on code reviews.


Your dismissal of AI as merely a glorified autocomplete tool correctly acknowledges its current limitations—but it reveals an alarming blindness to the aggressively upward trendline of technological progress. Yes, today’s AI primarily simplifies mechanical coding tasks; your assessment of its present role is accurate. However, your argument dangerously ignores the relentless momentum and historical pattern of breakthroughs clearly indicating what's on the horizon.

Consider the unmistakable trend: In the early 2010s, deep learning fundamentally transformed machine perception, image recognition, and natural language processing, setting new standards far surpassing earlier methods. By 2016, AlphaGo decisively defeated human champions, showcasing unprecedented strategic depth previously assumed beyond AI’s reach. Shortly after, AlphaFold solved the protein-folding problem, revolutionizing computational biology and drug discovery by rapidly predicting complex molecular structures. In parallel, generative adversarial networks (GANs) and diffusion models ushered in a new era of AI-driven image creation, enabling systems like DALL·E and Midjourney to generate strikingly detailed images, surreal artwork, and hyper-realistic visuals indistinguishable from human craftsmanship. AI’s ability to synthesize realistic voices and human-like speech has dramatically improved through innovations like WaveNet and advanced text-to-speech technologies, leading to widespread practical adoption in virtual assistants and accessibility tools.

Beyond imagery and voice, generative AI has also broken new ground in music composition, where models now produce compositions so sophisticated they are difficult to distinguish from professional human creations. Transformer-based models like GPT-3 and GPT-4 represent a seismic shift in language generation, enabling nuanced conversation, creative writing, complex reasoning, and contextual understanding previously believed impossible. ChatGPT further pushed conversational AI into mainstream utility, effortlessly handling complex user interactions, problem-solving tasks, and even creative brainstorming. Recent innovations in AI-driven video generation and editing—demonstrated by advancements like Runway’s Gen-2—indicate rapidly expanding possibilities for automated multimedia creation, streamlining production pipelines across industries.

Moreover, reinforcement learning breakthroughs have expanded significantly beyond gaming, improving complex logistics operations, real-time decision-making systems, and autonomous navigation in robotics and vehicles. The impressive capabilities demonstrated by autonomous driving systems from Tesla and Waymo further underscore AI’s advancing proficiency in real-world environments. Meanwhile, specialized large language models have emerged, demonstrating near-expert performance in fields such as law, medicine, and finance, streamlining tasks like legal research, medical diagnostics, and financial forecasting with unprecedented accuracy and efficiency.

These advances are not isolated phenomena—they represent continuous, accelerating progress. Today, AI assists with summarization, automated requirement analysis, preliminary architecture design, and domain-specific problem-solving. Each year brings measurable improvements, steadily eroding the barrier between supportive assistance and true cognitive engagement.

Your recognition of AI's limitations today is valid but dangerously incomplete. You fail to account for the rapid pace at which these limitations are being overcome. Each "core task" you've identified—domain understanding, requirement analysis, nuanced decision-making—is precisely within AI's increasingly sophisticated reach. The clear historical evidence signals a near-inevitable breakthrough into human-level reasoning within our professional lifetimes.

In disregarding this aggressively upward trendline, you're making the same grave error committed by those who previously underestimated transformative innovations like personal computing, the internet, and mobile technology. Recognizing current limitations without acknowledging clear indicators of impending revolution isn't merely shortsighted—it's strategically negligent.


The funny thing about the current AI hype and the people echoing it is this: if these technologies were as remarkable and promising as you say, then a) they wouldn't need the hype; and b) you'd realize that hyping them would harm your own self-interest, because the rational thing to do would be to leverage them quietly while you have an edge over the non-believers.

Thus the rabid propaganda is great evidence that these technologies are not revolutionary.


Makes sense. So by your logic, everything that was ever hyped failed. A good example is cars and planes: they were once hyped, so now they have failed. No one flies planes or drives cars. Simple logic.


What are you talking about? When were cars and planes hyped?


Oh, you’re right—cars and planes just quietly appeared one day, and people started using them with no excitement whatsoever. No headlines, no grand expositions, no public demonstrations. The Wright brothers must have been so relieved that no one hyped their invention, and Ford was probably grateful that the Model T just organically became a cultural and economic force with zero marketing or public enthusiasm.

Good catch. History is full of revolutionary technologies that succeeded only because everyone kept them a total secret.


Despite your snarky irony, it is clear to me that there is a difference between media attention and hype. Your comparison is ludicrous.


Oh, of course—no one in the past was hyped or excited about airplanes or cars. The Wright brothers took their first flight in complete obscurity, and Henry Ford quietly introduced the Model T with no fanfare whatsoever. No crowds gathered, no newspapers breathlessly covered these developments, and certainly, no one had strong opinions about how these technologies would reshape the world. Just pure, quiet progress.

And as for hype—what else could possibly spread information besides media? In the past, it was newspapers, expositions, and public demonstrations that built excitement. Today, it’s social media amplifying discussions. But surely, widespread interest in a technology today must mean it’s all just empty propaganda, because, as history shows, real groundbreaking innovations only succeed when no one is paying attention.

That’s why the iPhone was a total flop—Apple really should have just quietly released it in a few stores and hoped people figured it out instead of, you know, making a big deal about it.


OK, then go look at the financials and timeframe of Apple's investment in the iPhone and the returns it produced, and then compare them to those of the current wave of investment in "AI".


What do financials have to do with hype? You can have a lot of hype and no financials.



