In my view it's a tool, at least for the moment. Learn it, work out what it works for and what it doesn't work for you on. But assuming you are the professional, they should trust your judgement, and you should also earn that trust. That's what you pay skilled people for. If that tool isn't the best for getting the job done, use something else. Of course that professional should be evaluating tools and assuring us/management (whether by evidence or other means) that the most cost-efficient, quality product is being built, like in any other profession.
I use AI, and for some things it's great. But I feel like they want us to use the "blunt instrument" that is AI when sometimes a smaller, more fine-grained tool, or just handcrafting code for accuracy, is quicker and more appropriate, at least for me. The autonomy window, as I recently heard it expressed.
I think the reason for the negativity in this forum (and other threads I've seen over the past few months) is that people are engaged with AI but deep down unhappy with its direction, even if they are forced to adapt. That negativity, as is common in human nature, spreads to the people who are winning from this. At least that's the impression I'm getting here and elsewhere. The most commented articles on HN these days are about AI (e.g. an OpenAI model, some blogger writing about Claude Code getting 500+ comments, etc.), which shows a very high level of emotional engagement, with the typical offensive and defensive attitudes between people who benefit or lose from this. General old-school software tech articles are drowned out in comparison; AI is taking all the oxygen out of the room.
My anecdotal observation from talking to people: most tech cycles I've seen have hype/excitement, but this is the first one where I've seen a large amount of fear/despair. From loss of jobs, to automating all the "good stuff", to enriching only the privileged, people are worried. As loss-averse animals, fear is usually more effective for engagement, especially when it means losing what came before - people are engaged but, I suspect, negative towards the whole AI thing in general, even if they won't say so on the record. Fear also creates a singular focus: when you are threatened/anxious it's harder to engage with other topics, and it makes you want to see the AI trend fail. That paints AI researchers not just negatively, but almost as changing their own profession/world for the worse, which doesn't elicit a positive response from people.
And for the others, even if they don't have this engagement, the fact that this is drowning out other things can be annoying to some tech workers as well. Other tech talks, articles, research, etc. are just silent in comparison.
YMMV; this is just my current anecdotal observations in my limited circle but I suspect others are seeing the same.
The question is not whether you can or can't, but whether it is still worth it long term:
- Whether there is a moat in doing so (i.e. will people actually pay for your SaaS knowing that they could do it too via AI), and..
- How many large scale ideas do you need post AI? Many SaaS products are subscription based and loaded with features you don't need. Most people would prefer a simple product that just does what they need without the ongoing costs.
There will be more software. The question is who accrues the economic value of this additional software - the SWE/tech industry (incumbent), the AI industry (disruptor?) and/or the consumer. For SWEs/tech workers, it probably isn't what they envisioned when they started in/studied for this industry.
It seems obvious to me it is the consumer who will benefit most.
I had been thinking of buying an $80 license for a piece of software but ended up knocking off a version in Claude Code over a weekend.
It is not even close to something commercial grade that I could sell as a competitor, but it is good enough for me to not spend $80 on the license. The huge upside is that I can customize the software in any way I like. I don't care that it isn't maintainable either. Making a new version with ChatGPT 5 is going to be my first project.
Just like a few hours ago, I was thinking about how I would like to customize the fitness/calorie tracking app I use. There are so many features I'd like that are tightly coupled to my own situation and not a mass-market product.
This, to me, seems like the obvious future of software for everything but mission-critical software.
This has a lot of future implications for employment in tech of course, architecture/design decisions, etc. Why would a non-tech company use a SaaS when it can just AI something up and keep 1-2 engineers accountable for the build? It's a lot cheaper and amortisable over many products, saving some companies millions. Not just tech implementors but sales staff would be disrupted, especially when the SaaS is implementing a standard or requires significant customisation anyway. Buy vs build, product vs implementation - it should all change soon; the silver lining in all of this.
I sadly think that if the promise of AI happens, this is the likely economic outcome. The last century or so was an anomaly in most of human history; a trend created by the "arms race" of needing educated workers. The prisoner's dilemma was that if you trained your workers in more efficient tech you could out-compete rivals and take all their profits, which gave those educated workers the means to strike (i.e. leverage). Now it is an "arms race" of educated AI rather than workers, which could invalidate a lot of assumptions our current society takes for granted in its structure.
That's what AI does. It puts even more of a premium on power and politics vs, say, learning, intelligence and hard work. Connections, wealth and power. It is almost ironic that our industry is inventing the thing that empowers the people techies often find useless (as per the above comments) while dis-empowering themselves, often shutting the door behind them.
Yes, an AI will come up with more insight than many management people - many people in this thread state that an LLM could do their job. It's a mistake to assume that's what they are paid for, however.
Agree with most of what you said except for the "big bucks" part. Why would I pay for your product when I can ask the AI to do it? To be honest, I would rather use that money for anything else if I can spend a little bit of time and get the AI to do it. This is quite deflationary for programming in general and inflationary for domains not disrupted, all else being equal. There's a point where Jevons paradox fails - after all, there's only so much software most normal people want, and at that point tech workers' value relative to other sectors will decline, assuming unequal disruption.
The ability to earn the big bucks, as you state, is not a function of the value delivered/produced, but of the scarcity and difficulty in acquiring said value. That is capitalism. An extreme example is the clean air we breathe - it is currently free, but extremely valuable to most living things. If we made it scarce (e.g. via pollution), eventually people would start charging for it, potentially at extortionary prices depending on how rare it becomes.
The only exception I see is if the software encodes a domain that isn't as accessible to people and is kept secret/under wraps, has natural protections (e.g. a government system that is mandatory to use), or is complex and still requires co-ordination and understanding. This does happen, but then I would argue the value is in the adjacent domain knowledge - not in the software itself.
In fact, many spa towns already have local taxes, e.g. a "climate surcharge", where you as a tourist actually pay for the clean air. Usually it's a local tax added on top of your hotel bill.
It's kinda obvious that's their goal, especially given the current focus on coding in most AI labs' announcements - it may be augmentation now, but that isn't the end game. Everything else these AI labs do, while fun, seems at most a "meme" to most people in relative terms.
Most Copilot-style setups (not just in this domain) are designed to gather data and train/gather feedback before full automation or downsizing. If they said that outright, they may not have got the initial usage from developers needed to do so. Even if it is augmentation, it feels to me like other IT roles (e.g. BAs, maybe Solution Engineers?) are safer than SWEs going forward. Maybe it's because devs have skin in the game, and the fact that without AI it's not that easy a job over time makes it harder for them to see. Respect for SWE as a job in general has fallen, in my anecdotal conversations at least, mainly due to AI - after all, long-term career prospects are a major factor in career value, social status and personal goals for most people.
Their end goal is to democratize/commoditize programming with AI as low-hanging fruit, which by definition reduces its value per unit of output. The fact that there is so much discussion on this IMO shows that many, even if they don't want to admit it, think there is a decent chance they will succeed at this goal.
Stop repeating their bullshit. It is never about democratizing. If it was, they would start teaching everyone how to program, the same way we started to teach everyone how to read and write not that long ago.
Many tech companies and/or training places did try, though, didn't they? There have been boot camps, coding classes in schools and a whole bunch of other initiatives to get people into the industry. Teaching kids and adults coding skills has been attempted; the issue, IMO, is more that not everyone has the interest and/or aptitude to continue with it. There are parts of the industry/job that aren't actually easy to teach (note: not all of it); it can be quite stressful and requires constantly keeping up - IMO if you don't love it you won't stick with it. As software demand grew, despite the high salaries (particularly in the US) and training, supply didn't keep up with demand until recently.
In any case I'm not saying I think they will achieve it, or achieve it soon - I don't have that foresight. I'm just elaborating on their implied goals; they don't state them directly, but reading their announcements on their models, code tools, etc., that's IMO their implied end game. Anthropic recently announced statistics showing that most of their model usage is for coding. Thinking it is just augmentation doesn't justify the money IMO put into these companies by VCs, funds, etc. - they are looking for bigger payoffs than that, remembering that many of these AI companies aren't breaking even yet.
I was replying to the parent comment - augmentation and/or copilots don't seem to be their end game/goal. Whether they are actually successful is another story.
I'm not sure where construction and physical work goes into your categories. Process and chores maybe. But I think AI will struggle in the physical domain - validation is difficult and repeated experiments to train on are either too risky, too costly or potentially too damaging (i.e. in the real world failure is often not an option unlike software where test benches can allow controlled failure in a simulated env).
This is what I think as well. Unfortunately for the AI proponents, they have already made an example of the software industry. It's in news reports in the US and globally; most people are no longer recommending getting into the industry, etc. Software, for better or worse, has become an example to other industries of what "not to do", both w.r.t. data (online and open) and culture (e.g. open source, open tests, etc.).
Anecdotally most people I know are against AI - they see more negatives from it than positives. Reading things like this just reinforces that belief.
Questions like "why are we even doing this?", "why did we invent this?", etc. Most people aren't interested in creating, at best, a "worthy successor" that eliminates them and potentially their children, and see that goal as nothing but naive and, dare I say it, wrong. All of these thoughts will occur to most people reading the above.
History unfolds without anyone at the helm. It just happens, like a pachinko ball falling down the board. Global economic structures will push the development of AI and they're extremely hard to overwhelm.
For better or worse, decisions with great impact are taken by people in power. This view of history as a pachinko ball may numb us into not questioning the people in power.
This is the reason, IMO. Fundamentally, China right now is better at manufacturing (e.g. robotics). AI is the complement to this - AI increases the demand for tech-manufactured goods. America is in the opposite position w.r.t. which side is its advantage (i.e. the software). For China, AI is an enabler into a potentially bigger market, which is robots/manufacturing/etc.
Commoditizing the AI/intelligence part means that the main advantage isn't the bits - it's the atoms. Physical dexterity, social skills and manufacturing skills will gain more of a comparative advantage vs intelligence work in the future as a result - AI makes the old economy new again in the long term. It also lowers the value of AI investments, in that they can no longer command first-mover/monopoly-like pricing for what is a very large capex cost, undermining US investment in what is its advantage. As long as it is strategic, it doesn't necessarily need to be economic on its own.
A well-rounded take in an age and medium of reactionary hot takes!
While there are some synergistic effects... I think the physical manufacturing and logistics base is harder to develop than deploying a new model, and will be the hard leading edge. (That's why the US seems to be hell-bent on destroying international trade to try and build a domestic market.)
This would make sense if there were a centralized force dictating how much these Chinese foundational model companies charge for their models. I know that in the West people just blanketly believe the state controls everything in China; however, that couldn't be further from the truth. Most of the Chinese foundational model companies, like Moonshot, 01.ai, Minimax, etc., used to try to make money on those models. The VC money raised by those companies is in them to make money, not to voluntarily advance state competitiveness. DeepSeek is just an outlier backed by a billionaire. This billionaire had long been giving hundreds of millions per year to various charities before DeepSeek. Open-source SOTA models are not an out-of-character move for him given his track record.
The thing is, a model is in effect a piece of software with almost zero marginal cost. You just need a few, maybe even one, company to release SOTA models consistently to really crash the valuation of every model company, because everyone can acquire that single piece of software at no cost and leave the other model companies behind. The foundational model scene is basically in an extremely unstable state, ready to collapse to a stable state where model cost goes to 0. You really don't need the state-competition assumption to explain the current state of affairs.
I'm not saying there is a centralised force - I didn't say the government per se. It's enough to say that for many of the models coming out of China, the AI portion isn't the main income source, especially for the major models people are hyping up (Qwen, DeepSeek, etc). This model (Qwen) from Alibaba is a side model, more likely complementing their main business and cloud offerings. DeepSeek started as a way to use AI for trading models first, then spun this up on the side. I'm speaking more about China's general position - for them AI seems to be more of a complement than the main business, as compared, say, to the major AI labs in America (ex Google). My opinion is that robotics in particular just extends that going forward.
Given as you say the long term cost of AI models is marginally zero, I don't think this is a bad position to be in.