
In my tests of using AI to write most of my code, just writing the code yourself (with Copilot) and doing manual rounds with Claude is much faster and produces code that's easier to maintain.


So you can create a serious contender to Salesforce or Zapier in a week?


like an Eventbrite or a shopmonkey. but yeah, you don't think you could? Salesforce is a whole morass. not every customer uses every corner of it, and Salesforce will nickel and dime you with their consultants and add ons and plugins. if you can be more specific as to which bit of Salesforce you want to provide to a client we can go deep.


But you said "I can now whip up a serious contender to any SaaS business in a week".

Any SaaS business. In a week. And to be a "serious contender", you have to have feature parity. Yet now you're shifting the goalposts.

What's stopping you? There are 38 weeks left in 2025. Please build "serious contenders" for each of the top 38 most popular SaaS products before the end of the year. Surely you will be the most successful programmer to have ever lived.


The rest of the business is the issue. I can whitelabel a Spotify clone but licensing rights and all that business stuff is outside my wheelhouse. An app that serves mp3s and has a bunch of other buttons? yeah, done. "shifting goalposts?" no, we're having a conversation, I'm not being deposed under a subpoena.

My claim is that in a week you could build a thing that people want to use, as long as you can sell it, that's competitive with existing options for a given client. Salesforce is a CRM with walled garden after walled garden, access to each of which costs extra, of course. they happened to be in the right place at the right time, with the right bunch of assholes.

A serious contender doesn’t have to start with everything. It starts by doing the core thing better—cleaner UX, clearer value, easier to extend. That’s enough to matter. That’s enough to grow.

I’m not claiming to replace decades overnight. But momentum, clarity, and intent go a long way. Especially when you’re not trying to be everything to everyone—just the right thing for the right people.

as for Spotify: https://bit.ly/samson_music


Salesforce is, and pretty much always has been, a set of code generation platforms. If you can produce a decent code generation platform, do it. It's one of the surest ways to make money from software, since it lets you deploy systems and outsource a large portion of design to your customers.

Spotify is not the audio player widget in some user interface. It started off as a Torrent-like P2P system for file distribution on top of a very large search index and file storage. That's the minimum you'd build for a "whitelabel [...] Spotify clone". Since then they've added massive, sophisticated systems for user monitoring and prediction, ad distribution, abuse and fraud detection, and so on.

Use that code generation platform to build a product off any combination of two of the larger subsystems at Spotify and you're set for retirement if you only grab a reasonable salesperson and an accountant off the street. Robust file distribution with robust abuse detection or robust ad distribution or robust user prediction would be that valuable in many business sectors.

If building and maintaining actually is that effortless for you, show some evidence.


> Since then they've added massive, sophisticated systems for user monitoring and prediction, ad distribution, abuse and fraud detection, and so on. Use that code generation platform to build a product off any combination of two of the larger subsystems at Spotify

I'm listening. I fully admit that I was looking at Spotify as a user and thus only as a music playing widget so I'd love to hear more about this side of things. What is user prediction?


They spend a lot of effort trying to get good at predicting user preferences, through modeling of personality, behaviour patterns and more.

You can find out quite a lot in their blogs and publications:

https://research.atspotify.com/2022/02/modeling-users-accord...

https://research.atspotify.com/user-modeling/
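The core idea behind that kind of user modeling can be sketched in a toy collaborative-filtering example. To be clear, the data, names, and approach below are hypothetical illustrations, nothing like Spotify's actual systems:

```python
import math

# Toy user-user collaborative filtering: each user is a vector of
# play counts per track; similar users predict unknown tastes.
plays = {
    "ana":  [5, 0, 3, 1],
    "ben":  [4, 1, 3, 0],
    "cara": [0, 5, 0, 4],
}

def cosine(u, v):
    """Cosine similarity between two play-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def predict(user, track_idx):
    """Predict a user's interest as a similarity-weighted average over other users."""
    others = [(cosine(plays[user], v), v[track_idx])
              for name, v in plays.items() if name != user]
    weight = sum(s for s, _ in others)
    return sum(s * r for s, r in others) / weight

# ana's predicted interest in track 3 leans on ben (very similar
# listening history) far more than on cara (very different).
score = predict("ana", 3)
```

Production systems layer personality and behavior modeling, context, and sequence models on top, but the weighted-neighbors intuition is the same starting point.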


> I fully admit that I was looking at Spotify as a user and thus only as a music playing widget

This is the key insight here. Software is not the interface you see. There's a whole lot more you don't know about (and probably no one knew about before building it), and asking an LLM to produce some JS front-end patterns that abound in its training data won't come close to giving you that.

That is why so many developers are skeptical of the amount of hype around LLMs generating code.


> as for Spotify: https://bit.ly/samson_music

I'm not sure what you are trying to say here - that this website is comparable to Spotify? Even if you are talking about just the "core experience", this example supports the opposite argument that you are trying to make.


The way I see it, the core user experience is that the user listens to music. There's playlist management on top of that and some other bits, sure, but I really don't see it as being that difficult to build those pieces. This is a no code widget I had lying around with a track that was produced last night because I kept asking the producer about a new release. I linked it because it was top of mind. It allows the user to listen to music, which I see as the core of what Spotify offers its users.

Spotify has the licensing rights to songs and I don't have the business acumen to go about getting those rights, so I guess I could make Pirate Spotify and get sued by the labels for copyright infringement, but that would just be a bunch of grief for me which would be not very fun and why would I want to screw artists out of getting paid to begin with?


> The way I see it

i think ive detected the root cause of your problem.

and, funnily enough, it goes a long way to explaining the experiences of some other commenters in this thread on “vibe coding competitive SaaS products”.


Sure, yeah, go ahead, do it. Seriously! Build a SaaS business in a week and displace an existing business. Please report back with your findings.


As much as I'd like to pretend otherwise, I'm just a programmer. Say I build, I dunno, an Eventbrite clone. Okay, cool, I've got some code running on Vercel. What do I do next? I'm not about to quit my day job to try to pay my mortgage on hopes and dreams, and between working my day job and having a life outside of that, there just aren't enough hours left in the day to also work on this hypothetical Eventbrite clone. And there are already so many competitors out there; what's one more? What's my "in" to the events industry that would have me succeed over any of their numerous existing competitors? Sure, thanks to LLMs I can vibe code some CRUD app, but my point is there's so much I don't know that I don't even know what I don't know about business in order to be successful. So realistically it's just a fun hobby, like how some people sew sweaters.


After spending a week coding exclusively with AI assistants, I got functional results but was alarmed by the code quality. I discovered that I didn't actually save much time, and the generated code was so complex and unfamiliar that I was scared to modify it. I still use Copilot and Claude and would say I'm able to work through problems 2-3x faster than I would be without AI but I wouldn't say I get a 10x improvement.

My projects are much more complex than standard CRUD applications. If you're building simple back-office CRUD apps, you might see a 10x productivity improvement with AI, but that hasn't been my experience with more complex work.


Yeah, my job is now automating tasks using AI and code instead of automating tasks with just code. The code part has gone down, and the abstraction has moved up a layer, but I have more work than ever.


That makes you an observer/orchestrator in a rudimentary agentic system. What happens when that layer of abstraction, the scheduling, managing, and instructing of agents, is better suited to an LLM or group of LLMs? I think a lot of people are going to get screwed.


You can say the same thing about all abstractions. React? So you're just orchestrating the React code that does the actual work. Etc.


You sound like you're having a bad day. Go take a walk, it's just someone's side project on HN.


In this article: https://medium.com/@takafumi.endo/how-ai-is-redefining-strat...

You can see how AI is already transforming consulting at the Big 3.


This is the most level take I’ve seen.


The thing I can't wrap my head around is that I work on extremely complex AI agents every day and I know how far they are from actually replacing anyone. But then I step away from my work and I'm constantly bombarded with “agents will replace us”.

I wasted a few days trying to incorporate aider and other tools into my workflow. I had a simple screen I was working on for configuring an AI Agent. I gave screenshots of the expected output. Gave a detailed description of how it should work. Hours later I was trying to tweak the code it came up with. I scrapped everything and did it all myself in an hour.

I just don't know what to believe.


It kind of reminds me of the Y2K scare. Leading up to that, there were a lot of people in groups like comp.software.year-2000 who claimed to be doing Y2K fixes at places like the IRS and big corporations. They said they were just doing triage on the most critical systems, and that most things wouldn't get fixed, so there would be all sorts of failures. The "experts" who were closest to the situation, working on it in person, turned out to be completely wrong.

I try to keep that in mind when I hear people who work with LLMs, who usually have an emotional investment in AI and often a financial one, speak about them in glowing terms that just don't match up with my own small experiments.


I used to believe that until, over a decade later, I read stories from those "experts who were closest to the situation," and it turns out Y2K was serious and it was a close call.


I just want to pile on here. Y2K was avoided due to a Herculean effort across the world to update systems. It was not an imaginary problem. You'll see it again in the lead up to 2038 [0].

[0]: https://en.wikipedia.org/wiki/Year_2038_problem
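The 2038 problem is concrete enough to demonstrate in a few lines (Python here purely for illustration; the affected systems are ones storing time in a signed 32-bit `time_t`):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# A signed 32-bit time_t counts seconds since the Unix epoch
# and tops out at 2**31 - 1.
max_32bit = 2**31 - 1
print(EPOCH + timedelta(seconds=max_32bit))   # 2038-01-19 03:14:07+00:00

# One second later the counter wraps around to -2**31, which
# naive code then renders as a date in 1901.
print(EPOCH + timedelta(seconds=-2**31))      # 1901-12-13 20:45:52+00:00
```

As with Y2K, the fix is mundane (widen the field, here to 64-bit time) but has to be applied everywhere the narrow representation leaked into formats and protocols.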


You’re biased because if you’re here, you’re likely an A-tier player used to working with other A-tier players.

But the vast majority of the world is not A players. They’re B and C players

I don’t think the people evaluating AI tools have ever worked in wholly mediocre organizations - or even know how many mediocre organizations exist


wish this didn't resonate with me so much. I'm far from a 10x developer, and I'm in an organization that feels like a giant, half-dead whale. Sometimes people here seem like they work on a different planet.


> But then I step away from my work and I'm constantly bombarded with “agents will replace us”.

An assembly language programmer might have said the same about C programming at one point. I think the point is that once you depend on a more abstract interface that lets you ignore certain details, the backend underneath it can absorb decades of improvements without you having to do anything. People are still experimenting with what this abstract interface to AI looks like and how it will work, but it has already come leaps and bounds from where it was only a couple of years ago, and it's only going to get better.


There are some fields though where they can replace humans in significant capacity. Software development is probably one of the least likely for anything more than entry level, but A LOT of engineering has a very very real existential threat. Think about designing buildings. You basically just need to know a lot of rules / tables and how things interact to know what's possible and the best practices. A purpose built AI could develop many systems and back test them to complete the design. A lot of this is already handled or aided by software, but a main role of the engineer is to interface with the non-technical persons or other engineers. This is something where an agent could truly interface with the non-engineer to figure out what they want, then develop it and interact with the design software quite autonomously.

I think there is a lot of focus on AI agents in software development, though, because that's just an early-adopter market, just like how it's always been possible to find a lot of information on web development on the web!


Good freaking luck! The inconsistencies of the software world pale in comparison to trying to construct any real world building: http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...


> you basically just need to know a lot of rules...

This comment commits one of the most common fallacies I see in technical people: assuming that any subject you know nothing about must be really simple.

I have no idea where this comment comes from, but my father was a chemical engineer and his father was a mechanical engineer. A family friend is a structural engineer. I don't have a perspective about AI replacing people's jobs in general that is any more valuable than anyone else's, but I can say with a great deal of confidence that in those three engineering disciplines specifically, literally none of their jobs is about knowing a bunch of rules and best practices.

Don't make the mistake of thinking that just because you don't know what someone does, that their job is easy and/or unnecessary or you could pick it up quickly. It may or may not be true but assuming it to be the case is unlikely to take you anywhere good.


It's not simple at all, that's a huge reduction to the underlying premise. The complexity is the reason that AI is a threat. That complexity revolves around a tremendous amount of data and how that data interacts. The very nature of the field makes it non-experimental but ripe for advanced automation based on machine learning. The science of engineering from a practical standpoint, where most demand for employees comes from, is very much algorithmic.


> The science of engineering from a practical standpoint, where most demand for employees comes from, is very much algorithmic.

You should read up on Gödel's and Turing's work on the limits of formal systems and computability.

You are basically presuming that P=NP.


> just

In my experience this word means you don't know whatever you're speaking about. "Just" almost always hide a ton of unknown unknowns. After being burned enough times nowadays when I'm going to use it I try to stop and start asking more questions.


It's a trick of human psychology. Asking "why don't you just..." gets one reaction, while asking "what are the roadblocks to completing..." gets a different reaction to the same underlying question. But thinking "just" is good when you see it as a learning opportunity.


I mean, perhaps, but in this case "just" isn't offering any cover. It's only in the sentence for emphasis; you could "just" remove it and the meaning remains.


>a main role of the engineer is to interface with the non-technical persons or other engineers

The main role of the engineer is being responsible for the building not collapsing.


I keep coming back to this point. Lots of jobs are fundamentally about taking responsibility. Even if AI were to replace most of the work involved, only a human can meaningfully take responsibility for the outcome.


If there is profit in taking that risk someone will do it. Corporations don't think in terms of the real outcome of problems, they think in terms of cost to litigate or underwrite.


Indeed. I sometimes bring this up in terms of "cybersecurity" - in the real world, "cybersecurity" is only tangentially about the tech and hacking; it's mostly about shifting and diffusing liability. That's why the certifications and standards like SOC.2 exist ("I followed the State Of The Art Industry Standard Practices, therefore It's Not My Fault"), that's what external auditors get paid for ("and this external audit confirmed I Followed The Best Practices, therefore It's Not My Fault"), that's why endpoint security exists and why cybersec is denominated not in algorithms, but third-party vendors you integrate, etc. It all works out into a form of distributed insurance, where the blame flows around via contractual agreements, some parties pay out damages to other parties (and recoup it from actual insurance), and all is fine.


I think about this a lot when it comes to self-driving cars. Unless a manufacturer assumes liability, why would anyone purchase one and subject themselves to potential liability for something they by definition did not do? This issue will be a big sticking point for adoption.


Consumers will tend to do what they are told and the manufacturers will lobby the government to create liability protections for consumers. Insurance companies will weight against human drivers and underwrite accordingly.


At a high level yes, but there are multiple levels of teams below that. There are many cases where senior engineers spend all their time reviewing plans from outsourced engineers.


ChatGPT will probably take more responsibility than Boeing for their airplane software.


Most engineering fields are de jure professional, which means they can and probably will enforce limitations on the use of GenAI or its successor tech before giving up that kind of job security. Same goes for the legal profession.

Software development does not have that kind of protection.


Sure, and people thought taxi medallions were one of the strongest appreciating asset classes. I'm certain they will try, but market inefficiencies typically only last if they are the most profitable scenario. Private equity is already buying up professional and trade businesses at a record pace to exploit inefficiencies caused by licensing: dentists, vets, urgent care, HVAC, plumbing, pest control, etc. Engineering firms are no exception. Can a licensed engineer stamp one million AI-generated plans a day? That's the person PE will find, and they'll run with it.

My neighbor was a licensed HVAC contractor for 18 years with a 4-5 person crew. He got bought out and now has 200+ techs operating under his license. Buy some vans, make some shirts, throw up a billboard, advertise during the local news. They can hire anyone as an apprentice; 90% of the calls are change the filter, flip the breaker, check refrigerant, recommend a new unit.


for ~3 decades IT could pretend it didn't need unions because wages and opportunities were good. now the pendulum is swinging back -- maybe they do need those kinds of protections.

and professional orgs are more than just union-ish cartels, they exist to ensure standards, and enforce responsibility on their members. you do shitty unethical stuff as a lawyer and you get disbarred; doctors lose medical licenses, etc.


I promise the amount of time, experiments, and novel approaches you've tested is .0001% of what others have running in stealth projects. I've spent an average of 10 hours per day since 2022 working on LLMs, and I know that even what I've built pales in comparison to other labs. (And I'm well beyond agents at this point.) Agentic AI is what's popular in the mainstream, but it's going to be trounced by at least two new paradigms this year.


Say more.


seems like OP ran out of tokens


So what is your prediction?


In all of these posts I fail to see how this is engineering anymore. It seems like we are one step away from taking ourselves out of the picture completely.


I don’t write binaries, assembly, or C. If I don’t have to write an application, I’m okay with that.

I still have to write the requirements, design, and acceptance criteria.

I still have to gather the requirements from stakeholders, figure out why those will or will not work, provision infra, figure out how to glue said infra together, test and observe and debug the whole thing, get feedback from stakeholders…

I have plenty of other stuff to do.

And if you automate 99% of the above work?

Then the requirements are going to get 100Xed. Put all the bells and whistles in. Make it break the laws of physics. Make it never ever crash and always give incredibly detailed feedback to the end users. Make it beautiful and faster than thought itself.

I’m not worried about taking myself out of the loop.


I have to say that I am worried that, by taking myself out of the loop for the 99%, I'm going to get worse at the 1% of things that occasionally fall into my lap because the LLM can't seem to do them. I think software engineering is a skill that is "use it or lose it", like many others.

There's also the question of whether I will enjoy my craft if it is reduced to, say, mostly being a business analyst and requirements gatherer. Though the people paying me probably don't care very much about that question.


Reading some of the comments in the "Layoffs don't work" thread right before reading the comments here was one of the more surreal experiences for me :)

The takes are as different as (paraphrasing): "if a person can't create something with an empty text editor, I fail them" and "if a person can't speed run through an unrealistically large set of goals because they don't use AI-assisted development, I fail them".

I guess one should keep both sets of skills honed at all times, even if neither is particularly useful at most real jobs, because you never know when you're going to be laid off and interviewing.


It's very specialized already, though.

How many devs could debug both a K8s network configuration issue and a bug in an Android app caused by a weird vendor's OS tweak? Not most of us.

Some people will be better at pushing the LLM things to generate the right crap for the MVP. Some people will be better at using these tools for testing and debugging. Some people will be better at incident response. They'll probably all be using tools with some level of AI "magic" in them, but the specialization will be somewhat recognizable to what it's been for the past decade.

If you're on the business side you still want a team of people running that stuff until there's a step-change in the ability to trust these things and they get so good you'd be able to give over control of all your cloud/datacenter/network/whatever infrastructure and spending.

And at THAT point... the unemployed software engineers can team up with the unemployed lawyers and doctors and blue-collar workers who were replaced by embodied-LLM-powered robots and... go riot and ransack some billionaires' houses until they decide that these new magical productivity machines should let everyone have more free time and luxury, not less.


Thanks for sharing. I hope you are right. It's hard to stay objective as things are changing so quickly.


Are you concerned that these tools will soon replace the need for engineers?


Yes, I used to be skeptical about the hype, but now I'm somewhat concerned. I don't think they will replace engineers but they do increase their productivity. I'm not able to quantify by how much though. In my case, maybe it increases my productivity by 5-10%, saving me a few hours of work each week. Very rough estimate.

Does it mean that we'll need fewer engineers to perform the same amount of work? Or that we'll produce better products? In my company, there's no shortage of things to do, so I don't think we'll hire fewer people if suddenly engineers are a bit more productive. But who knows how it'll impact the industry as a whole.


The market is currently valuing the stock based primarily on their robotaxi initiative and humanoid robots.


The market is currently valuing it on what it thinks other investors in the market will value it at.

It's a case study in the Greater Fool Theory of investing, writ large.


What do you think it would take to create a cascading loss of confidence in the security?


Time


Time and bad results. That said the market can stay irrational longer than you can stay solvent.


I think you are right. A minor shock in share price from a bad profit announcement will force certain traders to sell to stop their losses from getting worse, often via automated stop-loss orders. This will start a chain reaction.
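The chain reaction is easy to sketch as a toy loop. The thresholds and the fixed per-sale price impact below are made up for illustration; this is not a market model:

```python
def cascade(price, stops, impact_per_sale=0.02):
    """Toy stop-loss cascade: each triggered stop forces a sale, pushing
    the price down by a fixed fraction, possibly triggering more stops."""
    remaining = sorted(stops, reverse=True)  # highest threshold fires first
    triggered = []
    while remaining and price <= remaining[0]:
        triggered.append(remaining.pop(0))
        price *= (1 - impact_per_sale)  # forced selling pushes the price lower
    return price, triggered

# A 5% shock drops the price from 100 to 95; the stops clustered just
# below then fire one after another, each sale tripping the next.
final, fired = cascade(95.0, stops=[96, 94, 92, 90, 80])
```

With these numbers a single shock trips four of the five stops before the price stabilizes above the last, most distant threshold, which is the self-reinforcing dynamic the comment describes.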


Those are two startups attached to a destroyed brand and an auto company. Idk why anyone would invest.

