[flagged] Mark Zuckerberg says AI could soon do the work of Meta's midlevel engineers (businessinsider.com)
43 points by cryptoz 4 months ago | 103 comments



https://archive.is/SHoY9

Maybe. It's certainly possible that this is the case, but I'm hesitant to believe anything Zuckerberg says during a hype train, especially when saying it has a non-zero chance of boosting the stock.

But let's suppose this is true. I don't think it necessarily implies that fewer engineers will actually be hired. It could be that this "AI mid-level engineer" frees up resources for the remaining engineers to work on something more interesting.


>> frees up resources for the remaining engineers to work on something more interesting.

Or forces them to waste all their time babysitting the "AI" workers that can't do basic tasks.


If that's the case, I don't think we have anything to worry about anyway. If productivity actually drops then I suspect that this program will be sunsetted.


Have you found the fabled solution for measuring programmer productivity?


No, but I do feel like if every programmer were to complain that all they do is babysit AI code, and if we see that most of their commits are fixing mistakes of AI code (e.g. wrapping stuff with null checks or something), eventually management would listen.


Management will not listen as long as the company’s future success is at least partially based on the success of their AI developer initiative.


Also, I'd expect he cares very little about the stock price. He turned down Yahoo's offer when it must have seemed so lucrative


Why do you doubt Zuck's foresight? I am not a fanboy, but he timed the pivot to mobile really well, acquired Instagram, and anticipated how crucial messaging would be. All pretty good calls. The VR stuff is still playing out. In fact, I think he is one of the few who do a better job of looking ahead.


Mostly because of the huge bet on "Web 3.0" and the "metaverse" stuff. I could be wrong, maybe in ten years we'll be looking at how great Facebook was at predicting stuff, but it seems like it mostly has not panned out, at least from my (admittedly very limited) perspective.

I guess the reason I'm skeptical is because there's really no reason for him to not say this kind of stuff. If he says "Meta's AI model is so good that it's on par with a mid-level engineer", there's a chance the stock price shoots up because it suggests that maybe Meta has some amazing new model and AI is the current hotness, and there's basically no penalty for being wrong.

It's not hard to find cases where CEOs just completely lie to everyone's faces in order to try and boost stock prices, so it's not a skepticism of Zuckerberg explicitly, so much as all CEOs.


Most CEOs, yes. But founder CEOs normally don't care about the stock price that much. Zuck turned down Yahoo, remember. Bezos kept taking losses at Amazon in the beginning for the sake of future growth, despite the stock being punished so hard. Steve Jobs was like that too. Your cynicism is misguided. VR is a very long call.


I don't see how any of that proves that founder CEOs don't care about stock price. Just because they don't take the first easy-out doesn't imply that they're not susceptible to doing things to try and drum up investor hype. I own stock in Apple but I don't panic-sell every single time that AAPL drops in price.

VR is neat but the "metaverse" suggests a lot more, which is why I called it out and not VR.


The point of an engineer isn't just "to write code." You pay engineers to understand the systems they work with. Code is incidental.


Coding is, and always has been, the easy part of software development


Pretty sure he knows this


I don't know how they think that's going to work. The best AI has been able to do for me is act as a smarter autocomplete; I doubt it can just fit into larger code bases without the kind of searching a human can do.

I know the job market has been rough, which has emboldened these CEOs, but we are starting to see hiring pick up again, and not just for Staff+; interns and SDE 1 and 2 roles as well, which bodes well for 2025.


I think when people say "AI will replace X people" (engineers in this case), it's not that it'll replace 100% of what, say, 80 people do (even if they're mid-level), but rather that it'll make 8 people 10 times more efficient. Didn't Tesla's head of AI or self-driving say publicly years ago that Copilot wrote 80% of his code? It's not quite apples to apples, but you can see how it's progressing. I know that's not what Zuck said, but I wonder if that might be what he meant?


How many developers are doing little more than writing boilerplate? Those are the kinds of jobs best automated. A few years ago behavior-driven development tried to take this as far as possible, taking stories and turning them into runnable tests. LLMs can already do a pretty good job of being fed inputs and some context and producing running code (for example, give ChatGPT a class file and tell it to write a unit test; it does a pretty good job of it).
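A minimal sketch of that kind of task, purely for illustration (the Invoice class and test names are hypothetical, not from any real codebase): hand a model a small class like the one below, and the unit test it hands back usually looks about like this.

    # Hypothetical example: a tiny class standing in for the "class file"
    # you'd paste into ChatGPT, plus the kind of unit test it tends to produce.
    import unittest

    class Invoice:
        def __init__(self, subtotal: float, tax_rate: float):
            self.subtotal = subtotal
            self.tax_rate = tax_rate

        def total(self) -> float:
            # Subtotal plus tax, rounded to cents.
            return round(self.subtotal * (1 + self.tax_rate), 2)

    # The generated test: exercises the obvious happy paths and little else.
    class TestInvoice(unittest.TestCase):
        def test_total_applies_tax(self):
            self.assertEqual(Invoice(100.0, 0.10).total(), 110.0)

        def test_total_with_zero_tax(self):
            self.assertEqual(Invoice(50.0, 0.0).total(), 50.0)

    if __name__ == "__main__":
        unittest.main()

Boilerplate at roughly this level is where the tools are reliably good today; how much of a typical job that actually covers is the open question.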


> How many developers are doing little more than writing boilerplate?

Before you go down the road of what can be automated, let's try to answer this. I'm sure there's actual research, but I cannot find it.

Anecdotal, but with 20+ years of professional development under my belt, at startups, NGOs, enterprises, government, etc.: "insignificant amounts".

Yes, some embed-images.sh or some invoice-extractor/src/main.rs - side projects related to a job. Every week some boilerplate like this.

But the vast bulk of my work is:

- finding out what customers/a spec/stakeholder really mean. And how important it really is (and encoding that in executable specs, preferably)

- keeping the old stuff somewhat up to date. Fiddling with old Ruby runtimes, Pipenv/pyenv, or whatever today's flavor is. Node.js, npm juggling. Docker. Cargo. Deprecations. Security patches. CI, infra, crashes on updates, etc.

- muddling through codebases with 10+ years of accumulated horrors. Touched by 100+ random freelancers and employees, most lost to time. Crippling technical debt. Undocumented decisions everywhere. Crucial business requirements encoded only in three LoCs amongst thousands of lines inside this update_user_projects() function.

- Trying to get this old project running again on today's OS/runtimes/etc.

- Stitching together poorly documented, weirdly (or hardly) architected, abandoned, or outdated libraries.

The way I see it, LLM code generators are currently little more than unpredictable but advanced compilers, compiling instructions into instructions that computers can execute. BDD could be a good language to write those instructions in. That works for new stuff, and much less for existing stuff. "Maintaining existing stuff" is the vast majority of most developers' work, I'd think, though I can't back that up.


AI can search your code today. In Cursor this is called "codebase indexing". We have some million(s) of lines of code, orders of magnitude smaller than Facebook but definitely larger than the average startup. We search with AI tools, through Q&A, and for AI-driven code mods. Cursor also exposes LSP information to its AI system, so it can use that tooling too.

Sourcegraph Cody is another example of a code AI that can search.


> we are starting to see hiring pick up again

There is a different picture. You are seeing the same jobs, but this time with reduced salary + equity and fewer openings. It is not the same as before 2020.

Every open tech role will be ultra-competitive, to the point where companies may start hiring only "olympians" who match the so-called "exceptional talent" bar.

The truth is, it will get worse before it gets any better.


From hunter-gatherer society to today, every time a new technology comes along that claims to make life 'easier,' it pushes us further into being slave-adjacent. We have phones, ChatGPT, Google, and smartphones, but for what? We end up working more and more. The greater the technological progress, the more our lives are exploited to generate profits for our corporate overlords.

ChatGPT will save you time, but your boss will now demand that you greatly up your productivity to match this fabulous new technology. Now you're working even more for the same or less pay!

AGI, in its final form, will turn us into slaves, and yet we're cheering it on. This is madness.


I don't agree with the how, but the conclusion is true.

I'll elaborate a bit more though...

In a capitalist world, the system will always seek to maximize profits. Human suffering, animal suffering, morality, and so on are basically irrelevant.

Someone willing to exploit human happiness for an extra 2% gain will wipe out competitors who are unwilling to do the same. So everything is guaranteed to spiral into a race to the bottom.

So with this new tool, as every time before, it will be a race to the bottom to see who can most unethically use AI to extract value.

Whether we end up as slaves or starving on the street remains to be seen. But the odds of humanity being better off without a sharp course correction and government regulation are very low.


> Human suffering, animal suffering, morality, and so on are basically irrelevant.

Not "irrevelent" but most often deliberately part of a strategy:

In economics, an externality or external cost is an indirect cost or benefit to an uninvolved third party that arises as an effect of another party's (or parties') activity. https://en.wikipedia.org/wiki/Externality


One day they're going to want only senior engineers, but with AI taking all the mid-level jobs there won't be any seniors.


> there won't be any seniors

In the SFBA, Lockheed hires welders who can get a security clearance.

Someone I know was in their 70s working there, because there isn't a large pipeline of welders to pull from. In his words, "there's a bunch of great guys with steady hands delivering doordash, because it pays better than being an apprentice in a body shop".

Most automotive welders can't get a clearance, and the true factory welding jobs either no longer pay well or have been automated away.

In response, the old Martinez Chevrolet factory welders are continuing to work at Lockheed, with larger-than-normal incentives to stick around past their Medicare and Social Security eligibility.

Most of them would rather be training new guys (or girls, apparently half of the good new welders are women), but most of the time there's nobody new who comes in.


This has arguably been happening with all fields since the beginning of time.

Mere literacy used to qualify you for a management role. I am not sure you could get a job of any kind without being able to read and write now. It gets replaced with more time spent in specialized education.


That's not a real problem. There are established models for training people to work in fields where you need plenty of experience to qualify for an entry-level position. Medicine is the most prominent example.

And it's going to be easier than in medicine, because software is so forgiving. You can do real work during training with minimal supervision, as mistakes rarely cause serious harm.


Yeah, there will: ASI (supposedly).

The common theme from these CEOs is that today is the worst AI will ever be. (Who knows if that's true; not me.)


That'd be a task for the next CEO to fix.


I'm quite surprised by the level of skepticism in this thread. I've been developing my website + webservice + db using only Cursor's "composer" mode, not typing a single line of code, and I already have something fully functional. I am voluntarily refraining from fixing the bugs myself by digging into the code, just to see how far someone with zero coding skill can go. And the answer is: as far as a junior/mid-level software developer can.

Obviously, not on its own. I still needed to give it guidelines and feedback on what to do next. But I could improve my site while watching TV. This was science fiction just two years ago.


Could someone with "0 coding skill" really compose the prompts in the first place? You need to understand the problem space, the right terminology and visualise a general architecture/structure for the solution. These things might be obvious to you and me but take someone from a different field and it would be the same as handing me the controls to a medical robot and telling me to start pushing buttons.


This seems like a very, very minor thing compared to correctly understanding how a bug in a screen showing "unauthorized access" in the React frontend is in fact triggered by a wrong configuration in an "authorization service" in a file located somewhere else entirely, due to a serialization issue, etc.

"visualizing a general architecture" is piece of cake compared to the mental model one need to understand a full stack react / express / sqlite / typescript stack with sufficient details to be able to debug obscure error code.


Do you realize most enterprise coders are writing just simple CRUD applications?


At Meta? I can't say but I doubt it. At the large company I work at - definitely not.


Please, show us the website and the source code. Every time I read a statement like this, there is no proof.


Oh, you won't need it: the code is probably horrible, the website design is vaguely OK for an admin backend but clearly not suitable for anything customer-facing, etc.

But the point is: it works. It is functional and handles complex, realistic scenarios.

And only two years ago, this was pure fantasy. Now give the tech two more years and see where this is going.


Now, for those who are curious: I'm done with my first experiment and am planning to start a full redesign of that project, this time providing the AI with a lot more guidance on software best practices. I'll decompose more, make it write more unit tests, etc., and make sure to keep code quality in check regularly (still writing nothing myself, though).

In fact, I'll be doing exactly what I would do with a junior/mid-level developer, except the AI works probably 100x faster.

My first iteration of that project lasted about three days from start to finish, reaching a point where I had let so much cruft accumulate that it couldn't refactor properly without breaking absolutely everything in an impossible-to-recover manner... But before you say anything, I've personally seen the same thing happen at least five times in my career, with professional humans writing the code.


No, you missed the memo. You're supposed to say they're toy webpages and it doesn't count as "real" programming.

Simonw has plenty of examples on his blog, including full transcripts, if you've honestly not seen them before. https://simonwillison.net/tags/ai-assisted-programming/

A fun one that he didn't bother to post to his blog, though: https://news.ycombinator.com/item?id=42505772


I see no such example, just some blog full of useless posts.


That's because most of the engineers and coders in this thread are afraid their jobs won't exist in a few years and they will have to learn a completely different skill to make a wage. It's hard to admit you may not have a job in x years' time because a computer took it.

But let's be real here: 90% of software and coding is taking x and performing an action on it so the user can consume y. It's basic stuff. Much like how we don't write machine code these days and use a compiler instead, you're bonkers if you think you're going to be writing Python or C in the future rather than just saying "give me x and y from z."

You will still get folks writing code in some obscure areas, but most of us will switch to describing system and app flows, having the code written by AI, and using AIs to get tasks done in other areas. High-level theory of software construction will become more important than low-level language mastery.

The folks saying it won't are much like the horse riders of yesteryear who said cars would never replace horses.


I bet your code is full of exploits.


The future for programmers is not bright. Consequently the future for bug hunters and security professionals is very bright.


What makes you think adversarial AIs can't do the job of bug hunters and security professionals too?


Some people are using AI to generate bug reports and have been getting banned because they're hallucinated spam. The bugs don't actually exist. The curl dev wrote a blog post about this issue.

Generative AI can't think, it can only mimic.


This is like one grandparent with dementia correcting the other grandparent with dementia at Thanksgiving dinner.


This is the same person who renamed his company to Meta, went all in on the metaverse, and told us we'd all be working in the metaverse. The guy got lucky once, that's all; he is not a modern oracle.


I’ve definitely seen a pattern in business where some guy effectively wins the lottery, and the business world puts him on a throne, gives him endless capital, and listens breathlessly to his musings about how to successfully win the lottery.


That's my takeaway as well, and this is his new pivot, from VR to AR. Facebook's problem is that it has all of the money and no idea what to do with it, but instead of sitting on a hoard like Apple they've taken the unusual approach of powering their servers with piles of burning $100 bills.

Personally I suspect that if they really try to replace mid-levels with "AI", they're going to have to re-hire the mid-levels as members of a new field: "AI mistake-checker/fixer". They'll have to pay for the AI (which is resource intensive) and the human worker to fix the AI mess. Everyone wins?


True, but there is something to be said about making it to his level.

He isn't stupid. Stupid people don't make it this far. They don't. Don't let media designed for the commons tell you otherwise. There are too many filters: rational intelligence, social intelligence, willpower, book smarts, delegation/management/leadership, etc. If you fail at any one of these, you don't ~10,000x your company's value.

John Mearsheimer said "They made the wrong decision, not an irrational one." And I think that goes for the Metaverse. VR was/is incredible. I personally would find it difficult to bet real money on that question.

He sees something that is reasonable to say. Will mid-level engineers be replaced by AI? If there is a 2x efficiency improvement, what will half of those programmers be doing? Will 100% of their mid-level engineers be replaced? No, but I don't think a smart person would make that claim either. Do you personally think zero people will lose their jobs because they aren't needed anymore?


I say this half-jokingly, but only half: Zuck's on TRT now and thinking much more clearly.


Isn't it the opposite? Testosterone causes high-risk behaviour.


That’s a bad framing. Testosterone makes you less likely to place consensus above all else, which is what you want from a leader. Some will mischaracterise that as risky.


Buying and nurturing Instagram and WhatsApp are two other instances of "getting lucky".


Identifying and buying potential competitors is neither lucky, nor is it visionary, it's just SOP for megacorps with deep pockets and potential competition.


He’s missing the bigger story here.

B2B SaaS platforms like Salesforce, CRMs, etc. are dead.

Let's say you're a big company able to afford quite a few mid-level AI engineers: why bother paying millions to Salesforce when you can direct your 24/7 mid engineers to just replicate the Salesforce offering?

Code quality doesn't matter; it can be copy-paste after copy-paste, as long as the AI devs manage it.

The death of these types of companies is going to erase billions in the stock market.


> when you can direct your 24/7 mid engineers to just replicate the Salesforce offering

What happens when AI devs can't manage it anymore and it suddenly falls apart, with every new release shipping more critical bugs than it fixed?

I think you had better pray that it happens sooner rather than later, because the larger the mountain of crap code becomes, the more it's going to hurt when it unravels, and the less likely it is that it's fixable. Do you then just shut the company down?


There's going to be a whole new industry around reliability if this becomes reality. I wonder how "you build it, you run it" would evolve. Would we end up with ops figuring out how to remediate the code? Would the remediation be done by whoever opened the ticket? Would the person who accepted the PR/MR be held accountable?


How to tell everyone that you don't know what SWEs do...

They aren't slinging pig iron into train cars.

If you are in the C-suite and believe this, you need to find someone to feed context up to you, stat.

Probably why lean, Toyota, agile failed too...

History will be far more likely to remember you as a tulip salesman than an innovator.

Hint, the code isn't your problem.


Great, more midlevel engineers for me.

I genuinely cannot see how AI is going to be bad for programmers. It doesn't do the entire job; it does snippets, and you must integrate everything. (And if it does the entire job, amazing! Maybe we can spend the extra time on a good UI or extra unit tests.)

AI could reduce our staff by half, but that isn't what happens. Instead we just get more work done. The productivity increase seems to warrant spending more money on programmers. If we made our company 500k/yr profit before, AI makes us 1M/yr profit now. What if we bring on 2x as many programmers: do we make 2M/yr profit?

I understand not all jobs scale like this, but at least my industry has demand go up since we are more efficient.


Thank you, this is what I've been trying to say for a while now.

It feels like people are assuming there's only so much work that can actually be done, and I don't think that's what happens in practice; we figure out how to exploit the resources more rather than doing "the same with less".

Basically I feel most people are not thinking in terms of Gustafson's Law: https://en.wikipedia.org/wiki/Gustafson%27s_law
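For reference, the law itself is a one-liner; here's a rough sketch with purely illustrative numbers (not a claim about real AI productivity figures):

    # Gustafson's law: scaled speedup S(N) = N - alpha * (N - 1),
    # where N is the number of workers and alpha the serial fraction of the work.
    def gustafson_speedup(n_workers: int, serial_fraction: float) -> float:
        return n_workers - serial_fraction * (n_workers - 1)

    # Illustrative only: with 10% of the work inherently serial, 8x the "hands"
    # (people or tooling leverage) still yields ~7.3x more work done overall,
    # because the workload grows to fill the capacity.
    print(gustafson_speedup(8, 0.10))  # 7.3

The point being: when capacity grows, the problem size tends to grow with it, rather than the headcount shrinking.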


I can see it turning the job into a quasi product/engineering-manager role, and some may not have chosen this new, more social version of the career.

For some of my smaller projects, AI makes them vastly more efficient to do, but it is far from the hacker-style type of work they started as. For many of them, the human communication, typing the change request into ChatGPT, and pasting the new code into the file are all that remain of the work.

Memory and storage used to be precious resources in the computing world, but now nobody blinks at shipping webpages that are 5MB, APIs that take 3 seconds because there is now a spinner to distract people, or entire browser instances for a calculator.

I know a few software devs who didn't take that transition well: they loved knowing the little details, and then the paradigm changed to "nobody cares."

Dev used to be an option for a quiet solitary job. I can see that type of role disappearing.


Oh definitely, no doubt that the world is changing, and of course not everyone is going to like the change. I wasn't exactly slap-happy that everyone decided to start writing their desktop apps in a rebranded version of Chrome, but it makes sense when memory is infinite and CPUs are fast.

I just don't think it's quite as doom-and-gloom as everyone is paranoid about. I feel like most of the jobs that could be easily replaced by AI also could have already fairly easily been outsourced to India. If you already have a job in a rich country like the US, it's likely that your employer thinks that it's necessary.


I think this is partly correct. It just allows you to do more work, and there is always more work to be done. That said, if every company does this, there isn't suddenly more money supply to just double everyone's profit. It's going to increase competition since the path from idea -> product is shorter, so you'll likely make the same amount of money while doing twice the work. In the end, your input effort and output profit will remain the same - the gearing in between will just be adjusted.


> And if it does the entire job, amazing!

Can you explain this line of thinking a bit? I see it a lot, but it never makes any sense to me. It's like being offended about being paid money, or feeling that you make too much, or that you'd be excited to get paid minimum wage, or better yet be homeless; I don't see the appeal...


> AI could reduce our staff by half, but that isn't what happens. Instead we just get more work done. The productivity increase seems to warrant spending more money on programmers.

https://en.wikipedia.org/wiki/Jevons_paradox


This whole "AI replacing engineers" statement would bear a lot more weight if it came from a company that doesn't have in-house model development teams like Shopify and Meta.


I think we're headed for an industry where "everybody is above average". What we think of today as work for junior/mid level engineers will be completely done by AI agents (multiple), all under the direction of a few experienced engineers.

How do we train the next generation of software engineers if nobody is hiring newbies? This is going to be the fundamental question for the industry over the next decade. The answer, I'm afraid, is going to look something like Musk's Twitter: an 80% drop in headcount, with only the highly motivated remaining.


The whole debate about whether or not he can realistically do this based on the state of the technology seems like a side issue to me. What really stands out is that he and CEOs like him would imply that a large, critical segment of their work force is not especially valuable to them, and also that they don't really see them as people. Rather, they are things that can be set aside as soon as they aren't needed.

These business owners aren't going to see us as people until we force them to. Unionize and strike.


Lol, you're no different as a coder from any other employee in any role on any jobsite. If a business owner can replace you with tech for less, they will. Unionizing won't save you from being made redundant.

You can't unionize against being outskilled by a machine. Much like in construction: why hire a hundred men to shovel when I can hire one bloke and an excavator? Why use a hammer when I've got a nail gun?


No need to unionize and strike in this case. Just convince people to stop using Meta's products. I've never had them (excl. a short period of WhatsApp) and never missed them.


This appears to be the next hype train after the Metaverse, of which we hear very little these days.

The logical AI consequence for Facebook is not that they'll be able to save on salaries; it is that people will get (and already are getting) tired of the Internet and will visit Facebook less.

But it fits with the coordinated effort between Starmer's AI hype and the new fake MAGA team of Musk, Sacks, etc.

I'm sure that replacing workers is exactly what Trump voters hoped for! Let us see how the Bannon/Musk row plays out.


So low-level engineers are already replaced by AI? "Low-level" meaning low tier, not low-level/hardware software engineers?




Sure, for some meaning of "work" we've been there already for decades. I don't think anyone is doubting that AI can increase productivity (and in this sense, displace midlevel engineers) but I don't see any reason to suspect that AI can replace a given engineer.


You can hire fewer people if they are more productive. Or you can do more things.


Sure, but that's not really replacing workers at all; it's just moving the work around. Same as every other automation technology of our lifetimes. Very odd to position this in terms of replacing workers rather than increasing productivity, even if the two are equivalent, as I pointed out.

I have no clue why Zuckerberg would want to position it this way if a) it's just going to get people more upset and b) he can't deliver on his claims.


My company makes a boatload of money fixing urgent stuff that companies cannot fix fast enough themselves while they lose money because it is broken. Even without AI this type of work has been on a steady rise since we started at the end of the '90s, and there is so much of it that we can just keep raising our hourly rates and only pick the work we like. Over the last year or so, we have been finding naively generated AI code acting as a kind of incident multiplier. I think we will need to find more people, as this is going to make us a lot more money.

Even if this is all bullshit and just hype, it will cause more naive/lazy AI code to flood the market, that's for sure.


Tell me more about this!


Not much to tell. Example: some company has been running for the past 20 years on some ERP, and things have been attached to it and built around it over that time. Suddenly it stops working for no clear reason, and the company basically is not functioning fully, or at all, losing whatever amount per hour. The original lead dev is gone or (in a recent case) died years ago, and there is not much documentation, no issue tracking, or even version management (it's a little HN myth that non-IT companies all use that stuff; they don't, including some IT/SaaS companies we've encountered). We get asked to fix it and make it run again. Sometimes we get asked to figure out a plan to fix it long-term as well.


In the Metaverse, no doubt.


Does this mean it has already replaced their bottom level engineers?


So the plan would be to have your most brilliant engineers reviewing code that was generated by AI? The next step will be having AI write "LGTM!" and automatically merge PRs.


I'm reminded of the GitHub issue opened by a machine, commented on by a machine, LGTM'd by a machine, and merged by a machine.

With a self-congratulatory post-merge success GIF posted by a machine.


Re Zuckerberg: how about AI replacing his own role?


Depending on the LLM, it may refuse on ethical grounds.


There’s so much mumbo jumbo on that platform that it might as well be the case?

It would be great to know what a mid-level engineer's day at Meta is like.


I don't think there are enough energy and compute resources to replace all mid-level engineers.


Maybe this is good; fewer engineers at Meta may mean more engineers solving the world's actual problems.


Can we have a Meta where AI replaces Zuckerberg and makes better decisions?


Sounds like he doesn't have a high opinion of his midlevel engineers.


I just have one question... are they replacing junior engineers yet?


This says as much about the kind of tasks these people are being given and the whole organization of the company as it does about AI's capabilities. This does not make rat penis implant recipient Mark Zuckerberg look as good as he probably thinks it does.


I recall a similar environment at Google where a lot of the software work was very tedious, on-the-rails work.


Forget about mid-level engineers.

Social media itself can be replaced if ChatGPT or whatever just takes everyone's chat history and connects us directly to people who care about similar interests.

I don't give a shit about getting the world's attention if only 6 people in the world actually care about the things I care about. Social media's current architecture is pure shit.


It has been admitted and not even Zuckerberg is hiding it.

First it was 'juniors', then 'mid-level', and it WILL be seniors next (as they are already expensive). There is a high certainty of significant job displacement with the introduction of these "AI agents".

Once again, as predicted in other threads [0], with ever more proposals rushing toward the AGI scam, what it is really after is complete job displacement with no alternative for those lost jobs.

2030 is their deadline. I'm giving you five years' notice to prepare, as I am already doing this year. [1]

[0] https://news.ycombinator.com/item?id=42651672

[1] https://news.ycombinator.com/item?id=42563239


What are you doing to prepare?

My thoughts are:

I might as well enjoy life now since the future me will need no savings as the age of abundance will take care of me.

I might switch careers to accelerate this future. Whatever I’m working on now isn’t that important because 5 years from now it will be done by AGI if this is brought about.

I should stake a claim in the AGI future. Own or invest in a company that will increase in value given this future.

I could start preparing by switching careers. Maybe becoming a product manager or manager that manages AGI for a business or picking a career that AGI can’t do. Maybe an electrician or doctor or plumber. But is anything safe?

Prepare for the Terminator future. Find the weakness in machines and figure out how to destroy them.


Direct translation: "We at Meta are a super innovative company and among the leaders in AI. Our valuation will go through the roof this year." (Buy stocks now, I need to catch up with my billionaire "friends".)


Is this not exactly what one would expect from the guy who said, "If you need info on anyone at Harvard just ask, they 'trust me', dumb fucks"?


I'll believe it when I see it. As it stands Zuck and Meta directly benefit from hyping up AI to the moon.


If you believe that I've got a bridge to sell you.


META is a bigger short than I would have guessed. Gingerbread man spent too much time getting high on his own supply.


AI will never replace bridge salesmen.


lol, Zuck couldn't even match TikTok's tech and he's talking about AI, smh




