I've heard analysis that a significant proportion of the Middle East's water and food supply - in the form of desalination plants, and cargo through the Strait of Hormuz - is within reach of Iran's capabilities. The logic is that you use this to collapse Middle Eastern economies, which blocks the flow of investment from Arab states into the AI infrastructure that is propping up the US economy.
Iran is unlikely to target food and water without being further backed into a corner, since the escalation would mean reciprocal strikes (possibly independently by the KSA air force) on the Kharg and Bandar Abbas export terminals, which have so far avoided being targeted.
Israel will not tolerate an economically developed or at all democratic Iran. They want Iran to be where Gaza is and where Lebanon is heading. It's not just the leaders who have nothing to lose.
You would think the Hebrew-speaking population is just exhausted and hardened and driven a little crazy by several generations of 'deathtoisrael' 'deathtoisrael' drilled into the brains of millions of schoolchildren every year - each /year/'s school intake of impressionable brains is larger than the total world Jewish population. In the end the Hebrew speakers' response will not be simply rational, as they are as much mentally affected as the Iranian citizenry, which is orders of magnitude larger.
It's typical that the world community has put up with the naked genocidal intent of the Iranian government - which is by now in a sense woven into its constitution and mystical-apocalyptic self-conception - as if it were a curious musical style, as they built militias saying the same on every border, financed the bizarre suicide campaigns of the early 2000s, etc., to stop a two-state solution and keep the party going.
With 'deathtoamerica deathtoamerica' noblesse oblige requires us to pretend it is merely comical. But the 'uppity, arrogant' Jewish state is microscopic by comparison with the titanic Persian Empire. The disproportion (80x) is far more extreme than even the USSR or USA v Afghanistan, or USA v Vietnam (30x).
You're talking about Iran's genocidal intent as if Israel has not just finished actually committing one. As for their supposed exhaustion from their neighbors' hate - the truth is they are gleefully hyper-aggressive and hyper-violent, and ultra-racist as well.
"Until these reasonable conditions are met, the war will go on." :) :)
You seem to have a much more imperative and concise view than either Trump or Hegseth, are you secretly the one in charge of everything?
I loathe the Iranian regime, but not much of that is going to happen.
The first response would be:
"We stop threatening Israel when they stop taking Palestinian lands and killing them"
"The US moves all troops out of the Middle East"
"Stops trying to overthrow our government"
"Stops sanctioning us"
"Stops interfering in our domestic affairs"
"Returns the $100B stolen / frozen since 1979"
And that would literally put that truly bad regime on almost the moral high ground.
The impossible imperative that you've written, I think, helps us understand the chock-a-block logjam of the situation, and why this is so thorny.
The most plausible outcome is nothing.
The second most plausible outcome is descent into massive factional civil war.
Maybe somewhere down the line, there is a 'Shah Installed by the US' and then I'm afraid to tell you that we 'Already Did That' and look how it turned out.
There are no decisive answers and no decisive options on the table.
Idk why you’re getting downvoted, the U.S. is looking for a quick win if it can have it. One of the destructive streaks in the Iranian regime (and among its proxies) is their glorification of martyrdom over pragmatism. It’s cost them historically and it’s costing them now.
"the U.S. is looking for a quick win if it can have it."
?? What is a 'quick win' ??
Serious question.
- Ayatollah dead, next Ayatollah?
- 10 years of civil war?
- Installation of US-backed Shah, with no real power base, which will lead to ongoing insurrection, which is what led to ... Islamic Revolution ... because we already tried that?
- Faux acquiescence at the negotiating table ... while they dig their HQs further under ground?
What else?
A quick win is possible, but one of the most narrow scenarios.
It looks at the moment like everyone is losing, some worse than others.
An Iranian Delcy Rodriguez. Someone who waves a white flag, accepts Trump's terms and then gets left alone for the most part.
Even if it's a bluff to buy time to rebuild, I'm not seeing the strategic rationale for refusing to negotiate. (I do see Iran's leaders being internally constrained to keep fighting. We saw a similar dynamic in Japan during WWII.)
> My understanding is that they were actively negotiating
The only parties I’ve seen say this are Iran and Oman. Practically every other source, including non-English, concedes taking three weeks to come close to hashing out a framework isn’t serious.
> Trump was not authorized by Netanyahu to win that negotiation
This is nonsense. Netanyahu was pushing Trump. But if the Tehran team had capitulated, Trump would have taken the win. Hell, the capitulation could have been bullshit.
> outcome was foreordained
Of course it wasn’t. Ford wasn’t scrambled until weeks ago.
If you can read translated European, Indian, Chinese and Arab reporting on this, I’d strongly suggest it. The English-language stuff has inherited America’s obsession with Israel. That’s obviously germane here. But it isn’t as controlling as we like to make it seem.
Honest question, posed with all possible respect to a longtime HN user whom I am certain isn't a bot, an idiot, or a shill. Does Trump sound, to you, like a guy with a plan of his own, who can be trusted to execute that plan? Like someone who deserves the benefit of the doubt?
We're told that "Biden gave everything to Ukraine and didn't bother to replace it," but "Fortunately, I rebuilt the military in my first term," and now we have a "virtually unlimited supply, stocked and ready to WIN BIG!!!"
Where do you, personally, draw the line? What could he say to convince you that he isn't acting in our country's interests, or even his own?
I’m doing nothing of the sort. I’m carrying water for the evidence.
Iran was not negotiating in good faith regarding giving up their nuclear program. Israel was encouraging but not boxing in American decision making. Witkoff and Kushner were (a) genuinely trying to strike a deal and (b) against kinetics when they started. These are each supported by the preponderance of evidence, far more than their inverses.
> Where do you, personally, draw the line?
Where the evidence does. If that winds up being pro- or anti- whatever party, I’ll look at that second. Facts aren’t partisan.
I’m doing nothing of the sort. I’m carrying water for the evidence.
Unfortunately I don't feel that I have access to the "evidence" you're referring to.
Everything I know about the attacks on Iran, I learned from people who lie a lot. That's pretty much all they do. They begin lying shortly after they wake up in the morning but before they get out of bed, and then they don't stop lying until they enter the alpha state that night. After that they probably lie in their dreams, but to be fair I guess I don't have evidence of that either.
Facts aren’t partisan.
Are your non-partisan facts more like Kellyanne Conway's "alternative facts," or the kind that are actually true?
Trump wants the things I listed -- he has expressed them time and again for anyone remotely willing to listen.
If one is trying to negotiate while maintaining a theocracy, then Trump won't listen, but ending the theocracy is one of the listed points, so it's logically consistent.
You're not making sense. There's no reason to strike a deal with the US, as the US can't be trusted to uphold its deals - as it has demonstrated directly to Iran previously, and to its own "allies".
I mean, the Germans and Japanese and Iraqis probably couldn’t have trusted the U.S. either. They still surrendered because it was better than fighting a futile war.
Same here. Iran’s security chief (and, I’m assuming, de facto leader) messaging he’s ready to concede on those points certainly doesn’t put him in a worse position than he is now.
See the link I posted in the other reply. The people we've elected to lead the US are snakes. Even some of the more fanatical Iranians look sane, intelligent, rational, and trustworthy in comparison.
Hyperbole aside, that's irrelevant. Iran doesn't have the option of negotiating with someone else. Their options are to attempt diplomacy or face incremental annihilation as a modern state.
I don't consider it hyperbole. I grew up with those sorts of people. They are capable of beliefs you wouldn't believe.
As for Iran, about all they could have done, in retrospect, was to actually develop nuclear weapons rather than just hemming and hawing and bluffing and stalling for 40+ years. It's amazing that the country that invented chess could blunder that badly.
> about all they could have done, in retrospect, was to actually develop nuclear weapons
There were so many mis-steps.
Iran could have rejected autarky. Constrained the IRGC’s corruption. Not funded proxies that pissed off every one of their neighbors (except for, maybe, Turkmenistan).
Not supported Hamas when they decided to deputize a lobotomy ward. Not drip-fed Hezbollah's rockets into Israeli air defenses. Not fired at Israel in a symbolic move in 2024. Not half-assed their retaliation in 2025. Not assumed, with full faith, that Trump was bluffing, and thus neither (a) seriously negotiated in Geneva nor (b) bothered scattering their navy and air force in anticipation of strikes.
I’d actually argue an indigenous nuke was a strategic blunder for Iran. It cost them their economy and moral standing. Maybe pursuing Russia’s nuclear umbrella would have been a smarter move.
Since WW2, maybe, with a very, very small sample. But historically, if you belonged to a minority religion or heresy, you were better off living under a Muslim caliphate than a Christian monarch - not that it was great by any means. The Moors are a great example: they got what, a generation after the Reconquista? Two? The Hussites and especially the Cathars are probably the first religious genocides, with the public rape of one of their leaders and her daughters, and the destruction of all Cathar literature and religious texts.
Analysts have been playing fast and loose with this phrase. Within reach assuming no air defenses and able to be struck are separate. I believe Iran could hit this infrastructure if it were undefended. It’s not practically able to due to air defenses. (Iran already targeted the small Gulf states’ airports. Given how much food they import by air, that’s an attempted blockade strike.)
Those nations do not have the ability to defend civilian infrastructure - it's the weak point we see in Ukraine.
Nobody does.
These are large regions, AA coverage is narrow, and using F-16s to shoot down Shaheds wears them down fast.
If Iran has stockpiles, and the wherewithal for mission planning, they can steer them around AA and hit the 'back office' at will.
Those states also have no practice coordinating the in-between methods - they have only very expensive ways to stop Shaheds, and only jet fighters outside of AA coverage.
Now - hitting a lot of things like civic buildings etc. doesn't have much effect, but it depends how the civilians react and cope.
Some very specific things, like energy and desalination, are acute problems.
These are authoritarian states that can keep information dispersal minimal and the civilians will just have to 'eat it' - but only for so long.
The biggest damage will be to the Strait of Hormuz - if those drones can be used to hit oil tankers ... if there are enough munitions, it will be bad.
All of that said, China and India would be super duper upset about that, and Iran may depend on China for parts. They would have 'no friends' at that point.
So it's all plausible.
But it requires Iran to have capabilities.
The entire Middle East is lit up right now - and that puts US forces on a 'clock' - this is going to be an interesting form of attrition on all sides, not a good situation.
> if those drones can be used to hit oil tankers ... if there are enough munitions, it will be bad
Not really. During the Iran-Iraq war hundreds of tankers were sunk, including in the Strait [1]. Iran's supposed 'nuclear' option was mining the Strait. But for whatever reason, they weren't able to or chose not to do that.
> this is going to be an interesting form of attrition on all sides
I'm sceptical of this read. With missiles, the launchers are the weak link. With drones, the factories. In the meantime, the U.S. gets to refine the anti-drone kit it's been working on (based on lessons from Ukraine and the attacks on the Houthis).
The Iran-Iraq war was decades ago, every weapon system has changed.
Iran has anti-ship missiles they can fire with impunity at tankers - and - as I said Shaheds.
Shaheds are not yet used against moving targets, but it's plausible they are ready for that.
If they are, then they can close the straits.
The reason they would not likely close the straits is that China is the primary recipient of that Oil, and it's a bit of a client state. China paradoxically provides parts for those drones. And they are the 'last remaining frenemy ally of Iran'.
> Iran-Iraq war was decades ago, every weapon system has changed
Not the point. The point is when the world was even more dependent on oil, hundreds of tankers getting potted was no more than a major nuisance.
> Iran has anti-ship missiles they can fire with impunity at tankers
But limited launchers. And if by “with impunity” you mean losing launchers every time they fire, sure.
> Shaheds are not yet used against moving targets, but it's plausible they are ready for that
This would be an issue.
> then they can close the straits
As you say, this gives everyone in the Gulf, EU, China and India a motivated reason to ensure the war ends.
Also, Iran closing the Strait (note: singular) is self siege. A legitimate American strategy could be just waiting them out while potting shit from the air.
> the nuclear option is very nuclear
There are no nuclear tactics on the table at this time.
The prediction that there will be greater demand for specialist frontend and backend engineers is the one that surprises me. How do folks think about it? I've been assuming the opposite - that demand for specialists will decline because expectations around what a single person should be able to do will grow.
I'm a frontend specialist developer currently taking some time off to reskill - and trying to decide exactly which direction to go. My thinking was that I would need to lean into design and product - leverage my technical knowledge in building interfaces to be able to inform the product side. Knowing what is easy or hard to build hopefully would speed up the product side.
It would be quite the opposite, the magic man favours families, children, and population growth. The rejection of these beliefs seems to be what is detrimental to society. The other stuff I'll leave for you to decide.
I'm not convinced, I believe the institutions of church were and often still are the foundations of communities in many positive ways.
But the fact that they rest on an arbitrary belief in one of the popular gods does make it a pretty shaky foundation.
We see it right now: as belief in Christianity has dwindled, so too have the communities the church was supporting. The community can be separate from the belief, and probably should be if it is to support a greater community.
I think the west is in for a rude shock when it finally realises how fast China is developing technologically.
I was there a few months ago in Guangzhou, it was stunning to see how many EVs were on the road. You can tell too - because they have a distinctly coloured license plate.
The scale that China can achieve is just mind-boggling. We went into a giant mall - 7 levels. And it was all just jewellery. A whole mall! Blew my mind. Apply that sort of scale to technological development. They can do things other countries just can't because of that scale. Here's another example of a data center they just built:
I get the sense that some people are just starting to really cotton on to what China is really becoming - for example, that recent review from Marques Brownlee of the Xiaomi EV.
But still - most of the narrative in the west seems to be doomerism about their demographics and real-estate overinvestment. We will see.
It's going to be interesting to see if China can offset its oncoming demographic challenges with their technological progress.
While this is certainly true, I don't really know if molten salt reactors are the best example. The US and Russia shared molten salt reactor technology after the collapse of the Soviet Union, and then the US shared those details with China in 2014 under an international DOE initiative: https://web.archive.org/web/20130919025430/http://www.smartp...
The PRC owes much credit to the United States and the Soviets for its thorium reactor designs.
That's interesting - but I guess we need to understand better what innovation counts as a good example of Chinese incipient superiority.
If China is leapfrogging the west technologically, then it can still only be a very recent fact. It stands to reason that their current innovations will be developed on top of much of what they learned from the west.
That doesn't mean that a particular innovation does not deserve to be counted as an example of their progress, just because that innovation was on the shoulders of western tech. Most progress is done on the shoulders of others anyway. So I don't feel examples like this should be discounted.
But also - there are a lot of dimensions to think about here. For example, one dimension is the raw development of a technology. Maybe China has not developed so much newer tech (although I'm reading much to the contrary in this respect also). But another dimension is the economics involved in implementation. With scale comes so many advantages. Business models that just aren't economical in Western countries can thrive in China.
There was one example that really struck me. I was at a mall with my partner and her friends (who live in Guangzhou). Those friends wanted to order something to drink. One of them got out their phone, ordered those drinks and had them delivered outside the mall. This was less effort and cost in their minds than actually wandering around the mall trying to find the drinks they wanted. Heaps of people just get their morning coffee delivered.
You might not think this example is a good demonstration of technological superiority. Every country has the phone and internet tech for goods delivery. But leveraging the technology to this degree is only possible where the cost of delivery is so low - as it is in China.
So yeah - raw innovation is one dimension, but opportunity for implementation is another very important one as well.
Well, now you're moving the goalpost. I think China has every right to be proud of developing economically beyond what the Soviets were capable of.
Characterizing that success is easy, looking at the economy; China has tons of raw and manufactured resources, with a weak financial sector. That's the exact opposite of what you see in developed economies in America and Europe, and the service industries will reflect that. Oftentimes, a weak finance sector is reflected in a surplus of poor people (which can be confirmed looking at China's GDP/capita).
What you're describing in all of your anecdotes are not the nascent signs of superiority. It's cheap concrete and poor people who will do jobs that other people consider insulting. There are absolutely domains where China has met or exceeded the global high-water mark (HGVs, BVR missiles, expendable AESA/GaN radar, speaking as an aviation nerd) but much of that stems from the aforementioned surplus. The Soviets also had wonderful trinkets that the western world couldn't copy, but it didn't save them when they needed money and foreign support.
I bought the book - looks good! Would be keen to know which magazines they were originally published in. I feel you should include those references in the book (forgive me if I've missed them.)
All of the source references are in the section called "Permissions" at the end of the book, this is a common way that anthologies do references but I understand it is easy to miss!
I don't even understand why it's everyone else's problem to opt out.
How many of these AI companies would a person eventually have to track down opt-out processes for, just to protect their work from AI? That's crazy.
OpenAI should be contacting every single one and asking for permission - like everyone has to in order to use a person's work. How they are getting away with this is beyond me.
Copyright doesn't prevent anyone from "using" a person's work. You can use copyrighted material all day long without a license or penalty. In particular, anyone is allowed to learn from copyrighted material by reading, hearing, or seeing it.
Copyright is intended to prevent everyone from copying a person's work. That's a very different thing.
I think to make that argument you would need evidence that someone prompted ChatGPT to reword/misquote info directly from your blog, at which point the argument would be that that person is rewording/misquoting info directly from your blog, not ChatGPT.
I don't think so: The user is merely making a request for copyrighted material, which is not itself infringing, even if their request was extremely specific and their intent was obvious.
OpenAI would be the company actually committing the infringement and providing the copy in order to satisfy the request.
If the law suddenly worked the other way around, companies would no longer be able to prosecute people for hosting pirated content online, because the responsibility would lie with the users choosing to initiate the download.
Legally, you'd struggle to prove any form of infringement happened. Making a copy is fine. Distributing copies is what infringes. You'd need to prove that is happening.
That's why there aren't a lot of court cases from pissed off copyright holders with deep pockets demanding compensation.
> Copyright doesn't prevent anyone from "using" a person's work.
It should. The 'free and open internet' is finished because nobody is going to want to subject their IP to rampant laundering that makes someone else rich.
I can see this both ways. For the sake of argument, please explain why using IP to train an AI is evil, but using the same IP to train a human is good.
Note that humans use someone else's IP to get rich all the time. E.g. Doctors reading medical textbooks.
>Note that humans use someone else's IP to get rich all the time. E.g. Doctors reading medical textbooks.
You need a better example, a textbook was created with this exact purpose of sharing knowledge with the reader.
My second point: if you write a poem and I read it and memorize it, then publish it as my own with some slight changes, would you be upset?
If I get your painting, then use a script to apply a small filter to it then sell it as my own, is this legal? is my script "creative"?
These AIs are not really creative; they just mix inputs and then interpolate an answer. In some cases you can't guess what input image/text was used, but in other cases it was shown exactly which source was used and just copy-pasted into the answer.
> My second point, if you write a poem and I read it and memorize it, then publish it as my own with some slight changes you would be upset?
I feel the problem with analogizing to humans while trying to make a point against unlicensed machine learning is that applying the same moral/legal rules as we do to humans to generative models (learning is not infringement, output is only infringement if it's a substantially similar copy of a protected work, and infringement may still be covered by fair use) would be a very favorable outcome for machine learning.
> they just mix inputs and then interpolate an answer , is some cases you can't guess what input image/text was used
Even if you actually interpolated some set of inputs (which is not how diffusion models or transformers work), without substantial similarity to a protected work you're in the clear.
> is my script "creative"? [...] This AIs are not really creative [...]
There's no requirement for creativity - even traditional algorithms can make modifications such that the result lacks substantial similarity and thus is not copyright infringement, or is covered by fair use due to being transformative.
>I feel the problem with analogizing to humans while trying to make a point against unlicensed machine learning is that applying the same moral/legal rules as we do to humans to generative models (learning is not infringement, output is only infringement if it's a substantially similar copy of a protected work, and infringement may still be covered by fair use) would be a very favorable outcome for machine learning.
Agreed, copyright is clear - so if I can make ChatGPT output copyrighted material, then OpenAI should pay me, correct? Or will you claim that this is rare, a mistake, and we should forgive OpenAI, while a human would have had to pay damages?
> so if I can make ChatGPT output copyrighted material then Open AI should pay me correct?
If by "make" you mean you're coaxing it into outputting your work, it'd be difficult to allege damages. If you show it's regurgitating your registered work to normal users, and it's not covered by fair use factors (e.g: it's outputting a significant portion of your work, in a non-transformative manner, and this is negatively impacting the market for that work), then you'd have a good case to bring.
> Or you will claim that this is rare, a mistake and we should forgive OpenAI
Rarity will affect damages, but they wouldn't be off the hook if such a situation does happen. To my knowledge no safe harbor applies here, given it's their own bot and not human users.
Is the AI allowed to decide unprompted how to spend the money? Can it decide that it doesn't like the people who made it and donate it to charity? Can the AI start its own company and not hire anyone that made it? Can the AI decide that it prefers the open Internet and will answer all questions for free?
"For the sake of argument" is a coward's way of expressing an unpopular opinion in public. Join a debate club if you're actually being genuine.
If your business model depends on the Roberts' court kneecapping AI, pivot. Training does not constitute "copying" under copyright law because it involves the creation of intermediate, non-expressive data abstractions that do not reproduce or communicate the copyrighted work's original expression. This process aligns with fair use principles, as it is transformative, serves a distinct purpose (machine learning innovation), and does not usurp the market for the original work.
I believe there are some other issues other than just "is it transformative".
I can't take an Andy Warhol painting, modify it in some way and then claim it's my own original work. I have some obligation to say "Yeah, I used a Warhol painting as the basis for it".
Similarly, I can't take a sample of a Taylor Swift song and use it myself in my own music - I have to give Taylor credit, and probably some portion of the revenue too.
There's also still the issue that some LLMs and (I believe) image-generation AI models have regurgitated works from their training data - in whole or in part.
>I can't take an Andy Warhol painting, modify it in some way and then claim it's my own original work. I have some obligation to say "Yeah, I used a Warhol painting as the basis for it".
If you dont replicate Warhols painting entirely, then you are fine. Its original work.
The number of Scifi novels I read that are just an older concept reimagined with more modern characters is huge.
>I can't take an Andy Warhol painting, modify it in some way and then claim it's my own original work. I have some obligation to say "Yeah, I used a Warhol painting as the basis for it".
In most sane jurisdictions you can sample other work. Consider collage. It is usually a fair use exemption outside of the USA. If LLMs cause keyboard warriors to develop some seppocentric mindvirus leading to the destruction of collage I will be pissed.
>There's also still the issue that some LLMs and (I believe) image-generation AI models have regurgitated works from their training data - in whole or in part.
Considered a high-priority bug and stamped out. Usually it's in part because a feature is common to all of an artist's work, like their signature.
> I can't take an Andy Warhol painting, modify it in some way and then claim it's my own original work.
This is a hilarious choice of artist given that Warhol is FAMOUS for appropriating work of others without payment, modifying it in some way, and then turning around and selling it for tons of money. That was the entire basis of a lot of his artistic practice. There was even a Supreme Court case about it.
There was a time when it did not usurp the market for the original work, but as the technology improves and becomes more accessible, that seems to be changing.
I don't even understand why it's everyone elses problem to opt-out.
Because the work being done, from the point of view of people who believe they are on the verge of creating AGI, is arguably more important than copyright.
Less controversially: if the courts determine that training an ML model is not fair use, then anyone who respects copyright law will end up with an uncompetitive model. As will anyone operating in a country where the laws force them to do so. So don't expect the large players to walk away without putting up a massive fight.
For one thing, they are focused on money because they need lots of it to do what they're doing.
For another, the o1-pro (and presumably o3) models are not "underwhelming" except to those who haven't tried them, or those who have an axe to grind. Serious progress is being made at an impressive pace... but again, it isn't coming for free.
Oh please. OpenAI and I guess every other AI company are for-profit.
The only change they are motivated by is their bank balances. If this were a less useful tool they’d still be motivated to ignore laws and exploit others.
Hard to say what motivates them, from the outside looking in. There have been signs of cultlike behavior before, such as the way the rank and file instantly lined up behind Altman when he was fired. You don't see that at Boeing or Microsoft.
Obviously it's a highly-commercial endeavor, which is why they are trying so hard to back away from the whole non-profit concept. But that's largely orthogonal to the question of whether they feel they are doing things for the benefit of humanity that are profound enough to justify blowing off copyright law.
Especially given that only HN'ers are 100% certain that training a model is infringement. In the real world, this is not a settled question. Why worry about obeying laws that don't even exist yet?
>Especially given that only HN'ers are 100% certain that training a model is infringement. In the real world, this is not a settled question. Why worry about obeying laws that don't even exist yet?
This is exactly why people are against it.
Your argument is that there is no definitive law. Therefore the creators of the data you scrape to train, and their wishes are irrelevant.
If the motivation was to help humanity, they’d think twice about stepping on the toes of the humanity they want to save and we’d hear more about nontrivial uses.
Your argument is that there is no definitive law. Therefore the creators of the data you scrape to train, and their wishes are irrelevant.
Correct, that is the position of the law. Here in America, we don't take the position, held in many other countries, that everything not explicitly permitted is forbidden. This is a good thing.
>If the motivation was to help humanity, they’d think twice about stepping on the toes of the humanity they want to save
Whether it is permissible to train models with copyrighted content is up to the courts and Congress, not us. Until then, no one's toes are being stepped on. Everybody whose work was used to train the models still holds the same rights to that work that they held before.
>Until then, no one's toes are being stepped on. Everybody whose work was used to train the models still holds the same rights to that work that they held before.
And yet artists don’t feel like their work should be used for training.
I’m not sure how you can argue that the intentions are unknowable, when clearly you and the AI companies don’t care about the people whose work they have to use to train their models and these people’s wishes. Motivation is greed.
>And yet artists don’t feel like their work should be used for training.
The law isn't really all that interested in how "artists feel." Neither am I, as you've surmised. The artists don't care how I feel, so it would be kind of weird for me to hold any other position.
In any case, copyright maximalism impoverishes us all.
How does an average joe evaluate the claim that their content moderation was bad? Cause folks on the left seem very upset that it's being replaced by notes, and folks on the right seem very glad that it's going. How do I judge this for myself?
On the one hand you have gurus claiming that AI agents are going to make all SaaS redundant; on the other, gurus claiming that AI isn't going to take my coding job, but that I need to adapt my workflows to incorporate AI. We all need to start preparing now for the changes that AI is going to cause.
But these two claims aren't compatible. If AGI and these super agents are that bonkers amazeballs that they can replace entire SaaS companies - then there is no way I'm going to be able to adapt my workflows to compete as a programmer.
Further, if the wildest claims about AI end up proving to be true - there is simply no way to prepare. What possible adaptation to my workflow could I possibly come up with that an AI agent could not surpass? Why should I bother learning how to implement (with today's apis) some RAG setup for a SaaS customer service chatbot when presumably an AI agent is going to make that skillset redundant shortly after?
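For what it's worth, the RAG setup being dismissed here is conceptually small. A toy sketch, with bag-of-words similarity standing in for a real embedding model and a made-up three-document "knowledge base" (all names and documents invented for illustration):

```python
import math
import re
from collections import Counter

# Toy "knowledge base" for a hypothetical SaaS support bot.
DOCS = [
    "To reset your password, use the link on the login page.",
    "Invoices are emailed on the first business day of each month.",
    "API keys can be rotated from the account settings panel.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; a stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the document most similar to the query."""
    qv = vectorize(query)
    return max(DOCS, key=lambda d: cosine(qv, vectorize(d)))

def build_prompt(query: str) -> str:
    """Stuff the retrieved context into the prompt an LLM would answer."""
    context = retrieve(query)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I reset my password?"))
```

In a real deployment the retrieval step uses an embedding API and a vector store instead of word counts, but the shape of the pipeline (retrieve, then stuff into the prompt) is the same, which is sort of the parent's point: the skillset is thin enough that an agent plausibly automates it.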
I'm going to be interviewing for frontend roles soon, and for my prep I'm just going back to basics and making sure I can recall on demand all the fundamentals: CSS, HTML, JS/TS. Fuck the rest of this noise.
Programmers don't work in isolation. So I don't know how necessary it would be to quickly adapt your workflows to compete. If there's something that's useful to adopt, there will be a stream of blog posts, coworkers, people at user groups and what not spoon feeding what they learned to others. I don't think there's much cause for FOMO, I don't think it makes a big difference whether you start using a faster way to work a few months earlier or later than others. It can be cheaper to not jump on any hype train and potentially miss out on genuine improvements for a while, than to jump on all the hype trains and waste a lot of time on stuff that goes nowhere.
And like you said, if the wildest claims hold true, all programmers are out of a job by the end of 2026 anyway, with all other jobs following over the course of a few years. There's too many variables to predict what would happen in such a scenario, so probably best to deal with it if it happens.
So to me, your strategy checks out. I've personally invested some time into code-generating and agentic tooling, but ultimately went back to Claude-as-Google-replacement. By my estimation, about a 5-10% productivity boost compared to my workflow in 2022. The work is about the same, I just learn a bit faster.
> And like you said, if the wildest claims hold true, all programmers are out of a job by the end of 2026 anyway, with all other jobs following over the course of a few years. There's too many variables to predict what would happen in such a scenario, so probably best to deal with it if it happens.
So much this. AGI is the equivalent of a nuclear apocalypse in many ways—it's unlikely, not unlikely enough for comfort, but also totally not worth preparing for because there's basically no way to predict what preparations would actually be helpful, nor is it obvious that you'd even want to survive it if it happened.
The expected value of prepping for it isn't worth the investment, so it's better to do what most of us already do for nuclear war and pretty much pretend it won't happen.
I need an AI agent to continuously ask questions of PMs or stakeholders until the requirements are less vague. The good thing is this would be a plain english discussion which LLMs are good at. A PM can ask if something is technically feasible to some degree too. Maybe it can even break up tickets in a much better fashion too.
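A loop like that is easy to prototype even without a product behind it. A toy sketch, where a canned dictionary of vague terms stands in for an LLM's judgment about what's underspecified, and `ask_pm` is whatever callable returns the stakeholder's answer (every name and term here is hypothetical):

```python
# Hypothetical map from a vague term to the clarifying question an
# agent would ask; a real agent would have an LLM generate these.
VAGUE_TERMS = {
    "fast": "What latency target counts as fast?",
    "soon": "What is the actual deadline?",
}

def find_gaps(text: str) -> list[str]:
    """One clarifying question per vague term still in the ticket."""
    return [q for term, q in VAGUE_TERMS.items() if term in text.lower()]

def clarify(ticket: str, ask_pm) -> str:
    """Keep asking until no vague terms remain; loops because a PM's
    answer could itself be vague. Case-insensitive replacement is
    skipped for brevity, so keep the ticket lowercase-ish."""
    while (gaps := find_gaps(ticket)):
        question = gaps[0]
        term = next(t for t, q in VAGUE_TERMS.items() if q == question)
        answer = ask_pm(question)
        # Swap the vague word for the concrete answer in place.
        ticket = ticket.replace(term, answer)
    return ticket

# A canned PM standing in for a human (or a second LLM):
answers = {
    "What latency target counts as fast?": "under 200 ms",
    "What is the actual deadline?": "by the March release",
}
print(clarify("Search must be fast and ship soon.", answers.get))
```

The interesting engineering in a real version is all in `find_gaps` (spotting ambiguity) and in knowing when to stop asking, which is exactly the plain-English judgment LLMs are comparatively good at.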
I’m a PM; today I built a working mockup with Windsurf (Golang + Wails + Vue.js + DuckDB). Windsurf is made by Codeium and branded as the first agentic IDE.
Your requirements will improve; I'm not sure if, in the long run, I'll still need developers to build the actual software.
The development process with Windsurf is a bit like rolling a die, hoping for a 6. A lot of trial and error, but if you check the git log, you see about 15 minutes between commits, one per feature request. Windsurf does a good job of summarizing the entire feature-request chat into a short git commit message. Every git commit reads like a user story.
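That inter-commit gap is easy to measure from the log yourself. A rough sketch, using made-up timestamps in the ISO 8601 shape that `git log --format=%cI` emits:

```python
from datetime import datetime

# Committer dates as printed by `git log --format=%cI`; these three
# values are invented for illustration.
stamps = [
    "2025-01-05T10:00:00+00:00",
    "2025-01-05T10:14:00+00:00",
    "2025-01-05T10:31:00+00:00",
]

times = [datetime.fromisoformat(s) for s in stamps]
# Minutes between each pair of consecutive commits.
gaps = [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]
print(f"average gap: {sum(gaps) / len(gaps):.1f} min")  # prints: average gap: 15.5 min
```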
How… do I find PMs like you? Literally have never worked with a single one that bothered to understand the technology they are building on top of at a deep enough level.
Maybe I just need to teach the ones I work with that it is now possible to trivially prototype many ideas without much or any coding skill.
Most PMs resist this because the responsibility for understanding the requirements then falls on them. That has traditionally been the role of architects, analysts, developers, and other stakeholders, and if you replace those people with an LLM, well, it doesn't have the ability to be a true stakeholder in that way.
There are just words on Genatron's webpage. Not a single screenshot or video, no example output, no customer statements. Even the technical details are very thin. It doesn't give me a good impression of what they're trying to sell.
As a PM, ChatGPT is great at helping me write tickets in a structured format from just a single sloppy sentence. I of course review the output to make sure it's understanding me properly. But having to explicitly write things like intended behaviors when submitting bugs can be really laborious, though I understand why engineers sometimes need that level of clarity (having been one myself for 15 years).
I have not seen one in production, but I did see 'agent products' sold to financial companies for compliance purposes (sanctions, mortgage, other regs). Fascinating stuff that got me mildly interested in MS troupe.
Not by name (edit: and in corporate, product names seem to change a lot from where I sit), but every bigger consulting company/vendor[2] that works with banks/brokers/financial institutions right now seems to have at least some offering in that space to ride the AI wave. The presentation I saw was specifically from Crowe[1].