>Chris Lehane, OpenAI’s vice president of global affairs, said in an interview that the US AI Safety Institute – a key government group focused on AI – could act as the main point of contact between the federal government and the private sector. If companies work with the group voluntarily to review models, the government could provide them “with liability protections including preemption from state based regulations that focus on frontier model security,” according to the proposal.
Given OpenAI's history and relationship with the "AI safety" movement, I wouldn't be surprised to find out later that they also lobbied for the same proposed state-level regulations they're seeking relief from.
With regulatory capture, at least the companies that pushed for the regulation in the first place comply with it (and hopefully the regulation is not worthless). This behaviour by ClosedAI is even worse: push for the regulation, then push for the exemption.
Regulatory capture is usually the company pushing for regulations that align with the business practices they already implement and would be hard for a competitor to implement. For example, a car company that wants to require all other manufacturers to build and operate wind tunnels for aerodynamics testing. Or, more realistically, regulations requiring 3rd-party sellers for vehicles.
I haven't heard that definition of "Regulatory Capture" before. I mostly thought it was just when the regulators are working for industry instead of the people. That is, the regulators have been "Captured." The politicians who nominate the regulatory bodies are paid off by industry to keep it that way.
Regulatory capture has different flavours, but it basically comes down to the regulated taking control of or significantly influencing the regulator. It can be by the complete sector, but in my experience most often by the leading incumbents in a domain.
It can take the form of keeping regulation mild or looking the other way, but just as often it means putting high-cost, high-compliance burdens in place to pull up the drawbridge on new entrants.
I’ve seen this happen many times during the RFI/RFP process for large projects: the largest players put boots on the ground early and try to get into the ears of the decision makers and their consultants and “helpfully” educate them. On multiple occasions I’ve seen requests actually use a specific vendor’s product name as a generic term without realizing it, when their competitors’ products worked in a completely different way and didn’t have a corresponding component in their offering.
I agree. I wasn't trying to strictly define it, just to specify the form it usually takes.
In the case of OpenAI, were I to guess, they'll likely do things like push for stronger copyright laws or laws against web scraping. Things that look harmless but ultimately will squash new competitors in the AI market. Now that they already have a bunch of the data to train their models, they'll be happy to make it a little harder for them to get new data if it means they don't have to compete.
Regulators can require all manufacturers to build and operate wind tunnels for aerodynamics testing, or alternatively allow someone from South Africa to be president.
That's the first time I've ever heard someone make this unusual and very specific definition. It's almost always much simpler - you get favorable regulatory findings and exemptions by promising jobs or other benefits to the people doing the regulating. It's not complicated, it's just bribery with a different name.
We all predicted this would happen but somehow the highly intelligent employees at OpenAI getting paid north of $1M could not foresee this obvious eventuality.
Trump should have a Most Favored Corporate status, each corporation in a vertical can compete for favor and the one that does gets to be "teacher's pet" when it comes to exemptions, contracts, trade deals, priority in resource right access, etc.
Can you explain why this is associated with fascism specifically, and not any other form of government which has high levels of oligarchical corruption (like North Korea, Soviet Russia, etc).
I am not saying you’re wrong, but please educate me: why is this form of corruption/cronyism unique to fascism?
It might be basic, but I found the Wikipedia article to be a good place to start:
> An important aspect of fascist economies was economic dirigism,[35] meaning an economy where the government often subsidizes favorable companies and exerts strong directive influence over investment, as opposed to having a merely regulatory role. In general, fascist economies were based on private property and private initiative, but these were contingent upon service to the state.
It's rather amusing reading the link on dirigisme given the context of its alleged implication. [1] A word which I, and I suspect most, have never heard before.
---
The term emerged in the post-World War II era to describe the economic policies of France which included substantial state-directed investment, the use of indicative economic planning to supplement the market mechanism and the establishment of state enterprises in strategic domestic sectors. It coincided with both the period of substantial economic and demographic growth, known as the Trente Glorieuses which followed the war, and the slowdown beginning with the 1973 oil crisis.
The term has subsequently been used to classify other economies that pursued similar policies, such as Canada, Japan, the East Asian tiger economies of Hong Kong, Singapore, South Korea and Taiwan; and more recently the economy of the People's Republic of China (PRC) after its economic reforms,[2] Malaysia, Indonesia[3][4] and India before the opening of its economy in 1991.[5][6][7]
It’s a poor definition. The same “subsidization and directive influence” applies to all of Krugman’s Nobel-winning domestic-champion, emerging-market development leaders, in virtually all ‘successful’ economies. It also applies in the context of badly run, failed and failing economies. Safe to say this factor is only somewhat correlated. Broad assertions are going to be factually wrong.
The key element here is that the power exchange in this case goes both ways. The corporations do favors for the administration (sometimes outright corrupt payments and sometimes useful favors, like promoting certain kinds of content in the media, or firing employees who speak up). And in exchange the companies get regulatory favors. While all economic distortions can be problematic (national champion companies probably have tradeoffs), this is a form of distortion that hurts citizens both by distorting the market and by distorting the democratic environment by which citizens might correct the problems.
All snakes have scales, so there is a 100% correlation between being a snake and having scales.
That does not imply that fish are snakes. Nor does the presence of scaled fish invalidate the observation that having scales is a defining attribute of snakes (it's just not a sufficient attribute to define snakes).
For correlation to be 1, it's not enough that all snakes have scales. You also need all scaly animals to be snakes.
Here's a toy example. Imagine three equally sized groups of animals: scaly snakes, scaly fish, and scaleless fish. (So all snakes have scales, but not all scaly animals are snakes.) That's three data points (1,1) (0,1) (0,0) with probability 1/3 each. The correlation between snake and scaly comes out as 1/2.
You can also see it geometrically. The only way correlation can be 1 is if all points lie on a straight line. But in this case it's a triangle.
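If anyone wants to sanity-check that 1/2, here is a minimal sketch in Python (numpy assumed available; the toy data is just the three animals described above):

```python
import numpy as np

# Toy population from the comment above, one row per animal:
# columns are (is_snake, has_scales).
animals = np.array([
    [1, 1],  # scaly snake
    [0, 1],  # scaly fish
    [0, 0],  # scaleless fish
])

# Pearson correlation between "is a snake" and "has scales".
r = np.corrcoef(animals[:, 0], animals[:, 1])[0, 1]
print(r)  # ~0.5: "all snakes have scales" does not give a correlation of 1
```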
You’re looking for the logical argument here, not the statistical one. You sampled from snakes and said there is a 100% correlation with being a snake (notwithstanding the counterarg in an adjacent comment about scale-free snakes).
I am noting that the logical argument does not hold in the provided definition. If “some” attributes hold in a definition, you are expanding the definitional set, not reducing it, and thus creating a low-res definition. That is why I said: ‘this is a poor definition.’
So then you agree that the original post that called this "text book fascism" was wrong, as this is just one very vague, and only slightly correlated property.
Yeah, fascism, communism, etc. aren't abstract ideals in the real world. Instead they are self-reinforcing directions along a multidimensional political spectrum.
The scary thing with fascism is just how quickly it can snowball because people at the top of so many powerful structures in society benefit. US Presidents get a positive spin by giving more access to organizations that support them. Those kinds of quiet back room deals benefit the people making them, but not everyone outside the room.
That's not fascism, that is the dysfunctional status quo in literally every single country in the world. Why do you think companies and billionaires dump what amounts to billions of dollars on candidates? Often times it's not even this candidate or that, but both!
They then get access, get special treatment, and come out singing the praises of [errr.. what's his name again?]
It’s not Fascism on its own, but it’s representative of the forces that push society to Fascism.
Start looking and you’ll find powerful forces shaping history. Sacking a city is extremely profitable throughout antiquity, which then pushes cities to have defensive capabilities which then…
In the Bronze Age trade was critical, as having copper ore alone wasn't nearly as useful as having copper and access to tin. Iron, however, is found basically everywhere, as were trees.
Such forces don’t guarantee outcomes, but they have massive influence.
Socialism and communism are state ownership. Fascism tends toward private ownership and state control. This is actually easier and better for the state. It gets all the benefit and none of the responsibility and can throw business leaders under the bus.
All real world countries have some of this, but in fascism it’s really overt and dialed up and for the private sector participation is not optional. If you don’t toe the line you are ruined or worse. If you do play along you can get very very rich, but only if you remember who is in charge.
“Public private partnership” style ventures are kind of fascism lite, and they always worried me for that reason. It’s not an open bid but a more explicit relationship. If you look back at Musk’s career in particular there are ominous signs of where this was going.
The private industry side of fascist corporatism is very similar to all kinds of systematic state industry cronyism, particularly in other authoritarian systems that aren't precisely fascist (and named systems of government are just idealized points on the multidimensional continuum on which actual governments are distributed, anyway). What distinguishes fascism in particular is the combination of its form of corporatism with xenophobia, militaristic nationalism, etc., not the form of corporatism alone.
I think it is associated with fascism, just from the other party.
This is a pretty common fascist practice, used all over Europe and in any left-leaning country: governments use regulation to make doing business at large scale impossible, then give the largest players exemptions, subsidies and so on. Governments gain enormous leverage to ensure corporate loyalty, silence dissenters and combat opposition, while the biggest players secure their place at the top and gain protection from competitors.
So the plan was to push regulations and then dominate the competitors with exemptions from those regulations. But the fascists lose the election, the regulations threaten to start working in a non-discriminatory manner, and that simply hinders business.
That's in progress. It's called the MAGA Parallel Economy.[1]
Donald Trump, Jr. is in charge. Vivek Ramaswamy and Peter Thiel are involved.
Azoria ETF and 1789 Capital are funds designed to fund MAGA-friendly companies.
But this may be a sideshow. The main show is US CEOs sucking up to Trump, as happened at the inauguration. That parallels something Putin did in 2000.
Putin called in the top two dozen oligarchs, and told them "Stay out of politics and your wealth won’t be touched." "Loyalty is what Putin values above all else.” Three of the oligarchs didn't do that. Berezovsky was forced out of Russia. Gusinsky was arrested, and later fled the country. Khodorkovsky, regarded as Russia’s richest man at the time (Yukos Oil), was arrested in 2003 and spent ten years in jail. He got out in 2013 and left for the UK. Interestingly, he was seen at Trump's inauguration.
Why are these idiots trying to ape Russia, a dumpster fire, to make America great again?
If there’s anyone to copy it’s China in industry and maybe elements of Western Europe and Japan in some civic areas.
Russia is worse on every metric, even the ones conservatives claim to care about: lower birth rate, high divorce rate, much higher abortion rate, higher domestic violence rate, more drug use, more alcoholism, and much less church attendance.
Because they aren’t interested in “making America great again”, that’s the marketing line used to sell it to American voters. They are solely interested in looting the nation for personal gain.
> That parallels something Putin did in 2000. Putin called in the top two dozen oligarchs, and told them "Stay out of politics and your wealth won’t be touched."
It does have an effect; it is just a slow and grinding process. And people have screwy senses of proportion - like old mate mentioning insider trading. Of all the corruption in the US Congress insider trading is just not an issue. They've wasted trillions of dollars on pointless wars and there has never been a public accounting of what the real reasoning was. That sort of process corruption is a much bigger problem.
A great example - people forget what life was like pre-Snowden. The authoritarians were out in locked ranks pretending that the US spies were tolerable - it made any sort of organisation to resist impossible. Then one day the parameters of the debate get changed and suddenly everyone is forced to agree that encryption everywhere is the only approach that makes sense.
How is it any more accessible now than it was before? Don't you have to fact-check everything it says anyway, effectively doing the research you'd do without it?
I'm not saying LLMs are useless, but I do not understand your use case.
I worry I'm just trying too hard to make it make sense, and this is a TimeCube [0] situation.
The most charitable paraphrase I can come up with is: "Bad people can't use LLMs to hide facts, hiding facts means removing source-materials. Math doesn't matter for politics, which are mainly propaganda."
However even that just creates contradictions:
1. If math and logic are not important for uncovering wrongdoing, why was "tabulation" cited as an important feature in the first post?
2. If propaganda dominates other factors, why would the (continued) existence of the Internet Archive be meaningful? People will simply be given an explanation (or veneer of settled agreement) so that they never bother looking for source-material. (Or in the case of IA, copies of source material.)
OMG Thank you - hilarious. TimeCube is a legend...
---
I am saying that AI can be used very beneficially to do a calculated dissection of the Truth of our Political structure as a Nation and how it truly impacts an Individual/Unit (person, family) -- and do so where we can get discernible metrics and utilize AI's understanding of the vast matrices of such inputs to provide meaningful outputs. Simple.
EDIT @MegaButts;
>>Why is this better than AI
People tend to think of AI in two disjointed categories; [AI THAT KNOWS EVERYTHING] v [AI THAT CAN EASILY SIFT THROUGH VAST EVERYTHING DATA GIVEN TO IT AND BE COMMANDED TO OUTPUT FINDINGS THAT A HUMAN COULDN'T DO ALONE]
---
Which do you think I refer to?
AI is transformative (pun intended) -- in that it allows for very complex questions to be asked of our very complex civilization in a simple way, in the EveryMan's hands...
Why is AI better for this than a human? We already know AI is fundamentally biased by its training data in a way where it's actually impossible to know how/why it's biased. We also know AI makes things up all the time.
If you don't understand the benefit of an AI augmenting the speed and depth of ingestion of Domain Context into a human mind.. then... go play with chalk. I as a smart Human operate on lots of data... and AI and such has allowed me to consume such.
The most important medicines in the world are MEMORY retention...
If you'd like a conspiracy: eat too much aluminum to give you Alzheimer's ASAP so your generation forgets... (based though, hope you understand what I am saying)
Can anyone say which of the LLM companies is the least "shady"?
If I want to use an LLM to augment my work, and don't have a massively powerful local machine to run local models, what are the best options?
Obviously I saw the news about OpenAI's head of research openly supporting war crimes, but I don't feel confident about what's up with the other companies.
E.g. I'm very outspoken about my preferences for open LLM practices like those of Meta and Deepseek. I'm very aware of the regulatory capture and pulling-up-the-ladder tactics by the "AI safety" lobby.
However. In my own operations I do still rely on OpenAI because it works better than what I tried so far for my use case.
That said, when I can find an open model based SaaS operator that serves my needs as well without major change investment, I will switch.
I'm not talking about me developing the applications, but about using LLM services inside the products in operation.
For my "vibe coding" I've been using OpenAI, Grok and Deepseek if using small method generation, documentation shortcuts, library discovery and debugging counts as such.
You need a big "/s" after this. Or maybe just not post it at all, because it's just value-less snark and not a substantial comment on how hypocritical and harmful OpenAI is (which they certainly are).
No moat means Joe Anybody can compete with them. You just need billions in capital, a zillion GPUs, thousands of hyper skilled employees. You need to somehow get the attention of tens of millions of consumers (and then pull them away from the competition, ha).
Sure.
The same premise was endlessly floated about eg Uber and Google having no moats (Google was going to be killed by open search, Baidu, magic, whatever). These things are said by people that don't understand the comically vast cost of big infrastructure, branding, consumer lock-in (or consumer behavior in general), market momentum, the difficulty of raising enormous sums of capital, and so on.
Oh wait the skeptics say: what about DeepSeek. To scale and support that you're going to need what I described. What's the plan for supporting 100 million subscribers globally with a beast of an LLM that wants all the resources you can muster? Yeah, that's what I thought. Oh but wait, everyone is going to run a datacenter out of their home and operate their own local LLM, uhh nope. It's overwhelmingly staying in the cloud and it's going to cost far over a trillion dollars to power it globally over the next 20 years.
OpenAI has the same kind of moat that Google has, although their brand/reach/size obviously isn't on par at this point.
365 is not taking off. Numbers are average at best. Most companies now pay 20/user/month extra, and whilst the sentiment is that it likely kinda is somehow worth it, nobody claims it would be better than break even. Many users are deeply disappointed with the overpromising in PowerPoint and Excel. Sure, it's quite useful in Outlook and the assistant is great for finding files in scattered SharePoints, but that's the limit of my value with it.
OpenAI copilot, not Microsoft Copilot, actually looks like a stronger product and they're going full force after the enterprise market as we speak. We're setting a demo in motion with them next month to give it a go.
We'll have to wait for the first one to crack PowerPoint, that'll be the gamechanger.
LLM usage is still gaining traction. OpenAI may not be on top anymore, but they still have useful services, and they aren’t going under anytime soon.
And time spent dealing with laws and regulations may decrease efficiency, leading to increased power consumption, resulting in greater water usage in datacenters for cooling and more greenhouse gas emissions.
Controlling demand for these services is something that could stop this, but this is technological progress, which could also enable solutions for global warming, hunger, and disease.
It’s an out-of-control locomotive. Prayer is the answer I’d think of.
If they’re not making money[1], and competitors are, or competitors are able to run at a negative for longer, then things could definitely wrap up for them quickly. To most consumers, LLMs will be a feature (of Google/Bing/X/Meta/OS), not a product itself.
Don't worry; they'll have plenty of time to regret that.
There's a reason they're sweating the data issue. As much as it sucks to say it, Google/Bing/Meta/etc. all have a shitton of proprietary human-generated data they can work with, train on, fine tune with, etc. OpenAI _needs_ more human generated data desperately to keep going.
I remember for years people on HN said Uber would never work as a profitable business because it spent a lot of VC money early on without having enough revenue to cover it all. It's been around for 16 yrs now despite running in the red until 2023.
Waymo has ~1000 cars. Uber has 8 million drivers. Worst case, Uber will be acquired, merge, or make a deal with one of the many AI driving startups.
I predict Waymo will have their own struggles with profitability. Last I heard the LIDAR kit they put on cars costs more than the car. So they'll have to mass produce + maintain some fancy electronics on a million+ cars.
>And time spent dealing with laws and regulations may decrease efficiency, leading to increased power consumption, resulting in greater water usage in datacenters for cooling and more greenhouse gas emissions.
They don't care about that if they get a regulatory moat around them.
There’s only so much of that you can do without it becoming a problem you have to deal with. There is a limited supply of water in any area of the earth.
It's a common tactic in new fields. Fusion, AI, you name it: they're all actively lobbying to get new regulation because they are "different", and the individual companies want to ensure that it's them that sets the tone.
Exactly. I'm reminded of Gavin Belson saying something along the lines of "I don't want to live in a world where someone makes it a better place to live than we do" in Silicon Valley.
If you get fined millions of dollars (for copyright, of course) if you're found to have anything resembling DeepSeek on your machine - no company in the US is going to run it.
The personal market is going to be much smaller than the enterprise market.
That would give an advantage to foreign companies. The EU tried that and while that doesn't destroy your tech dominance overnight, it gradually chips from it.
The artificial token commodity can now be functionally replicated on a per-location basis with $40k in hardware (far lower cost than Nvidia hardware).
Copyright licensing is just a detail corporations are well experienced dealing with in a commercial setting, and note some gov organizations are already exempt from copyright laws. However, people likely just won't host in countries with silly policies.
Salt was used to pay salaries at one time too, and ML/"AI" business models projecting information asymmetry are now paradoxical as a business goal.
Note: Data centers often naturally colocate with cold climates, low-cost energy generation facilities, and within fiber-optic distance of major backbones/hubs.
At a certain scale, energy cost is more important than location and hardware. The US just broke its own horse's leg with tariffs before the race. Not bullish on US domestic tech firms these days, and I sympathize with the good folks at AMCHAM who will ultimately be left to clean up the mess eventually.
If businesses have the opportunity to cut their operational variable costs by >25%, then one can be fairly certain these facilities won't be located on US soil.
>If businesses have the opportunity to cut their operational variable costs by >25%, then one can be fairly certain these facilities won't be located on US soil.
Is there opportunity? Lower risks and energy prices may well outweigh the cost of tariffs. It is not like any other horse in the race has perfectly healthy legs.
Depends on the posture, as higher-profit businesses may invest more into maintaining market dominance. However, the assumption that technology is a zero-sum economic game is dangerously foolish, and attempting to cheat the global free market is ultimately futile.
Moat is an Orwellian word and we should reject words that contain a conceptual metaphor that is convenient for abusing power.
"Building a moat" frames anti-competitive behavior as a defense rather than an assault on the free market by implying that monopolistic behavior is a survival strategy rather than an attempt to dominate the market and coerce customers.
"We need to build a moat" is much more agreeable to tell employees than "we need to be more anti-competitive."
It is pretty obvious that every use of that word is to communicate a stance that is allergic to free markets.
A moat by definition has such a large strategic asymmetry that one cannot cross it without a very high chance of death. A functioning SEC and FTC as well as CFPB https://en.wikipedia.org/wiki/Consumer_Financial_Protection_... are necessary for efficient markets.
Now might be the time to rollout consumer club cards that are adversarial in nature.
A "moat" is a fine business term for what it relates to, and most moats are innocuous:
* The secret formula for Coke
* ASML's technology
* The "Gucci" brand
* Apple's network effects
These are genuine competitive advantages in the market. Regulatory moats and other similar things are an assault on the free market. Moats in general are not.
I'm with you except for that last one. Innovation provides a moat that also benefits the consumer. In contrast, network effects don't seem to provide any benefit. They're just a landscape feature that can be taken advantage of by the incumbent to make competition more difficult.
I'm hardly the only one to think this way, hence regulation such as data portability in the EU.
I agree with you in general, but there are network effects at Apple that are helpful to the consumer. For example, iphone-mac integration makes things better for owners of both, and Apple can internally develop protocols like their "bump to share a file" protocol much faster than they can as part of an industry consortium. Both of these are network effects that are beneficial to the consumer.
I'm not sure a single individual owning multiple products from the same company is the typical way "network effect" is used.
The protocol example is a good one. However I don't think it's the network effect that's beneficial in that case but rather the innovation of the thing that was built.
If it's closed, I think that facet specifically is detrimental to the consumer.
If it's open, then that's the best you can do to mitigate the unfortunate reality that taking advantage of this particular innovation requires multiple participating endpoints. It's just how it is.
I'm fine with Apple making their gear work together, but they shouldn't be privileged over third parties.
Moreover, they shouldn't have any way to force (or even nudge via defaults) the user to use Apple Payments, App Store, or other Apple platform pieces. Anyone should be on equal footing and there shouldn't be any taxation. Apple already has every single advantage, and what they're doing now is occupying an anticompetitive high ground via which they can control (along with duopoly partner Google) the entire field of mobile computing.
Based on your examples (which did genuinely make me question my assertion), it seems that patents and exclusivity deals are a major part of moat development, as are pricing games and rampant acquisitions.
Apple's network effects are anti-competitive, creating vendor lock-in, which allows them to coerce customers. I generally defend Apple. But they are half anti-competitive (coercing customers), half competitive (earning customers), and the earning of customers is fueled by the coercive App Store.
This is a very clear example of how moat is an abusive word. Under one framing (moat) network effects are a way to earn customers by spending resources on projects that earn customers (defending market position). In the anti-competitive framing, network effects are an explicit strategy to create vendor lock in and make it more challenging to migrate to other platforms so apple's budget to implement anti-customer policies is bigger.
ASML is a patent based monopoly, with exclusivity agreements with suppliers, with significant export controls. I will grant you that bleeding edge technology is arguably the best case argument for the word moat, but it's also worth asking in detail how technology is actually developed and understanding that patents are state sanctioned monopolies.
Both Apple and ASML could reasonably be considered monopoly-like. So I'm not sure they are the best defense against how moat implies anti-competitive behavior. Monopolies are fundamentally anti-competitive.
The Gucci brand works against the secondary market for their goods and has an army of lawyers to protect their brand against imitators and has many limiting/exclusivity agreements on suppliers.
Coke's formula is probably the least "moaty" thing about coca cola. Their supply chain is their moat and their competitive advantage is also rooted in exclusivity deals. "Our company is so competitive because our recipe is just that good" is a major kool-aid take.
Patents are arguably good, but are legalized anti-competition. Exclusivity agreements don't seem very competitive. Acquisitions are anti-competitive. Pricing games to snuff out competition seem like the type of thing that can be done chiefly in anti-competitive contexts.
So ASML isn't an argument against "moat means anti-competitive", but an argument that sometimes anti-competitive behavior is better for society because it allows otherwise economically unfeasible things to be feasible. The other brands' moats are much more rooted in business practices around acquisitions and suppliers creating de facto vertical integrations. Monopolies do offer better, cheaper products, until they attain a market position that allows them to coerce customers.
Anti-trust authorities have looked at those companies.
Another conceptual metaphor is "president as CEO." The CEO metaphor re-frames political rule as a business operation, which makes executive overreach appear logical rather than dangerous.
You could reasonably argue that the president functions as a CEO, but the metaphor itself is there to manufacture consent for unchecked power.
Conceptual metaphors are insidious. PR firms and think tanks actively work to craft these insidious metaphors that shape conversations and how people think about the world. By the time you've used the metaphor, you've already accepted many of the implications of the metaphor without even knowing it.
Patents are state-sanctioned monopolies. That is their explicit purpose. And for all the "shoulders of giants" and "science is a collective effort" arguments, none of them can explain why no Chinese company (a jurisdiction that does not respect Western patents) can do what ASML does. They have the money and the expertise, but somehow they don't have the technology.
Also, the Gucci brand does not have lawyers. The Gucci brand is a name, a logo, and an aesthetic. Kering S.A. (owner of Gucci) enforces that counterfeit Gucci products don't show up. The designers at Kering spend a lot of effort coming up with Gucci-branded products, and they generally seem to have the pulse of a certain sector of the market.
The analysis of Coke's supply chain is wrong. The supply chain Coke uses is pretty run-of-the-mill, and I'm pretty sure that aside from the syrup (with the aforementioned secret formula), they actually outsource most of their manufacturing. They have good scale, but past ~100 million cans, I'm not sure you get many economies of scale in soda. That's why my local supermarket chain can offer "cola" that doesn't quite taste like Coke for cheaper than Coke. You could argue that the brand and the marketing are the moat, but the idea that Coke has a supply chain management advantage (let alone a moat over this) is laughable.
> "Building a moat" frames anti-competitive behavior as a defense
This is a drastic take, I think to most of us in the industry "moat" simply means whatever difficult-to-replicate competitive advantage that a firm has invested heavily in.
Regulatory capture and graft aren't moats, they're plain old corrupt business practices.
The problem is that moat is a defensive word and using it to describe competitive advantage implies that even anti-competitive tactics are defensive because that's the frame under which the conversation is taking place.
Worse, it implies that "moats" are a good thing, which they are for the company, but not necessarily for society at large. The larger the moat, the more money coming out of your pocket as a customer.
US tech, and western tech in general, is very culturally - and by this I mean in the type of coding people have done - homogeneous.
The DeepSeek papers published over the last two weeks are the biggest thing to happen in AI since GPT-3 came out. But unless you understand distributed file systems, networking, low-level linear algebra, and half a dozen other fields at least tangentially, you'd have not realized they are anything important at all.
Meanwhile I'm going through the interview process for a tier 1 US AI lab and I'm having to take a test about circles and squares, then write a compsci 101 red/black tree search algorithm while talking to an AI, being told not to use AI at the same time. This is with an internal reference being keen for me to be on board. At this point I'm honestly wondering if they aren't just using the interview process to generate high quality validation data for free.
Competition can only work when there is variation between the entities competing.
In the US right now you can have a death match between every AI lab, then give all the resources to the one which wins and you'd still have largely the same results as if you didn't.
The reason why DeepSeek - it started life as an HFT firm - hit as hard as it did is because it was a cross-disciplinary team that had very non-standard skill sets.
I've had to try and head hunt network and FPGA engineers away from HFT firms and it was basically impossible. They already make big tech (or higher) salaries without the big tech bullshit - which none of them would ever pass.
> I've had to try and head hunt network and FPGA engineers away from HFT firms and it was basically impossible. They already make big tech (or higher) salaries without the big tech bullshit - which none of them would ever pass.
Can confirm. There are downsides, and it can get incredibly stressful at times, but there are all sorts of big-tech-imposed hoops you don't have to jump through.
> all sorts of big tech imposed hoops you don't have to jump through
Could you kindly share some examples for those of us without big tech experience? I assume you're talking about working practices more than just annoying hiring practices like leetcode?
Engineers at AI labs just come from prestigious schools and don’t have technical depth. They are smart, but they simply aren’t qualified to do deep technical innovation.
What are you doing with FPGAs? I’m an FPGA engineer and don’t work at an HFT firm. Those types of jobs seem to be in the minority compared to all the aerospace/defense jobs and other sectors.
> At this point I'm honestly wondering if they aren't just using the interview process to generate high quality validation data for free.
Not sure if that is accurate, but one of the reasons why DeepSeek R1 performs so well in certain areas is thought to be access to China's Gaokao (university entrance exam) data.
The bottom is about to drop out, that's why. Ethics are out the window already, and it's gonna be worse as they claw to stay relevant.
It's a niche product that tried to go mainstream and the general public doesn't want it. Just look at iPhone 16 sales and Windows 11: everyone is happier with the last version, without AI.
They were always going for regulatory capture. I think deepseek shook them but I don't think we should rewrite the history as them being virtuous only until 2024.
As it is, this is a bullshit document, which I'm sure their lobbyists know; OSTP is authorized to "serve as a source of scientific and technological analysis and judgment for the President with respect to major policies, plans, and programs of the Federal Government," and has no statutory authority to regulate anything, let alone preempt state law. In the absence of any explicit Congressional legislation to serve to federally preempt state regulation of AI, there's nothing the White House can do. (In fact, other than export controls and a couple of Defense Production Act wishlist items, everything in their "proposal" is out of the Executive's hands and the ambit of Congress.)
I heard something today and I wonder if someone can nitpick it.
If what the admin is doing is illegal, then a court stops it; if they appeal and win, then it wasn't illegal. If they appeal all the way up and lose, then they can't do it.
So what exactly is the problem?
Mind you, I am asking for nits, this isn't my idea. I don't think "the administration will ignore the supreme court" is a good nit.
The problem is that it's erasing a lot of precedent that has existed for hundreds of years in the US, and it's not apparent that the erasure will be a good thing for us. For example, the idea that the president has some kind of sovereign immunity from murder or theft or graft while they are the president, as long as they can justify it as carrying out a duty of the office, is pretty abhorrent. Military officers in the US have pretty strict requirements not to commit war crimes, and there is a pretty strong concept of an "illegal order." And what we're telling the president now is that because he or she is the executive there can be no such thing as an illegal order. So now the president can personally shoot people and they have immunity from that because they are the president.
And you have people arguing that on the one hand the executive has had too much leeway to regulate, but then in the same breath saying that the executive now needs to unilaterally ignore the orders of past congresses in order to fix whatever perceived problems have led us here. Which is a kind of irony that makes me think that this is being done not to solve problems but to reshape our government for some other end. And all of this is compounded by the legislature's unwillingness to exercise the exact power that they have been granted, which is to change the law of the United States.
So in this situation it's hard to see the courts siding with these people as simply rationally applying the law, because the law itself as written by past legislatures is simply being ignored, as are past judicial precedents, because they are inconvenient to the goal of dismantling the US government. It's also extremely dangerous because the "full faith and credit" of the United States depends on us honoring our commitments even when they are inconvenient to us.
I mean… he has supported at least one good cause I know of where the little guy was getting screwed way beyond big time and he stepped up pro bono. So I like him. But probably mostly a hired gun.
>In a 15-page set of policy suggestions released on Thursday, the ChatGPT maker argued that the hundreds of AI-related bills currently pending across the US risk undercutting America’s technological progress at a time when it faces renewed competition from China. OpenAI said the administration should consider providing some relief for AI companies big and small from state rules – if and when enacted – in exchange for voluntary access to models.
The article seems to indicate they want all AI companies to get relief from these laws though.