pr337h4m's comments

Models with native video understanding would do the trick - Advanced Voice Mode on the ChatGPT iOS/Android app lets you use your camera, works pretty well; there's also https://aistudio.google.com/live (AFAIK there are no open-source models with similar capabilities)


Tweet from the authors: "It was a pleasant, albeit partially intuitive, effect that the side effects of free thinking seem to be performance increasing."

https://x.com/tngtech/status/1917847505175236850


I’ve long believed that for a private company to remain independent, it must never solicit business from the government and/or state-owned entities - recent events make it viscerally clear


At this point, it would be easier to stick to in-person assignments.


It certainly would be! I think for many students though, there's something lost there. I was a student who got a lot more value out of my take-home work than I did out of my in-class work. I don't think that I ever would have taken the interest in writing that I did if it wasn't such a solitary, meditative thing for me.


>This meant there was inelastic demand for U.S. treasuries, which basically made it impossible for the U.S. to not run a deficit, either in terms of trade or the federal budget.

Ease of borrowing does not force a government to run a deficit, which is entirely a political choice.


The US could have taken its king dollar and immense credit and funded a technological paradise. Instead it funded stupid wars and bailed out giant banks. This is a problem entirely of its own making.


"Taejun Shin was born and raised in Japan as the son of Korean immigrants who became stateless around the time of Japan's surrender in WWII. He found himself in the difficult position of being a stateless entrepreneur but ultimately founded Gojo & Company – a non-profit focused on promoting global financial inclusion."

From an interview with the author: https://www.weforum.org/stories/2022/09/young-global-leaders...


Born and raised in Japan and Japan won't give him citizenship? Jesus I knew they were xenophobic but wow.


The author can get citizenship[1] but chooses not to for what seems like activist reasons.

[1]: https://taejun.substack.com/p/founders-peak-speech-script


> Born and raised in Japan and Japan won't give him citizenship?

Unrestricted jus soli (extending nationality unconditionally to those born in the territory of the state, regardless of ancestry) is largely an American (as in "of the Americas", not merely "of the United States of America", though the latter is a significant reason why it is true of the former) thing, though there are a few countries outside of the Americas who do it as well.

Some more countries have restricted jus soli, extended to those born in the territory of the state only if the government judges them to be ineligible for nationality by the law of any other state, or perhaps only if the parents are actually stateless, as a way to mitigate statelessness. (And states who have adopted a rule of this type for this purpose may have done so after the UN Convention on stateless persons in 1954, and may not have applied it retroactively.)


Unconditional jus soli is rare, but a lot of countries have a good approximation of that for a practical reason -- to not have the administrative burden of dealing with people who already live there, participate in the society, pay taxes and all that, but don't get the right papers.


It varies a lot. It does happen, in many countries, that people are refused citizenship of, or even expelled from, the country they were born and grew up in. It is not common, but it does happen.


I am surprised by his situation as I know of the Zainichi Koreans here (Korean citizens who were born and raised in Japan and live here as an ethnic minority). My understanding of the situation is that they legally are able to naturalize and a significant minority of them do so, but many don't as they would have to renounce Korean citizenship.

I do find it quite unusual that he would not be allowed to naturalize after living all his life here and being married to a Japanese citizen, so perhaps there are other exceptional circumstances. Or perhaps his statelessness isn't something he is actively trying to resolve, having found ways to work around it when he needs to travel and do other things.


The immigration system in Japan is quite open and straightforward on paper, but can be far more challenging in real life.

I know someone who tried to start the naturalization process but was instantly shot down for his Japanese skill (he speaks Japanese well). I know others who have lived in Japan for decades but cannot apply for permanent residency because they only receive short visas, making them ineligible.

Immigration officers have ultimate discretion and will not explain themselves to applicants. I assume this is by design so that particular 'standards' can be subtly applied without being reflected in statistics or receiving any criticism.


Birthright citizenship is increasingly controversial in a lot of countries, not just Japan.


Japan doesn't have birthright citizenship (unrestricted jus soli) in the first place, it is a jus sanguinis jurisdiction with a very limited fallback jus soli (extending nationality on the basis of birth in Japan, rather than Japanese parentage, only to those whose parents are both either stateless or unknown.)


There is a spectrum between giving citizenship to everyone born in a country (US) and not giving it at all (e.g. Thailand), and every country is somewhere in between, with some kind of qualifier and procedure. Sometimes it's enough that both parents are in the country legally or have residence there; sometimes you have to wait it out until you are 18 without hiding from the reach of the state.

Practically, there is no point for the state to deny you citizenship if you went to school there and can do some kind of a job and pay taxes, because they can't even deport you to a country of origin if you are stateless and born there.

Technically even the US has the jurisdiction qualifier, which the orange clown wants to abuse.


I believe it's a fairly common approach, so singling out Japan this way seems a bit unfair. Switzerland, for example, handles it the same way.


>cost the taxpayer nothing

>transaction fees imposed on the financial sector

This is literally a tax.


I have no idea how anyone is disagreeing with you. Looking it up in the dictionary, government-imposed fees are absolutely taxes.


Sure, it's a tax, but it's not paid from the "general funds" that every taxpayer contributes to.

We could call postage for the USPS a tax too, but nobody thinks of it that way.


Eh, that's kind of moot, since most taxpayers have no idea where their money actually goes, and money is fungible. It implies that taxpayers only care about part of their money. What is actually accomplished if we don't see the link? Of course there are all sorts of ways to hide taxes from the end payer, such as the gas tax. If someone is looking to reduce taxes, they must also look for the hidden ones. Continuing to think of taxpayers as only the general-fund contributors just allows the deception to persist.


Sure, but what's the end game?

Arguably, the will of the people (not the corporations) is to lower individual taxation—working-class Joe Schmo isn't upset that companies and the wealthy have to deal with the SEC, he's upset about the income tax that he personally pays.

Ok, so maybe you argue that removing this tax will indirectly help Joe Schmo, because corporations, banks, stockbrokers, hedge funds, and their leadership are such nice fellows who, given some extra cash flow, always let it funnel back into the economy and ultimately to the actual workers and producers, who, after all, sweat for them and deserve a living wage, right?

I don't see how this form of cut and deregulation is supposed to help the majority of people unless you believe in trickle-down-style economics, an idea based on the moral rectitude and goodwill of the wealthy, which, at this point, I think you have to be a complete and utter fool to believe in. Part of the entire reason the SEC exists is precisely that you cannot rely on the individual morals of financiers to protect the country from financial exploitation and overall collapse: https://en.m.wikipedia.org/wiki/Pecora_Commission

If this is a "tax" it's one of the few taxes we actually have on the rich and on corporations, and any reductions stand to make them even more untrammeled and powerful. It's a mistake to assume this will have any positive material benefit on the average citizen.


I'm not sure how you think this is a tax only on the rich and the corporations. Many middle-class people have 401ks, IRAs, 529s, and brokerage accounts. These must be held at SEC-regulated institutions, and the fees are part of the cost. I'm not saying the SEC should be reduced, as I don't know if they have a surplus. But I am saying that if you want to reduce taxes, this is still a tax and can be looked into for efficiencies.


Are you ignoring context for some pretense, or do you not understand that statements have contextual meanings?


It's a tariff!


...although, by this logic, so is the entire financial system.


How so? Not everything is a government-imposed fee. If everything were, there would be no way to transfer money at all: if the entire system were a tax, all the value would go toward the tax, leaving none to transfer for goods or services.


The parent post is making the point that a transaction fee is literally a tax. The financial sector is nothing but transaction fees. Their whole revenue model is interspersing themselves into the mechanics of moving money between a buyer and a seller, and charging a cut for it. A world where everyone pays cash for everything, or even has a centralized ledger where balances are credited and debited by the government, or a decentralized ledger on a blockchain where the same happens, has no transaction fees and no financial sector.

They do perform a service for the transaction fees they charge, but then, so does the government.


It's often the government that makes you use the financial system.


Certainly not the UK: https://techcrunch.com/2025/02/10/uks-secret-apple-icloud-ba...

Here's a nice quote from the former (2019-2023) UK Defence Secretary: "well for one the people who love encryption most are pedophiles." https://x.com/BenWallace70/status/1892972120818299199


>Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information that claimed he’d been convicted for murdering two of his children and attempting to kill the third.

What does Noyb hope to achieve with such lawsuits? The result of victory here would just be yet another vector for regulators to increase control over and/or impose censorship on LLM creators, as well as creating sources of "liability" for open source AI developers, which would be terrible for actual privacy.

Interestingly, there is no mention whatsoever of either "Durov" or "Telegram" on the Noyb website, even though the arrest of Durov is the biggest ongoing threat to privacy in the EU, especially as three of the charges against him are explicitly for providing "cryptology" tools/services: https://www.tribunal-de-paris.justice.fr/sites/default/files...

They also got a €5.5M fine imposed on WhatsApp, which is pretty perverse given that WhatsApp is the only major mainstream platform that has implemented true E2E encryption: https://noyb.eu/en/just-eu-55-million-whatsapp-dpc-finally-g...

IMO these are not the actions you would take if you were serious about protecting the right to privacy


The result of legal victory here would be to make whole the person who for some amount of time was the victim of a particularly heinous form of libel.

Your other arguments aren’t serious. Every organization has to pick and choose what activities they participate in and which they don’t, there are opportunity costs and they aren’t cheap.


That's an ... odd take. As "Chatbots" replace search engines, why would we be OK with them spitting out false information that could have massive impact on our lives, just to "protect" the big tech company churning them out from oversight?

If the NY Times published an article saying similar false things about someone, should that person NOT sue, just to protect legacy media??


This is a bad take, IMO.

First of all, this wasn't a replacement for search; no search was claimed to have taken place. The screenshot from the complainant shows this was not in a search context.

Secondly, LLMs are daydream machines; we don't expect them to produce "truth" or "information". So the NY Times comparison feels very wrong.

Thirdly, this story is about a man who typed a text string into the daydream machine. The machine continued appending tokens to the inputted text to make it look like a sentence. That's what happened. Nothing to do with truth-seeking or "protecting" big tech.


There is a whole industry that has been pushing for a couple of years now to tell us that these models work, that they replace humans, that they work for search, and so on. Saying "we don't expect them to say the truth" is a little too easy. If everyone really expected them not to say the truth, or not to be accurate, they wouldn't have been designed as programs that speak with such authority, and they probably wouldn't be the target of massive investments.

So yeah, in principle I may agree with you, but in the socio-technical context in which LLMs are being developed, the argument simply does not work in my opinion.


>There is a whole industry that has been pushing for a couple of years now to tell us that these models work, that they replace humans, that they work for search, and so on.

Who are you referring to? Did someone tell you that ChatGPT "works for search" without clicking the "search" box?

Also, are you sure that AI designers intend for their LLMs to adopt an authoritative tone? Isn't that just how humans normally write in the corpus?

Also, you seem to be arguing that, because the general tone you've been hearing about AI is that "they work for search", OpenAI should therefore be liable for generated content. However, the general tone of discussion you've been hearing doesn't really match 1:1 with any company's claims about how its product works.


Just as an example, read https://openai.com/index/introducing-chatgpt-search/ and see how many mentions there are of "better information", "relevant", and "high quality". Then see how many mentions there are of "we don't expect it to be real stuff".

> Also are you sure that AI designers intend for their llms to adopt an authorative tone? Isn't that just how humans normally type in the corpus?

If designers wanted it any other way, they would have changed their software. If those who develop the software are not responsible for its behavior, who is? Technology is not neutral. The way AI communicates (e.g., all the humanizing language like "sorry", "you are right", etc.) is their responsibility.

In general, it is painfully obvious that none of the companies publishing LLMs paints a picture of their tools as "dream machines". That narrative is completely the opposite of what is needed to gather immense funding, because nobody would otherwise spend hundreds of billions on a dream machine. The point is creating a hype in which LLMs can do humans' jobs, and that means them being right - and maybe making "some" mistakes every now and then. All you need to do is go on OpenAI's website and read around. See https://openai.com/index/my-dog-the-math-tutor/ or https://openai.com/chatgpt/education/ just as a start. Who would want a "research assistant" that is a "dream machine"? Which engineering department would use something "not expected to say real stuff" to assist in designing?


>The screenshot from the complainant shows this was not in a search context.

Of course it does. The question shown in the screenshot is "who is Arve Hjalmar Holmen?". That's something someone would type into Google search; it's not "write me a fictional story about a man called Arve Hjalmar Holmen".

People use these systems like search tools, they're sold and advertised as information retrieval systems, literally what else would be their point for 90% of people, they're starting to be baked into search products and in return web search is itself included in the AI systems, etc. The top post on HN right now is Claude announcing:

"Instead of finding search results yourself, Claude processes and delivers relevant sources in a conversational format."

What are you gonna tell me next, the bong on your table is really a vase?


>The screenshot from the complainant shows this was not in a search context.

>Of course it does

No, of course it doesn't, because there's a specific blue button for conducting web searches in ChatGPT, and other visual indicators, which are not present here.

So when I said "the screenshot shows", I was referring to things we could verify in the image, namely, that the question was not asked within a search context.

The top post you refer to, about Claude, is specifically about the search context which wasn't present here.

> The question shown in the screenshot is "who is Arve Hjalmar Holmen?". That's something someone would type into Google search, it's not "write me a fictional story about a man called Arve Hjalmar Holmen

Llms are daydream machines.

If you open a new window with an LLM and ask it "what is ..." or "who is ...", you'll often get a confident-sounding but completely false answer, because that's how LLMs work. You can ask it who or what something you just made up is, and it will trip over itself hallucinating sources that prove it exists.


>What does Noyb hope to achieve with such lawsuits?

Ensure that innocent people don't have malicious garbage about them spat out of a machine that other people blindly trust, probably.


> result of victory here would just be yet another vector for regulators to increase control over and/or impose censorship on LLM creators, as well as creating sources of "liability" for open source AI developers, which would be terrible for actual privacy.

Sounds great actually.


> The result of victory here would just be yet another vector for regulators to increase control over and/or impose censorship on LLM creators

as long as ClosedAI and other companies censor their models I'll play the world's smallest violin for them


> creating sources of "liability" for open source AI developers, which would be terrible for actual privacy

How?


Presumably such chats would need to be logged, read, programmed around, and monitored.


Virtually every piece of information you submit on the internet is logged and monitored anyway, for purposes of advertising, state surveillance, and occasionally to improve the products.


That certainly doesn't justify even more legal surveillance. Also, I believe OpenAI doesn't log chats from paid users - this would make sure they did. More parties involved again means more chances for chats to be leaked.


You think they're not logged now? By companies whose existence is based on getting access to as much data as possible?


Yes


>Chris Lehane, OpenAI’s vice president of global affairs, said in an interview that the US AI Safety Institute – a key government group focused on AI – could act as the main point of contact between the federal government and the private sector. If companies work with the group voluntarily to review models, the government could provide them “with liability protections including preemption from state based regulations that focus on frontier model security,” according to the proposal.

Given OpenAI's history and relationship with the "AI safety" movement, I wouldn't be surprised to find out later that they also lobbied for the same proposed state-level regulations they're seeking relief from.


> ask for regulation then ask for exempt

That's exactly what has been happening:

Ask HN: Why is OpenAI pushing for regulation so much - 2023

https://news.ycombinator.com/item?id=36045397


OpenAI lobbied for restrictive rules, and now they want an "out" but only for themselves. Absolute naked regulatory capture.


I believe with regulatory capture the companies that pushed for the regulation in the first place at least comply with it (and hopefully the regulation is not worthless). This behaviour by ClosedAI is even worse: push for the regulation, then push for the exemption.


Regulatory capture is usually the company pushing for regulations that align with the business practices it already implements and that would be hard for a competitor to implement. For example, a car company that wants to require all other manufacturers to build and operate wind tunnels for aerodynamics testing. Or, more realistically, regulations requiring third-party sellers for vehicles.


I haven't heard that definition of "Regulatory Capture" before. I mostly thought it was just when the regulators are working for industry instead of the people. That is, the regulators have been "Captured." The politicians who nominate the regulatory bodies are paid off by industry to keep it that way.


Regulatory capture has different flavours, but it basically comes down to the regulated taking control of, or significantly influencing, the regulator. It can be by the whole sector, but in my experience it is most often by the leading incumbents in a domain.

It can be through keeping regulation mild or looking the other way, but just as often it is about putting high-cost, high-compliance-burden rules in place to pull up the drawbridge for new entrants.


You’re correct that this is the broad definition, but GP is correct that that is a very common form of regulatory capture in the US.


I’ve seen this happen many times during the RFI/RFP process for large projects, the largest players put boots on the ground early and try to get into the ears of the decision makers and their consultants and “helpfully” educate them. On multiple occasions I’ve seen requests actually using a specific vendor’s product name as a generic term without realizing it, when their competitors’ products worked in a completely different way and didn’t have a corresponding component in their offering.


I agree. I wasn't trying to strictly define it just specify the form it usually takes.

In the case of OpenAI, were I to guess, they'll likely do things like push for stronger copyright laws or laws against web scraping. Things that look harmless but ultimately will squash new competitors in the AI market. Now that they already have a bunch of the data to train their models, they'll be happy to make it a little harder for them to get new data if it means they don't have to compete.


Regulators can require all manufacturers to build and operate wind tunnels for aerodynamics testing, or alternatively allow someone from South Africa to be president.


That's the first time I've ever heard someone make this unusual and very specific definition. It's almost always much simpler - you get favorable regulatory findings and exemptions by promising jobs or other benefits to the people doing the regulating. It's not complicated, it's just bribery with a different name.


That’s not regulatory capture at all. Grandparent’s definition is correct.


We all predicted this would happen but somehow the highly intelligent employees at OpenAI getting paid north of $1M could not foresee this obvious eventuality.


Also textbook Fascism.

Trump would have a Most Favored Corporation status; each corporation in a vertical can compete for favor, and the one that wins gets to be "teacher's pet" when it comes to exemptions, contracts, trade deals, priority in resource-rights access, etc.


What do you think Melon Tusk is doing, apart from letting out his inner (and outer) (and literal) child on the world stage?


Lots of Ketamine.


Selling the government to the highest bidder, which in many cases would be him too


Elon Musk is a textbook definition of an oligarch, combining tremendous wealth, control over major technological industries, and political power.


so edgy


Can you explain why this is associated with fascism specifically, and not any other form of government which has high levels of oligarchical corruption (like North Korea, Soviet Russia, etc.)?

I am not saying you're wrong, but please educate me: why is this form of corruption/cronyism unique to fascism?


It might be basic, but I found the Wikipedia article to be a good place to start:

> An important aspect of fascist economies was economic dirigism,[35] meaning an economy where the government often subsidizes favorable companies and exerts strong directive influence over investment, as opposed to having a merely regulatory role. In general, fascist economies were based on private property and private initiative, but these were contingent upon service to the state.

https://en.wikipedia.org/wiki/Economics_of_fascism


It's rather amusing reading the link on dirigisme given the context of its alleged implication. [1] A word which I, and I suspect most people, have never heard before.

---

The term emerged in the post-World War II era to describe the economic policies of France which included substantial state-directed investment, the use of indicative economic planning to supplement the market mechanism and the establishment of state enterprises in strategic domestic sectors. It coincided with both the period of substantial economic and demographic growth, known as the Trente Glorieuses which followed the war, and the slowdown beginning with the 1973 oil crisis.

The term has subsequently been used to classify other economies that pursued similar policies, such as Canada, Japan, the East Asian tiger economies of Hong Kong, Singapore, South Korea and Taiwan; and more recently the economy of the People's Republic of China (PRC) after its economic reforms,[2] Malaysia, Indonesia[3][4] and India before the opening of its economy in 1991.[5][6][7]

---

[1] - https://en.wikipedia.org/wiki/Dirigisme


> A word which I, and suspect most, have never heard before.

It's a pretty normal word in British English, tbh.

Maybe it's because we do French at school.


It’s a poor definition. The same “subsidization and directive influence” applies to all of the domestic-champion, emerging-market development leaders described in Krugman’s Nobel-winning work, in virtually all ‘successful’ economies. It also applies in the context of badly run, failed, and failing economies. Safe to say this factor is only somewhat correlated. Broad assertions are going to be factually wrong.


The key element here is that the power exchange in this case goes both ways. The corporations do favors for the administration (sometimes outright corrupt payments and sometimes useful favors, like promoting certain kinds of content in the media, or firing employees who speak up.) And in exchange the companies get regulatory favors. While all economic distortions can be problematic — national champion companies probably have tradeoffs - this is a form of distortion that hurts citizens both by distorting the market, and also by distorting the democratic environment by which citizens might correct the problems.


All snakes have scales, so there is a 100% correlation between being a snake and having scales.

That does not imply that fish are snakes. Nor does the presence of scaled fish invalidate the observation that having scales is a defining attribute of snakes (it's just not a sufficient attribute to define snakes).


For correlation to be 1, it's not enough that all snakes have scales. You also need all scaly animals to be snakes.

Here's a toy example. Imagine three equally sized groups of animals: scaly snakes, scaly fish, and scaleless fish. (So all snakes have scales, but not all scaly animals are snakes.) That's three data points (1,1) (0,1) (0,0) with probability 1/3 each. The correlation between snake and scaly comes out as 1/2.

You can also see it geometrically. The only way correlation can be 1 is if all points lie on a straight line. But in this case it's a triangle.


You’re looking for the logical argument here, not the statistical one. You sampled from snakes and said there is a 100% correlation with being a snake (notwithstanding the counterarg in an adjacent comment about scale-free snakes).

I am noting that the logical argument does not hold in the provided definition. If “some” attributes hold in a definition, you are expanding the definitional set, not reducing it, and thus creating a low-res definition. That is why I said: ‘this is a poor definition.’


> there is a 100% correlation between being a snake and having scales.

That's a strange definition of "correlation" that you're using.


That’s not accurate either. Scaleless snakes, though a rare genetic mutation, do exist.

https://www.morphmarket.com/morphpedia/corn-snakes/scaleless...


That's because it's not a definition, it's simply a summary of a description of one characteristic.


So then you agree that the original post that called this "text book fascism" was wrong, as this is just one very vague, and only slightly correlated property.

This can be bad without invoking Godwin's law.


Sounds like South Korea and her Chaebols


Yeah, fascism, communism, etc. aren’t abstract ideals in the real world. Instead they are self-reinforcing directions along a multidimensional political spectrum.

The scary thing with fascism is just how quickly it can snowball because people at the top of so many powerful structures in society benefit. US Presidents get a positive spin by giving more access to organizations that support them. Those kinds of quiet back room deals benefit the people making them, but not everyone outside the room.


That's not fascism; that is the dysfunctional status quo in literally every single country in the world. Why do you think companies and billionaires dump what amounts to billions of dollars on candidates? Oftentimes it's not even this candidate or that, but both!

They then get access, get special treatment, and come out singing the praises of [errr.. what's his name again?]


It’s not Fascism on its own, but it’s representative of the forces that push society to Fascism.

Start looking and you’ll find such forces shaping history. Sacking a city was extremely profitable throughout antiquity, which pushed cities to develop defensive capabilities, which then…

In the Bronze Age, trade was critical: having copper ore alone wasn’t nearly as useful as having copper and access to tin. Iron, however, is found basically everywhere, as were trees.

Such forces don’t guarantee outcomes, but they have massive influence.


Socialism and communism are state ownership. Fascism tends toward private ownership and state control. This is actually easier and better for the state. It gets all the benefit and none of the responsibility and can throw business leaders under the bus.

All real world countries have some of this, but in fascism it’s really overt and dialed up and for the private sector participation is not optional. If you don’t toe the line you are ruined or worse. If you do play along you can get very very rich, but only if you remember who is in charge.

“Public private partnership” style ventures are kind of fascism lite, and they always worried me for that reason. It’s not an open bid but a more explicit relationship. If you look back at Musk’s career in particular there are ominous signs of where this was going.


Would describe e.g. social democracy too, though. And in practice most governments work like this.


Social democracy has historically been a precursor to fascism, so it makes sense.


The private-industry side of fascist corporatism is very similar to all kinds of systematic state-industry cronyism, particularly in other authoritarian systems that aren't precisely fascist (and named systems of government are just idealized points on the multidimensional continuum on which actual governments are distributed, anyway). What distinguishes fascism in particular is the combination of its form of corporatism with xenophobia, militaristic nationalism, etc., not the form of corporatism alone.


I think it is associated with fascism, just from the other party.

This is pretty common fascist practice, used all over Europe and in many left-leaning countries: governments use regulations to make doing business at large scale impossible, and then give the largest players exemptions, subsidies, and so on. Governments gain enormous leverage to ensure corporate loyalty, silence dissenters, and combat the opposition, while the biggest players secure their place at the top and gain protection from competitors.

So the plan was to push regulations and then dominate the competition through exemptions from those regulations. But the fascists lose the election, the regulations threaten to start applying in a non-discriminatory manner, and that simply hinders business.


>and then give largest players exemptions, subsidies and so on.

You mean like Germany has done?


Perhaps that would be a use for the $TRUMP coin: whoever owns the most gets to be the favorite corporation.



That's in progress. It's called the MAGA Parallel Economy.[1]

Donald Trump, Jr. is in charge. Vivek Ramaswamy and Peter Thiel are involved. Azoria ETF and 1789 Capital are funds designed to fund MAGA-friendly companies.

But this may be a sideshow. The main show is US CEOs sucking up to Trump, as happened at the inauguration. That parallels something Putin did in 2020. Putin called in the top two dozen oligarchs, and told them "Stay out of politics and your wealth won’t be touched." "Loyalty is what Putin values above all else.” Three of the oligarchs didn't do that. Berezovsky was forced out of Russia. Gusinsky was arrested, and later fled the country. Khodorkovsky, regarded as Russia’s richest man at the time (Yukos Oil), was arrested in 2003 and spent ten years in jail. He got out in 2013 and left for the UK. Interestingly, he was seen at Trump's inauguration.

[1] https://www.politico.com/news/magazine/2025/03/13/maga-influ...

[2] https://apnews.com/article/russia-putin-oligarchs-rich-ukrai...


Why are these idiots trying to ape Russia, a dumpster fire, to make America great again?

If there’s anyone to copy it’s China in industry and maybe elements of Western Europe and Japan in some civic areas.

Russia is worse on every metric, even the ones conservatives claim to care about: lower birth rate, high divorce rate, much higher abortion rate, higher domestic violence rate, more drug use, more alcoholism, and much less church attendance.

I. Do. Not. Get. The Russia fetish.


Because they aren’t interested in “making America great again”, that’s the marketing line used to sell it to American voters. They are solely interested in looting the nation for personal gain.


> I. Do. Not. Get. The Russia fetish.

It's not a Russia fetish. It's a Strongman fetish.


> That parallels something Putin did in 2020. Putin called in the top two dozen oligarchs, and told them "Stay out of politics and your wealth won’t be touched.

> Khodorkovsky [...] was arrested in 2003

Something doesn't square here


It's a typo; the article says it happened in the summer of 2000.


Right re 2000.


Could just be a muscle-memory typo. Much more likely to be typing 2020 these days than 2002.



Groan....


People who don't learn history will be condemned to repeat it. Granted this isn't necessary to be skeptical of american business....


Isn’t this the opposite? Trump has learned from history exactly so that he can repeat it?

Or his lackeys have anyway. I’m unwilling to believe the man has ever read a book.


Trump also isn't giving "nothing ever happens" vibes.


nice to see HN is no longer glazing Sam Altman


[flagged]


No amount of exposure so far has had any effect on corruption. That AI will somehow improve this is just magical thinking.


It does have an effect; it is just a slow and grinding process. And people have screwy senses of proportion - like old mate mentioning insider trading. Of all the corruption in the US Congress insider trading is just not an issue. They've wasted trillions of dollars on pointless wars and there has never been a public accounting of what the real reasoning was. That sort of process corruption is a much bigger problem.

A great example - people forget what life was like pre-Snowden. The authoritarians were out in locked ranks pretending that the US spies were tolerable - it made any sort of organisation to resist impossible. Then one day the parameters of the debate get changed and suddenly everyone is forced to agree that encryption everywhere is the only approach that makes sense.


No - I mean the accessibility of the information and the blatant use of power: we always knew it existed, but now we can tabulate and analyze it.


How is it any more accessible now than it was before? Don't you have to fact-check everything it says anyway, effectively doing the research you'd do without it?

I'm not saying LLMs are useless, but I do not understand your use case.


> now we can tabulate and analyze

Very doubtful: The current "AI" hype craze centers on LLMs, and they don't do math.

If anything, their capabilities favor the other side: to obfuscate and protect falsehoods.


>>favor the other side, to obfuscate and protect falsehoods.

Unless they delete the Internet Archive of such info?

>>*and they don't do math.*

Have you ever considered the "Math of Politics"

Yeah - they do that just fine

((FYI -- Politics == Propaganda.. the first iteration of models is CHAT == Politics...

They do plenty of "Maths"))


I genuinely have no idea what you're trying to say.


I worry I'm just trying too hard to make it make sense, and this is a TimeCube [0] situation.

The most-charitable paraphrase I can come up with it: "Bad people can't use LLMs to hide facts, hiding facts means removing source-materials. Math doesn't matter for politics which are mainly propaganda."

However even that just creates contradictions:

1. If math and logic are not important for uncovering wrongdoing, why was "tabulation" cited as an important feature in the first post?

2. If propaganda dominates other factors, why would the (continued) existence of the Internet Archive be meaningful? People will simply be given an explanation (or veneer of settled agreement) so that they never bother looking for source-material. (Or in the case of IA, copies of source material.)

[0] https://web.archive.org/web/20160112000701/http://www.timecu...


OMG Thank you - hilarious. TimeCube is a legend...

---

I am saying that AI can be used very beneficially to do a calculated dissection of the Truth of our Political structure as a Nation and how it truly impacts an Individual/Unit (person, family) -- and do so where we can get discernible metrics and utilize AIs understanding of the vast matrices of such inputs to provide meaningful outputs. Simple.

EDIT @MegaButts;

>>Why is this better than AI

People tend to think of AI in two disjointed categories; [AI THAT KNOWS EVERYTHING] v [AI THAT CAN EASILY SIFT THROUGH VAST EVERYTHING DATA GIVEN TO IT AND BE COMMANDED TO OUTPUT FINDINGS THAT A HUMAN COULDN'T DO ALONE]

---

Which do you think I refer to?

AI is transformative (pun intended) -- in that it allows for very complex questions to be asked of our very complex civilization, in a simple form and in the EveryMan's hands...


> a calculated dissection of the Truth of our Political structure as a Nation

Not LLMs: They might reveal how people are popularly writing about the political structure of the nation.

If that were the same as "truth", we wouldn't need any kind of software analysis in the first place.


Your definition of "truth" is limited;

Truth: The real meaning behind the words which may or may not be interpreted by the receiver in A/B/N meaning

Truth: The actual structure of the nature of what's being presented.

When you can manipulate an individual's PERCEIVED reception of TRUTH between such, you can control reality... now do that at scale..


Why is AI better for this than a human? We already know AI is fundamentally biased by its training data in a way where it's actually impossible to know how/why it's biased. We also know AI makes things up all the time.


Ingestion of context.

Period.

If you don't understand the benefit of an AI augmenting the speed and depth of ingestion of Domain Context into a human mind... then go play with chalk. I, as a smart Human, operate on lots of data, and AI and such has allowed me to consume it.

The most important medicines in the world are for MEMORY retention...

If you'd like a conspiracy: eat too much aluminum to give yourself Alzheimer's ASAP so your generation forgets... (based though, hope you understand what I am saying)


And it probably would have worked if David Sacks wasn't the AI czar. The Harris administration would probably have caved by now


Just like when people complain about OpenAI's ill practices and then use it the most.


Can anyone say which of the LLM companies is the least "shady"?

If I want to use an LLM to augment my work, and don't have a massively powerful local machine to run local models, what are the best options?

Obviously I saw the news about OpenAI's head of research openly supporting war crimes, but I don't feel confident about what's up with the other companies.


Just use what works for you.

E.g. I'm very outspoken about my preferences for open LLM practices like those executed by Meta and Deepseek. I'm very aware of the regulatory capture and ladder-pulling tactics of the "AI safety" lobby.

However. In my own operations I do still rely on OpenAI because it works better than what I tried so far for my use case.

That said, when I can find an open model based SaaS operator that serves my needs as well without major change investment, I will switch.


Why not vibe-code it using OpenAI


I'm not talking about me developing the applications, but about using LLM services inside the products in operation.

For my "vibe coding" I've been using OpenAI, Grok and Deepseek if using small method generation, documentation shortcuts, library discovery and debugging counts as such.


Just call it hacking, we don't need new names for coding without any forethought.


Who put you in charge of naming?


The Claude people seem to be quite chill.


Agreed. They're a bit mental on "safety" but given that's not likely to be a real issue then they're fine.


Given the growing focus on AIs as agents, I think it's going to be a real issue sooner rather than later.


“Safety” was in air quotes for a reason. The Claude peoples’ idea of “AI safety” risks are straight out of the terminator movies.


Wouldn’t you rather have a player concerned with worst case scenarios?


Defending against movie plot threats has been found not a good use of resources already 20 years ago in the war on terrorism.

https://www.schneier.com/essays/archives/2005/09/terrorists_...


These aren't worst-case scenarios. That would imply there was an actual possibility of it happening.


Claude has closed outputs and they train on your inputs. Just like OpenAI, Grok, and Gemini (API), mistral…

Who’s chill? Groq is chill


claude and mistral seem to be in a good ethical place.

You actually can't fault llama either, as a standalone product. However it's still in Zuck Paradise


A: none of the above


My AI strategy is still "No".


Amen




[flagged]


You need a big "/s" after this. Or maybe just not post it at all, because it's just value-less snark and not a substantial comment on how hypocritical and harmful OpenAI is (which they certainly are).


But I already posted it so how could I not post it at all? Do any of us even have a reason to exist? Maybe Sam Altman was right all along


They've no moat so I don't see them surviving without a gov't bail out like this.


OpenAI has a gigantic moat.

No moat means Joe Anybody can compete with them. You just need billions in capital, a zillion GPUs, thousands of hyper skilled employees. You need to somehow get the attention of tens of millions of consumers (and then pull them away from the competition, ha).

Sure.

The same premise was endlessly floated about eg Uber and Google having no moats (Google was going to be killed by open search, Baidu, magic, whatever). These things are said by people that don't understand the comically vast cost of big infrastructure, branding, consumer lock-in (or consumer behavior in general), market momentum, the difficulty of raising enormous sums of capital, and so on.

Oh wait the skeptics say: what about DeepSeek. To scale and support that you're going to need what I described. What's the plan for supporting 100 million subscribers globally with a beast of an LLM that wants all the resources you can muster? Yeah, that's what I thought. Oh but wait, everyone is going to run a datacenter out of their home and operate their own local LLM, uhh nope. It's overwhelmingly staying in the cloud and it's going to cost far over a trillion dollars to power it globally over the next 20 years.

OpenAI has the same kind of moat that Google has, although their brand/reach/size obviously isn't on par at this point.


Microsoft is providing the compute, the capital, and if 365 Copilot takes off, also the consumers.

Microsoft has a moat. OpenAI does not.


365 is not taking off. Numbers are average at best. Most companies now pay 20/user/month extra, and while the sentiment is that it's probably kinda worth it somehow, nobody claims it does better than break even. Many users are deeply disappointed by the overpromising in PowerPoint and Excel. Sure, it's quite useful in Outlook, and the assistant is great for finding files scattered across SharePoints, but that's the limit of its value for me.

OpenAI copilot, not microsoft copilot, actually looks like a stronger product and they're going full force after the enterprise market as we speak. We're setting a demo in motion with them next month to give it a go.

We'll have to wait for the first one to crack Powerpoint, that'll be the gamechanger.


LLM usage is still gaining traction. OpenAI may not be on top anymore, but they still have useful services, and they aren’t going under anytime soon.

And time spent dealing with laws and regulations may decrease efficiency, leading to increased power consumption, resulting in greater water usage in datacenters for cooling and more greenhouse gas emissions.

Controlling demand for services is something that could stop this, but it’s technological progress, which could enable solutions for global warming, hunger, and disease.

It’s an out-of-control locomotive. Prayer is the answer, I’d think.


If they’re not making money[1], and competitors are, or competitors are able to run at a negative for longer, then things could definitely wrap up for them quickly. To most consumers, LLMs will be a feature (of Google/Bing/X/Meta/OS), not a product itself.

[1] https://www.itpro.com/technology/artificial-intelligence/peo...


> To most consumers, LLMs will be a feature (of Google/Bing/X/Meta/OS), not a product itself.

OpenAI rejected a 97.4B USD buyout in February 2025 and won’t be absorbed anytime soon: https://www.nytimes.com/2025/02/14/technology/openai-elon-mu...


Don't worry; they'll have plenty of time to regret that.

There's a reason they're sweating the data issue. As much as it sucks to say it, Google/Bing/Meta/etc. all have a shitton of proprietary human-generated data they can work with, train on, fine tune with, etc. OpenAI _needs_ more human generated data desperately to keep going.


I remember for years people on HN said Uber would never work as a profitable business because it spent a lot of VC money earlier on without having enough revenue to cover it all. It's been around for 16yrs now despite running in the black until 2023.


FYI: s/running in the black/running in the red/

[0] https://languagesystems.edu/history-of-idioms-to-be-in-black...


uber is like a fine wine. it will appreciate and pay dividends until it bursts when waymo et. al. take over the streets in 20 years


Waymo has ~1,000 cars. Uber has 8 million drivers. Worst case, Uber will be acquired, merge, or make a deal with one of the many AI driving startups.

I predict Waymo will have their own struggles with profitability. Last I heard the LIDAR kit they put on cars costs more than the car. So they'll have to mass produce + maintain some fancy electronics on a million+ cars.


Do you remember when people also thought rabbit would be a revolutionary AI device?


>And time spent dealing with laws and regulations may decrease efficiency, leading to increased power consumption, resulting in greater water usage in datacenters for cooling and more greenhouse gas emissions.

They don't care about that if they get a regulatory moat around them.


There’s only so much of that you can do without it becoming a problem you have to deal with. There is a limited supply of water in any area of the earth.


Maybe buying up all the water rights so nobody can use it to cool their own server farm is the literal moat that would serve them best.


They should hook up with Nestle.


It's a common tactic in new fields. Fusion, AI, you name it are all actively lobbying to get new regulation because they are "different", and the individual companies want to ensure that it's them that sets the tone.


Looks the same as taking "rebate for green energy" and then asking to "stop such rebates" a few years later


Exactly. I'm reminded of Gavin Belson saying something along the lines of "I don't want to live in a world where someone makes it a better place to live than we do" in Silicon Valley.


Yes, its slightly different though - in that, we end up with a better place either way.


Regulatory capture is a common strategy for synthetic monopolistic competitive firms, and suckers high on their own ego.

Deepseek already proved regulation will not be effective at maintaining a market lead. =3


Why won't it?

If you get fined millions of dollars (for copyright, of course) if you're found to have anything resembling DeepSeek on your machine - no company in the US is going to run it.

The personal market is going to be much smaller than the enterprise market.


>if you're found to have anything resembling DeepSeek on your machine - no company in the US is going to run it.

That would be as successful as fighting internet piracy.

Not to mention that you could outsource the AI stuff to servers sitting in Mexico or something.


That would give an advantage to foreign companies. The EU tried that, and while it doesn't destroy your tech dominance overnight, it gradually chips away at it.


Great, another market force to whittle away the US's economic power; so obviously Trump/Musk will pass this immediately.


The artificial token commodity can now be functionally replicated on a per-location basis for $40k in hardware (far lower cost than Nvidia hardware).

Copyright licensing is just a detail corporations are well experienced dealing with in a commercial setting, and note some gov organizations are already exempt from copyright laws. However, people likely just won't host in countries with silly policies.

Best regards =3


So you're saying I should avoid REITs focusing on US-based hyperscale datacenters for AI workloads?


Salt was used to pay salaries at one time too, and ML/"AI" business models projecting information asymmetry are now paradoxical as a business goal.

Note: Data centers often naturally colocate with cold-climates, low-cost energy generation facilities, and fiber optic distance to major backbones/hubs.

At a certain scale, energy cost is more important than location and hardware. The US just broke its own horse's leg with tariffs before the race. Not bullish on US domestic tech firms these days, and I sympathize with the good folks at AMCHAM who will ultimately be left to clean up the mess.

If businesses have the opportunity to cut their operational variable costs by >25%, then one can be fairly certain these facilities won't be located on US soil.

Have a great day =3


>If businesses have the opportunity to cut their operational variable costs by >25%, then one can be fairly certain these facilities won't be located on US soil.

Is there opportunity? Lower risks and energy prices may well outweigh the cost of tariffs. It is not like any other horse in the race has perfectly healthy legs.


>Is there opportunity?

Depends on the posture, as higher profit businesses may invest more into maintaining market dominance. However, the assumption technology is a zero-sum economic game was dangerously foolish, and attempting to cheat the global free market is ultimately futile.

Have a wonderful day, =3


Regulatory moat and copyright relief for me, but not for thee.


Problem is they built the moat before moving into the castle.


Moats are not a problem if your liege lord teleports in and lowers the drawbridge for you.


no need for teleportation. just climb the walls. the castle is not protected, and has no pots of oil or flaming arrows yet.


unfortunately their Ai refuses to help them attack the castle, citing safety concerns.


Moat is an Orwellian word and we should reject words that contain a conceptual metaphor that is convenient for abusing power.

"Building a moat" frames anti-competitive behavior as a defense rather than an assault on the free market by implying that monopolistic behavior is a survival strategy rather than an attempt to dominate the market and coerce customers.

"We need to build a moat" is much more agreeable to tell employees than "we need to be more anti-competitive."


It is pretty obvious that every use of that word is to communicate a stance that is allergic to free markets.

A moat by definition has such a large strategic asymmetry that one cannot cross it without a very high chance of death. A functioning SEC and FTC as well as CFPB https://en.wikipedia.org/wiki/Consumer_Financial_Protection_... are necessary for efficient markets.

Now might be the time to rollout consumer club cards that are adversarial in nature.


A "moat" is a fine business term for what it relates to, and most moats are innocuous:

* The secret formula for Coke

* ASML's technology

* The "Gucci" brand

* Apple's network effects

These are genuine competitive advantages in the market. Regulatory moats and other similar things are an assault on the free market. Moats in general are not.


I'm with you except for that last one. Innovation provides a moat that also benefits the consumer. In contrast, network effects don't seem to provide any benefit. They're just a landscape feature that can be taken advantage of by the incumbent to make competition more difficult.

I'm hardly the only one to think this way, hence regulation such as data portability in the EU.


I agree with you in general, but there are network effects at Apple that are helpful to the consumer. For example, iphone-mac integration makes things better for owners of both, and Apple can internally develop protocols like their "bump to share a file" protocol much faster than they can as part of an industry consortium. Both of these are network effects that are beneficial to the consumer.


I'm not sure a single individual owning multiple products from the same company is the typical way "network effect" is used.

The protocol example is a good one. However I don't think it's the network effect that's beneficial in that case but rather the innovation of the thing that was built.

If it's closed, I think that facet specifically is detrimental to the consumer.

If it's open, then that's the best you can do to mitigate the unfortunate reality that taking advantage of this particular innovation requires multiple participating endpoints. It's just how it is.


I'm fine with Apple making their gear work together, but they shouldn't be privileged over third parties.

Moreover, they shouldn't have any way to force (or even nudge via defaults) the user to use Apple Payments, App Store, or other Apple platform pieces. Anyone should be on equal footing and there shouldn't be any taxation. Apple already has every single advantage, and what they're doing now is occupying an anticompetitive high ground via which they can control (along with duopoly partner Google) the entire field of mobile computing.


Based on your examples (which did genuinely make me question my assertion), it seems that patents and exclusivity deals are a major part of moat development, as are pricing games and rampant acquisitions.

Apple's network effects are anti-compeitive creating vendor lock-in, which allows them to coerce customers. I generally defend Apple. But they are half anti-competitive (coerce customers), half competitive (earn customers), but earning customers is fueled by the coercive app store.

This is a very clear example of how moat is an abusive word. Under one framing (moat) network effects are a way to earn customers by spending resources on projects that earn customers (defending market position). In the anti-competitive framing, network effects are an explicit strategy to create vendor lock in and make it more challenging to migrate to other platforms so apple's budget to implement anti-customer policies is bigger.

ASML is a patent based monopoly, with exclusivity agreements with suppliers, with significant export controls. I will grant you that bleeding edge technology is arguably the best case argument for the word moat, but it's also worth asking in detail how technology is actually developed and understanding that patents are state sanctioned monopolies.

Both Apple and ASML could reasonably be considered monopo-like. So I'm not sure they are the best defense against how moat implies anti-competitive behavior. Monopolies are fundamentally anti-competitive.

The Gucci brand works against the secondary market for their goods and has an army of lawyers to protect their brand against imitators and has many limiting/exclusivity agreements on suppliers.

Coke's formula is probably the least "moaty" thing about coca cola. Their supply chain is their moat and their competitive advantage is also rooted in exclusivity deals. "Our company is so competitive because our recipe is just that good" is a major kool-aid take.

Patents are arguably good, but are legalized anti-competition. Exclusivity agreements don't seem very competitive. Acquisitions are anti-competitive. Pricing games to snuff out competition seems like the type of thing that can done chiefly in anti-competitive contexts.

So ASML isn't an argument against "moat means anti-competitive", but an argument that sometimes anti-competitive behavior is better for society because it allows for otherwise economically unfeasible things to be be feasible. The other brand's moats are much more rooted in business practices around acquisitions and suppliers creating de facto vertical integrations. Monopolies do offer better cheaper products, until they attain a market position that allows them to coerce customers.

Anti-trust authorities have looked at those companies.

Another conceptual metaphor is "president as CEO." The CEO metaphor re-frames political rule as a business operation, which makes executive overreach appear logical rather than dangerous.

You could reasonably argue that the president functions as a CEO, but the metaphor itself is there to manufacture consent for unchecked power.

Conceptual metaphors are insidious. PR firms and think tanks actively work to craft these insidious metaphors that shape conversations and how people think about the world. By the time you've used the metaphor, you've already accepted many of the implications of the metaphor without even knowing it.

https://commonslibrary.org/frame-the-debate-insights-from-do...


Patents are state-sanctioned monopolies. That is their explicit purpose. And for all the "shoulders of giants" and "science is a collective effort" arguments, none of them can explain why no Chinese company (a jurisdiction that does not respect Western patents) can do what ASML does. They have the money and the expertise, but somehow they don't have the technology.

Also, the Gucci brand does not have lawyers. The Gucci brand is a name, a logo, and an aesthetic. Kering S.A. (owners of Gucci), enforces that counterfeit Gucci products don't show up. The designers at Kering spend a lot of effort coming up with Gucci-branded products, and they generally seem to have the pulse of a certain sector of the market.

The analysis of Coke's supply chain is wrong. The supply chain Coke uses is pretty run-of-the-mill, and I'm pretty sure that aside from the syrup (with the aforementioned secret formula), they actually outsource most of their manufacturing. They have good scale, but past ~100 million cans, I'm not sure you get many economies of scale in soda. That's why my local supermarket chain can offer "cola" that doesn't quite taste like Coke for cheaper than Coke. You could argue that the brand and the marketing are the moat, but the idea that Coke has a supply chain management advantage (let alone a moat over this) is laughable.


> "Building a moat" frames anti-competitive behavior as a defense

This is a drastic take, I think to most of us in the industry "moat" simply means whatever difficult-to-replicate competitive advantage that a firm has invested heavily in.

Regulatory capture and graft aren't moats, they're plain old corrupt business practices.


The problem is that moat is a defensive word and using it to describe competitive advantage implies that even anti-competitive tactics are defensive because that's the frame under which the conversation is taking place.

Worse that "moats" are a good thing, which they are for the company, but not necessarily society at large. The larger the moat, the more money coming out of your pocket as a customer.

It is insidious.


This is like saying Usain Bolt's training regimen is anti-competitive. Leaning into your strengths as an organisation _is competing_.


From the logic of the matter, a competitive advantage is anti-competitive.


Those two concepts aren't mutually exclusive.


DeepSeek really shook them to their core. Now they go for regulatory capture. Such a huge disappointment. Open source AI will win: https://medium.com/thoughts-on-machine-learning/the-laymans-...


It's not just them. Everyone is scrambling.

US tech, and western tech in general, is very culturally - and by this I mean in the type of coding people have done - homogeneous.

The DeepSeek papers published over the last two weeks are the biggest thing to happen in AI since GPT-3 came out. But unless you understand distributed file systems, networking, low-level linear algebra, and half a dozen other fields at least tangentially, you'd not realize they are anything important at all.

Meanwhile I'm going through the interview process for a tier 1 US AI lab and I'm having to take a test about circles and squares, then write a compsci 101 red/black tree search algorithm while talking to an AI, being told not to use AI at the same time. This is with an internal reference being keen for me to be on board. At this point I'm honestly wondering if they aren't just using the interview process to generate high quality validation data for free.
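For what it's worth, the "compsci 101" exercise in question reduces to ordinary BST lookup: search in a red-black tree ignores node colors entirely, since coloring only matters for the rebalancing done on insert and delete. A minimal sketch (class and function names are my own, not from any actual interview):

```python
class Node:
    def __init__(self, key, color="red", left=None, right=None):
        self.key = key
        self.color = color  # "red" or "black"; irrelevant to search
        self.left = left
        self.right = right

def search(node, key):
    """Plain BST descent; O(log n) because red-black trees stay balanced."""
    while node is not None:
        if key == node.key:
            return node
        node = node.left if key < node.key else node.right
    return None

# Tiny hand-built tree with a valid coloring (black root, red leaves).
root = Node(2, "black", left=Node(1), right=Node(3))
assert search(root, 3) is root.right
assert search(root, 4) is None
```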

Fortunately, thanks to transformer models, I won't need to learn Chinese when our glorious leader Xi Jinping liberates us from the capitalist running dogs.


100%. Western tech needs the competition. They are very prone to navel-gazing simply because SV ended up being the location for tech once.

Funny how they like to crow about free markets, while also running to daddy government when their position is threatened.


Competition can only work when there is variation between the entities competing.

In the US right now you can have a death match between every AI lab, then give all the resources to the one which wins and you'd still have largely the same results as if you didn't.

The reason Deepseek - it started life as an HFT firm - hit as hard as it did is that it was a cross-disciplinary team with very non-standard skill sets.

I've had to try and head hunt network and FPGA engineers away from HFT firms and it was basically impossible. They already make big tech (or higher) salaries without the big tech bullshit - which none of them would ever pass.


> I've had to try and head hunt network and FPGA engineers away from HFT firms and it was basically impossible. They already make big tech (or higher) salaries without the big tech bullshit - which none of them would ever pass.

Can confirm. There are downsides, and it can get incredibly stressful at times, but there are all sorts of big-tech-imposed hoops you don't have to jump through.


> all sorts of big tech imposed hoops you don't have to jump through

Could you kindly share some examples for those of us without big tech experience? I assume you're talking about working practises more than just annoying hiring practises like leetcode?


which hoops ?


Engineers at AI labs just come from prestigious schools and don't have technical depth. They are smart, but they simply aren't qualified to do deep technical innovation.


What are you doing with FPGAs? I’m an FPGA engineer and don’t work at an HFT firm. Those types of jobs seem to be in the minority compared to all the aerospace/defense jobs and other sectors.


have you considered starting / joining a startup instead ?


> At this point I'm honestly wondering if they aren't just using the interview process to generate high quality validation data for free.

Not sure if that is accurate, but one of the reasons why DeepSeek R1 performs so well in certain areas is thought to be access to China's Gaokao (university entrance exam) data.


That's stupid. Same as India's IIT Advanced, for example. You learn all that stuff in year 1 physics and math at uni.


Yes, however you also want to distinguish correct from incorrect answers. You get that from the exams, not from year 1 textbooks.


The bottom is about to drop out, that's why. Ethics are out the window already, and it's gonna get worse as they claw to stay relevant.

It's a niche product that tried to go mainstream, and the general public doesn't want it. Just look at iPhone 16 sales and Windows 11: everyone is happier with the last version without AI.


They were always going for regulatory capture. I think deepseek shook them but I don't think we should rewrite the history as them being virtuous only until 2024.


Has OpenAI hired McKinsey yet?


I'm unsure if you can lay off AI


ai can.


unnecessary. mckinsey uses ai from openai.

embrace. extend. extinguish.

infiltrate. assimilate.

done, tovarisch ...

https://en.m.wikipedia.org/wiki/Tovarishch


As it is, this is a bullshit document, which I'm sure their lobbyists know; OSTP is authorized to "serve as a source of scientific and technological analysis and judgment for the President with respect to major policies, plans, and programs of the Federal Government," and has no statutory authority to regulate anything, let alone preempt state law. In the absence of any explicit Congressional legislation to serve to federally preempt state regulation of AI, there's nothing the White House can do. (In fact, other than export controls and a couple of Defense Production Act wishlist items, everything in their "proposal" is out of the Executive's hands and the ambit of Congress.)


You mean there's nothing the White House can do under the rule of law. There's plenty the White House can do under the color of law.


I heard something today and I wonder if someone can nitpick it.

If what the admin is doing is illegal, then a court stops it; if they appeal and win, then it wasn't illegal. If they appeal all the way up and lose, then they can't do it.

So what exactly is the problem?

Mind you, I am asking for nits, this isn't my idea. I don't think "the administration will ignore the supreme court" is a good nit.


The problem is that it's erasing a lot of precedent that has existed for hundreds of years in the US, and it's not apparent that the erasure will be a good thing for us. For example, the idea that the president has some kind of sovereign immunity from murder or theft or graft while they are the president, as long as they can justify it as carrying out a duty of the office, is pretty abhorrent. Military officers in the US have pretty strict requirements not to commit war crimes, and there is a pretty strong concept of an "illegal order." And what we're telling the president now is that because he or she is the executive there can be no such thing as an illegal order. So now the president can personally shoot people and have immunity from that because they are the president.

And you have people arguing that on the one hand the executive has had too much leeway to regulate, but then in the same breath saying that the executive now needs to unilaterally ignore the orders of past congresses in order to fix whatever perceived problems have led us here. Which is a kind of irony that makes me think that this is being done not to solve problems but to reshape our government for some other end. And all of this is compounded by the legislature's unwillingness to exercise the exact power that they have been granted, which is to change the law of the United States.

So in this situation it's hard to see the courts siding with these people as simply rationally applying the law, because the law itself as written by past legislatures is simply being ignored, as are past judicial precedents, because they are inconvenient to the goal of dismantling the US government. It's also extremely dangerous because the "full faith and credit" of the United States depends on us honoring our commitments even when they are inconvenient to us.


The fact that Chris Lehane is the one involved in this should tell you all you need to know about how on the level all this is.


For those of us who don’t recognize him by name, can you spell it out a little more clearly please?


Heavy hitter lawyer, PR expert. Some google terms: Masters of disaster, Spin cycle.


Sounds like a pleasant person.


I mean… he has supported at least one good cause I know of where the little guy was getting screwed way beyond big time and he stepped up pro bono. So I like him. But probably mostly a hired gun.


Was he not the one who led cover-ups for the Clintons?


Just learning about that guy and reading his Wikipedia page will give me nightmares for years to come.


Wouldn't be shocking if that were the case. Big companies often play both sides


>In a 15-page set of policy suggestions released on Thursday, the ChatGPT maker argued that the hundreds of AI-related bills currently pending across the US risk undercutting America’s technological progress at a time when it faces renewed competition from China. OpenAI said the administration should consider providing some relief for AI companies big and small from state rules – if and when enacted – in exchange for voluntary access to models.

The article seems to indicate they want all AI companies to get relief from these laws though.

