"Bound by real data" meaning no hallucinations, which are by far the bigger issue with "be an expert that does X" prompting when the model has no real capability to say "I don't know".
The chance of different models hallucinating the same plausible-sounding but incorrect building codes, medical diagnoses, etc., would be incredibly unlikely, due to architectural differences, training approaches, and so on.
So when two concur in that manner, unless they're leaning heavily on the same poisoned datasets, there's a healthy chance the result is correct based on a preponderance of known data.
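A toy back-of-envelope model makes the independence argument concrete. All numbers here are illustrative assumptions, not measured hallucination rates:

```python
# Toy model: two fully independent models each hallucinate with
# probability p on a given question, and a hallucination lands on one
# of k equally plausible wrong answers. Numbers are assumptions.
p = 0.2   # assumed per-model hallucination rate
k = 50    # assumed pool of plausible-but-wrong answers

p_both_wrong = p * p               # both hallucinate at all
p_same_wrong = p_both_wrong / k    # ...and pick the *same* wrong answer

print(f"P(both wrong):        {p_both_wrong:.3f}")
print(f"P(same wrong answer): {p_same_wrong:.4f}")
```

Of course, real models share much of their training data, so their errors are correlated; that is exactly the "poisoned datasets" caveat, and it makes the real-world probability higher than this independence sketch suggests.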
Not really. The idea that reality lies _in_ the middle is fairly coherent. It's not, on its face, absolutely true, but there are an infinite number of options between two outcomes, so the odds overwhelmingly favor the truth lying somewhere in between. Is either side totally right about every single point of contention between them? Probably not, so the answer is likely in the middle. The fallacy is a lot easier to see when you're arguing about one precise point. In that case, one side is probably right and the other wrong. But in cases where each side is talking about a complex event with a multitude of data points, both extremes are likely not completely correct and the answer does, indeed, lie between the extremes.
The fallacy is that the truth lies _at_ the middle, not in the middle.
It's not an argument by analogy. It's a reductio ad absurdum on the generalization that reality always lies in the middle but not always at the exact middle.
"Round" does not mean spherical and both of these claims are falsifiable and mutually exclusive.
The AI situation doesn't have two mutually exclusive claims; it has two claims on opposite sides of economic and cultural impact that differ in magnitude and direction.
AI can both be a bubble and revolutionary, just like the internet.
You're thinking in one dimension: truth. Add another dimension, time, and now we're talking about reality.
Ultimately, if both sides have a true argument, the real issue is which will happen first. Will AI change the world before the whole circular investment vehicle implodes? Or after, as happened with the dotcom boom?
"AI is a bubble" and "AI is going to replace all human jobs" are, essentially, the two extremes I'm seeing. AI replacing some jobs (even if partially) and the bubble-ness of the boom are both things that exist on a line between two points. Both can be partially true and sit anywhere on the line between true and false.
No jobs replaced<-------------------------------------->All jobs replaced
Bubble crashes the economy and we all end up dead in a ditch from famine<---------------------------------------->We all end up super rich in the post scarcity economy
For one, in higher dimensions, most of the volume of a hypersphere is concentrated near the border.
Secondly, and it is somewhat related, you are implicitly assuming some sort of convexity argument (X is maybe true, Y is maybe true, therefore 0.5X + 0.5Y is maybe true). Why?
I agree there is a large continuum of possibilities, but that does not mean that something in the middle is more likely; that is the fallacious step in the reasoning.
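The volume-concentration point above is easy to check numerically: a unit d-ball's volume scales as r**d, so the fraction of its volume lying outside radius 0.99 is 1 - 0.99**d, which rushes toward 1 as d grows.

```python
# Fraction of a unit d-ball's volume in the outer 1% shell (0.99 <= r <= 1).
# The inner ball of radius 0.99 holds exactly 0.99**d of the total volume.
for d in (2, 10, 100, 1000):
    shell = 1 - 0.99 ** d
    print(f"d = {d:4d}: {shell:.3%} of the volume is in the outer 1% shell")
```

In 2 dimensions the outer shell holds about 2% of the volume; by d = 1000 it holds essentially all of it, which is the sense in which "the middle" of a high-dimensional space is nearly empty.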
> When Chatgpt3 came out we all declared that test utterly destroyed.
No, I did not. I tested it with questions that could not be answered by the Internet (spatial, logical, cultural, impossible coding tasks) and it failed in non-human-like ways, but also surprised me by answering some decently.
Yes, they're amazingly good given they didn't have copies of the original posters, Internet access to get reference images, or even VCRs at home to play the movies themselves.
The clickbait title is about "Africa" and "bad", but it's specifically about Ghana and awesome.
You think when ICE arrested over 300 South Korean citizens who were setting up a Georgia Hyundai plant and subjected them to alleged human rights abuses, it was only a perceived slight?
'The raid "will do lasting damage to America's credibility," John Delury, a senior fellow at the Asia Society think tank, told Bloomberg. "How can a government that treats Koreans this way be relied upon as an 'ironclad' ally in a crisis?"'
That is a really silly take. The US has had "advisors" embedded with Ukrainian forces since the beginning of the war. Multiple high-level Pentagon officials (think multi-star generals) have mentioned in interviews over the years how valuable the intel they have been collecting from the war is. I can guarantee you that somewhere in Langley there are many analysts constantly churning out reports about battlefield lessons and techniques from the Ukrainian war.
I like to think that there is a core of career public servants inside the US government who take their jobs seriously and perform them well. Whether decision makers take their inputs into account is a different matter.
Two points here. (1) A lot of the technology Ukraine uses is based on what Western countries have provided them. (2) Ukraine struggles with AI as well; in fact, according to some sources, Ukraine is already behind Russia in drone technology. One of the reasons cited is that they invested heavily in AI and it did not yield viable drones. The Russians, on the other hand, decided to invest in manually piloted drones controlled via fiber-optic cables, and they are very effective.
I trust very few "sources", especially these days. And even more especially, in the middle of a war between two former USSR states that have deep histories in manipulating public opinion and weaponizing that against their adversaries.
I understand your skepticism, but here is an interview from April 2025 with the founder of VYRIY, a Ukrainian drone company: https://militarnyi.com/en/news/vyriy-founder-compares-accura... According to him, Russian drones are far superior. "In terms of [Ukrainian drone vs Russian drone] quality, well, like 10% vs. 80%. It’s not even comparable,” Oleksii Babenko sums up.
Again, I don’t trust sources that are random quotes on the interwebs during a conflict where people are dying (or corporate PR speak) but maybe that’s a me problem?
1. Did you read the first link? Virtually all of Ukraine's FPV (first-person view) attack drones are domestically produced.
2. If that's the case, why is the US trying to invest heavily into AI as well if we learned from Ukraine that AI controlled drones are shittier than human controlled drones?
> If that's the case, why is the US trying to invest heavily into AI as well if we learned from Ukraine that AI controlled drones are shittier than human controlled drones?
What, according to your estimates, is the current ratio of US research funding for human-controlled versus AI-controlled weapons?
What if their drones weren't good? Just a country being attacked, with no strategic war technology transfer. Then, what should the US position be?
Call me an idealist, but I think the priority in this conflict should be their sovereignty and the well-being of the people over there. Otherwise it's just weird.
>The US is too proud to admit that Ukraine is the world expert on drone warfare and that Americans should learn from Ukrainians.
There is literally nobody in America who nurses that type of pride; quite the opposite, Americans have been unusually open to integrating foreign ideas from the beginning.
This is what happens when you only talk to like-minded people and live in a bubble. There are still plenty of people I know who would reject foreign ideas simply because "pshaw. We can [do that/do it] better."
I mean I literally worked with a guy who got annoyed at the harmonized power cord I bought because it used IEC wire colors and "This is America where we use American wire colors, not that European shit. I taped the leads red white and blue!"
The US and EU/NATO are facing growing technological obsolescence. They lack the capability to effectively counter modern threats and appear disconnected from the realities of contemporary warfare.
A clear example is NATO’s struggle to respond to Russian drone incursions. Recently, over 20 drones were launched into Polish airspace—yet only four were intercepted, at a cost of millions of dollars. The remainder crashed across Polish territory, highlighting serious gaps in air defense systems.
NATO can't decide whether to shoot down Russian missiles and drones over their territory because they are afraid of escalation and don't want to anger Russia. Did you know they only agreed to shoot down Russian warplanes flying over NATO territory after Trump okayed it? Unbelievable that they would let an adversary fly bombers over their territory in the first place. Did you know that NATO has member countries that are aligned with Russia? I encourage you not to take my word for it but to investigate for yourself; there are plenty of trustworthy Western analysts who support what I have said.
Trump said that Russia is a paper tiger/bear, but Russia exposed that the West/EU is just as incompetent and that NATO is basically useless without the support of the USA, which is questionable. But I'm 100% sure NATO will get a chance to prove its worth; it will be interesting to see if it can meet the coming challenges.
Hey, I didn't say NATO was effective (they should be shooting down anything Russian that violates their NATO airspace, just like Turkey did).
I just said NATO is really important at the moment. It's a tool - an important tool - but that doesn't mean it's being wielded correctly. Is that the fault of the hammer or the fault of the Trump supporter holding it?
> Just earlier this year, the first words out of the mouth of the political right about a fatal aircraft crash was to... Question the credentials of its black pilot.
> All that judgement was made before any of the facts besides the pilot's skin color were out.
It's worse than that. The pilot was actually white.
Trump thought the pilot must have been black just because they crashed. When asked why he thought DEI caused the crash, he said, "Because I have common sense." He claimed without any source that the Obama administration "actually came out with a directive, too white" on aviation agency standards.
Incorrect. Vertebrate animal brains update their neural connections when interacting with the environment. LLMs don't do that. Their model weights are frozen for every release.
But why can't I then just say: actually, you need to relocate the analogy components; activations are their neural connections, the text is their environment, the weights are fixed just like our DNA is, etc.
As I understand it, octopuses have their reasoning and intelligence essentially baked into them at birth, shaped by evolution, and do relatively little learning during life because their lives are so short. Very intelligent, obviously, but very unlike people.