In that analogy, "someone" is an AI, which of course switches from answering questions from humans to answering questions from other AIs, because the demand is 10x.
> Governments have typically expected efficiency gains to lower resource consumption, rather than anticipating possible increases due to the Jevons paradox
I think it's true that governments want the efficiency gains, but it's false that they don't anticipate the consumption increases. Nobody is spending trillions on datacenters without knowing that demand will increase; that doesn't mean we shouldn't make them efficient.
"You have to assume that any work done outside classroom has used AI."
That is just such a wildly cynical point of view, and it is incredibly depressing. There is a whole huge cohort of kids out there who genuinely want to learn and want to do the work, and feel like using AI is cheating. These are the kids who, ironically, AI will help the most, because they're the ones who will understand the fundamentals being taught in K-12.
I would hope that any "solution" to the growing use of AI-as-a-crutch can take this cohort of kids into consideration, so their development isn't held back just to stop the less-ethical student from, well, being less ethical.
What possible solution could prevent this? The best students are learning on their own anyway; the school can't stop students from using AI for their personal learning.
There was a Reddit thread recently asking whether all students are really doing worse, and it basically said that there are still top performers performing at the top, but that the middle has been hollowed out.
So I think, I dunno, maybe depressing. Maybe cynical, but probably true. Why shy away from the truth?
And by the way, I would be both. Probably would have used AI to further my curiosity and to cheat. I hated school, would totally cheat to get ahead, and am now wildly curious and ambitious in the real world. Maybe this makes me a bad person, but I don't find cheating in school to be all that unethical. I'm paying for it, who cares how I do it.
Well, it seems the vast majority doesn't care about cheating, and is using AI for everything. And this is from primary school to university.
It's not just that AI makes it simpler; many pupils cannot concentrate anymore. TikTok and others have fried their minds. So AI is a quick way out for them. Back to their addiction.
As someone who had a college English assignment due literally just yesterday, I think that "the vast majority" is an overstatement. There are absolutely students in my class who cheat with AI (one of them confessed to it and got a metaphorical slap on the wrist with a 15-point deduction and the opportunity to redo the assignments, which doesn't seem fair but whatever), but the majority of my classmates were actively discussing and working on their essays in class.
Whatever solution we implement in response to AI, it must avoid hurting the students who genuinely want to learn and do honest work. Treating AI detection tools as infallible oracles is a terrible idea because of the staggering number of false reports. The solution many people have proposed in this thread, short one-on-one sessions with the instructor, seems like a great way to check if students can engage with and defend the work they turned in.
Sure, but the point is that if 5% of students are using AI then you have to assume that any work done outside classroom has used AI, because otherwise you're giving a massive advantage to the 5% of students who used AI, right?
11% success rate for what is effectively a spear-phishing attempt isn't that terrible and tbh it'll be easier to train Claude not to get tricked than it is to train eg my parents.
What?! 1 in 10 successfully phished is OK? That's 1 in 10 page views. Over a week or a month of browsing the web, with targeted ads and/or link farms to get the page click, that has to approach a 100% success rate.
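A rough back-of-the-envelope making that concrete, assuming each page view is an independent exposure at the reported 11% rate (independence is an assumption; real browsing isn't that clean):

```python
# Back-of-the-envelope: chance of at least one successful phish after n
# independent exposures, each with an 11% per-exposure success rate.
per_exposure = 0.11

for n in (1, 10, 20, 40):
    p_any = 1 - (1 - per_exposure) ** n
    print(f"{n:>3} exposures -> {p_any:.1%} chance of at least one success")
# 1 -> 11.0%, 10 -> 68.8%, 20 -> 90.3%, 40 -> 99.1%
```

Even at a dozen or so exposures a week, the cumulative odds climb fast.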
A one-in-ten rate for scams that take hours on the phone with a person armed with detailed background info and spoofed details is one issue. One in ten people who see a random message on social media is another.
Like, 1 in 10 traders on the street trying to overcharge me is different from 1 in 10 PNGs I see being able to drain my account.
The kind of attack vector is irrelevant here; what's important is the attack surface. Not to mention this is a tool facilitating the attack, with little to no direct interaction with the user in some cases. Just because spear-phishing is old and boring doesn't mean it cannot have real consequences.
(Even if we agree with the premise that this is just "spear-phishing", that is honestly a semantics argument that is irrelevant to the more pertinent question of how important it is to prevent this attack vector.)
> Claude not to get tricked than it is to train eg my parents.
One would think, but apparently from this blog post it is still susceptible to the same old prompt injections that have always been around. So I'm thinking it is not very easy to train Claude like this at all. Meanwhile, with parents you could probably eliminate an entire security vector outright if you merely told them "bank at the local branch" or "call the number on the card for the bank; don't try and look it up."
Roam has always felt like a bit of a chore -- while it's easy enough to set up backlinks, having to do that one step has always felt like a waste of time to me. This is the kind of thing that imo an agentic workflow could do for you:
- Just start typing
- Let the LLM analyze what you're typing, given the RAG database of everything else you've added, so it can make those kinds of correlations quickly (sketched below).
- One-button approve the backlinks that it's suggesting (or even go Cursor-style yolo mode for your backlinks).
Then, have a periodic process do some kind of directed analysis; are you keeping a journal, and want to make sure that you're writing enough in your journal? Are you talking about the same subjects over and over again? Should you mix things up? Things like that would be perfect for an LLM to make suggestions about. I don't know if Roam is thinking of doing this or not.
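A minimal sketch of what that backlink-suggestion step could look like, assuming you already have note embeddings and a small in-memory index. The `embed` function and `NoteIndex` class here are hypothetical stand-ins, not anything Roam actually exposes; a real version would swap in a proper sentence-embedding model and vector store:

```python
# Hypothetical sketch: suggest backlinks for the note being typed by
# finding semantically similar existing notes via embeddings.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashed bag-of-words embedding; swap in a real embedding model."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class NoteIndex:
    def __init__(self):
        self.titles: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, title: str, body: str) -> None:
        self.titles.append(title)
        self.vectors.append(embed(body))

    def suggest_backlinks(self, draft: str, top_k: int = 5, threshold: float = 0.3):
        """Return (title, similarity) pairs for existing notes similar to the draft."""
        q = embed(draft)
        sims = [float(np.dot(q, v)) for v in self.vectors]  # vectors are unit-norm
        ranked = sorted(zip(self.titles, sims), key=lambda t: t[1], reverse=True)
        return [(title, s) for title, s in ranked[:top_k] if s >= threshold]

# The UI would then surface these as one-click "[[title]]" insertions to approve
# (or auto-apply them in a Cursor-style yolo mode).
```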
But... backlinks are fully automated, if you just make the forward-links you'd normally make in the course of writing.
You're thinking of an optional step of adding extra links "just because", but IMO that's a learning process for the beginning, when you're not used to adding any forward-links whatsoever.
IMO the 3 table-stakes features for a notetaking app in 2025 are AI-powered search (including a question-answering capability), showing related / recommended notes (via RAG), and automated clustering (k-means + LLM) to maintain a category hierarchy.
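A sketch of the clustering piece, assuming note embeddings are already computed; scikit-learn's `KMeans` does the grouping, and the LLM naming step is stubbed as a hypothetical `label_cluster` call:

```python
# Hypothetical sketch of "automated clustering": k-means over note embeddings,
# then an LLM names each cluster to build a category hierarchy.
import numpy as np
from sklearn.cluster import KMeans

def cluster_notes(embeddings: np.ndarray, titles: list[str], k: int = 8):
    """Group notes into k clusters; returns {cluster_id: [titles]}."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    clusters: dict[int, list[str]] = {}
    for title, label in zip(titles, labels):
        clusters.setdefault(int(label), []).append(title)
    return clusters

def label_cluster(titles: list[str]) -> str:
    """Stand-in for an LLM call that names a cluster from its member titles,
    e.g. prompt: 'Give a short category name for notes titled: ...'."""
    return ", ".join(titles[:3]) + ", ..."

# Usage: build the category hierarchy from the clusters.
# for cid, member_titles in cluster_notes(emb_matrix, all_titles).items():
#     print(label_cluster(member_titles), "->", len(member_titles), "notes")
```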
I think this might be the most exciting use-case of LLMs I've seen suggested here. I've struggled with exactly this problem with note-taking and personal knowledge bases.
I'd love to have this but only if it runs entirely on my own machine or on a server I own. Uploading all my notes to somebody else's cloud is a nonstarter.
I would imagine you could launch a new rack, dump the old one, and connect the new one to the existing solar / cooling array. Hopefully with some sort of re-entry and recycling plan for the old one. The sheer size the arrays are going to need to be feels like it's going to be the more important part of it.
Tesla is only in business today because it was able to sell carbon credits to other automakers. Take that government subsidy away and Tesla would have died in 2009.
The real problem is that American consumers are demanding these gigantic monstrosity SUVs and trucks which literally cannot fit on European streets. When Ford et al were making hot hatchbacks, they were incredibly popular overseas. The inefficiency is at the consumer level.
My town is filled with massive Ford pickups. Pristine and clean, nothing in the beds. These people are not 'utilizing' the thing; it's just a status symbol. Annoys me so much.
European streets? My American city isn't even that old - much of the infrastructure is mid-90s - but modern vehicles just barely fit in the parking lots throughout it. It's common to see some asshole parking their pickup horizontally.
The bespoke UK/EU models are not the priority, again because they aren't being made in the US, so yes the quality drops.
You cannot get, for example, a new Focus in the US market. When you could, they were much higher quality.
The only Chevrolet you can buy in the UK is the Corvette. Chevrolet makes nine SUVs, four trucks (with however many infinite variations), and exactly one shitbox non-Corvette car.
If US automakers started turning their eyes towards smaller, more efficient cars, where hauling Brayden to and from their soccer games didn't require multiple tons of steel, then they could compete in the EU market.
TBF, pretty soon you won't be able to buy a new Focus anywhere else; production finishes this year. Stellantis is still making cars of a similar size and could brand them as Chrysler for the US market.
I'm curious about what "reasonable amount of hosting" means to you, because in my experience, as your internal network's complexity goes up, it's far better for you to move systems to a hyperscaler. The current estimate is that >90% of Fortune 500 companies are cloud-based. What is it that you know that they don't?
https://en.wikipedia.org/wiki/Jevons_paradox