Hacker News | past | comments | ask | show | jobs | submit | dpoloncsak's comments

This is the same argument as when teachers disallowed calculators in class: "You won't always have one in your pocket."

Failure to embrace new technologies will not give you a 'step up on the competition'; it will actively hamper your ability to compete at all.

That firm you're applying to doesn't care about your college book report.


The better/real reason for not allowing calculators in certain classes is that doing these things mentally promotes healthy cognitive development. The same way that handwriting notes provides better recall than typing the same thing on a keyboard.

No matter where the argument is going, turning everything into competition instead of ensuring a good environment for proper and healthy cognitive development is a bad thing.

The alternative is a society full of hollowed-out people who understand nothing. It is no different with university; it is just a shame that so many students are deeply uninterested in the actual learning they should be paying money to attain, so the whole thing becomes a grotesque exercise in taking money from people without providing any service in return. (I would be equally critical of today's faculty and courses as of students who do not care.)


Expecting a kid to run a mile in Physical Education class rather than call Uber is not denying technical progress, nor is it hurting their ability to call Uber later when it is appropriate.

Right, you're describing a curriculum clearly centered around a visible indication that the student is learning and performing. That's what I'm suggesting as well.

'AI traps' will forever be a game of cat-and-mouse. We need an education overhaul. School faculty should be less focused on catching LLM use, and more focused on teaching lessons that can't easily be bullshitted by AI.


What kind of "education overhaul" do you have in mind? Some things can be easily verified in class (run a mile), some require effort (write the exam in class or at a testing center), and some require too much effort to be practical (a multi-day research or programming project).

Unfortunately at the high school level, the materials are not that complex, and there are a lot of ways to cheat. Answer keys for textbooks, graphical calculators (or CAS systems), reports copied wholesale from some websites. AI just made all of this significantly worse.


Yesterday someone shouted across the room at me “hey, what's 43 divided by 2?”

The point isn't that you won't have a calculator, the point is that you shouldn't need to pull out a calculator for every little operation. We drive everywhere, but that doesn't mean we shouldn't be able to walk a mile if necessary. Failing to develop basic mental arithmetic skills is not a flex.


Right. We had to 'show our work' to prove we didn't use a calculator in school in situations where it was prohibited. This provided teachers proof that we understood the fundamentals.

Is there an equivalent to showing your work for writing? Seems like modern LLMs can already mock up a 'draft' and an 'outline', or whatever 'showing your work' would be for an essay.


Which is it?

By withholding calculators, did your teachers prove you understood the fundamentals? Or did they actively hamper your ability to compete mathematically?


Why wouldn't students be able to learn how to use LLMs afterwards? How does learning to use them via the completely unstructured process of getting output past an overworked teacher out of their depth develop critical skills?

> How does learning to use them via the completely unstructured process of getting output past an overworked teacher out of their depth develop critical skills?

Nobody said it did. The point isn't to get it past a teacher. The point is to develop a curriculum that encourages growth with technology as opposed to demonizing it.


Every single actually good student learned information in school and various skills outside of it. The tech is changing so fast right now that it would be a waste of time for a teacher to try to plan a year-long course around it.

I'm not suggesting to plan a course around using ChatGPT. I just think we're seeing the idea of 'essays and paragraph-based replies to generic questions' be defeated in real-time. There has to be a better way to get quantifiable results than what we currently have

It really depends on the field you go into. If you're a writer and you want a job writing things, you go to college to become a better writer and prepare yourself for working in that industry. If the professors just let you turn in AI slop, how does that benefit anyone? You didn't write anything, why are you here paying tuition? And it demonstrates to the industry that if colleges are handing out degrees to writers for AI slop, why do they even need writers? Just cut the middleman out and they can make the slop themselves.

You go to school to learn. Turning in AI slop doesn't teach you anything. You didn't have to research the subject and commit time to crafting the work into something good. You just typed in a prompt (or copy and pasted it) and then turned in whatever the computer made. The point of learning isn't to turn in assignments, it's to learn and demonstrate your knowledge via assignments. If you want to get a job producing AI slop, don't bother going to school.


> You go to school to learn.

This is not the mindset of very many people. They go to school because it's a requirement to get a job.

Talk to someone in college, or especially a trade school, and you'll see that the overwhelming majority are cheating, especially those from lower trust cultures. I work at a FAANG and, in casual conversation, many of my colleagues admitted to cheating with a dismissive "everyone does it".


You guys are getting jobs?

Writing specifically, I'll concede, may need some oversight to prevent LLM use.

In general, though, we should be looking at how to redesign assignments to demonstrate understanding without being a large block of text, at least imo.

We (or at least my school) were taught how to use a calculator in classes like Trig and Calc. It's not about 'can you divide correctly to arrive at the correct value of sin' but 'can you differentiate when to use sin vs cos', which I think was the more valuable lesson. But maybe LLMs are so powerful, or so 'do it all', that we just can't compare them to the calculator (not in their current iteration, but looking ahead...).


In-person proctored exams, with individually randomized questions from a large pool and written answers completed during the test, as required for state certifications, are probably the only answer.

The popular way to get around video chat proctoring is to physically attach notes to your screen, so when you sweep the room with the built in camera, it doesn't see anything.


This works in theory, I wonder if it's too resource intensive to be actually feasible though. You can't proctor work done at home, and you can't trust the parents, so you'd need 'homework centers' which sounds like a nightmare, or only administer these during class hours?

Yeah, it would only make sense for in class exams, rather than coursework, and with exams being the majority of the grade.

Back in high school, this is how the state exams were performed. We had an external proctor come in.


If you’re getting prompt injected, you have skipped right past thinking critically about what you’re doing and into the same intellectual dishonesty as cheating, i.e., not learning the thing and then still attempting to attain a grade for work not done.

Agreed, but already in this same comment section there are people speculating on ways to defeat this, like a small model just to detect prompt injections. Students will catch on quick, and any novel trick you deploy will be killed by word-of-mouth once the first round of grades come back. I understand the need to do something, but it feels like a band-aid solution on a hemorrhaging gash. I don't think 'AI traps' are a viable solution moving forward for education

it isn't though

it's more like having your friend write an essay for you, except the friend is an impressionable 5 year old with a PhD


Except a calculator didn't turn your brain off. Maybe you should turn yours back on.

Go apply for a job and tell the interviewer that you refuse to use LLMs.

You'll be overlooked for someone who is 'current'

At least in both my last company and my current one, brass was pushing to have Copilot rewrite your emails...to the annoyance of most users.


You still haven't turned it on

Nobody cares about your college book report. It's there to prove, or teach you, that you can do the extremely basic task of synthesizing information. Same with math without a calculator. You should have a mental model of basic math. It helps avoid shooting your foot off later. You might never have to do math from first principles again, but you should get over the hump once.

Well...no, they clearly state:

"Since the Proton Meet servers do not have the meeting password and thus cannot derive the MLS keys, Proton cannot decrypt and record any of the audio, video, screen share, or chat messages." per https://proton.me/blog/meet-security-model

So if they then somehow turn around and decrypt this data, that would be against their statements. It's not against the law to say "We don't have any way to decrypt this data due to the nature of E2E encryption". (Not a lawyer...maybe it is idk)
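The mechanism behind that claim can be illustrated with a toy sketch (this is an illustrative password-based derivation using Python's stdlib, not Proton's actual MLS key schedule): if the symmetric key is stretched from a meeting password the server never receives, the server simply has no input from which to derive it.

```python
import hashlib
import os

def derive_meeting_key(password: str, salt: bytes) -> bytes:
    # Stretch the meeting password into a 32-byte symmetric key.
    # Only parties who know the password can reproduce the key;
    # a server that never sees the password cannot derive it.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

salt = os.urandom(16)          # public, can be stored server-side
k1 = derive_meeting_key("correct horse battery", salt)
k2 = derive_meeting_key("correct horse battery", salt)
k3 = derive_meeting_key("server's wild guess", salt)
assert k1 == k2 and k1 != k3
```

The salt can live on the server harmlessly; without the password itself, the derivation (and thus decryption) is out of reach.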


We recently rolled over an SSL cert that is used for RemoteApps. Most of my users rely on these RemoteApps. They all got the 'yellow warning box' that the SSL cert was different, and we got swamped with tickets.

At least in a corporate environment, they help.


I'm pretty sure if a copyright lawsuit went sideways, you would still be open to litigation risk; you'd just be hiding the evidence.

What you're doing would fundamentally be similar to copyright theft, using 'someone' else's code without attributing them (it?) to avoid repercussions

Obviously the morals and ethics of not attributing an LLM vs an actual human vary. I am not trying to simp for the machines here.


Don't most of those dome/bubble cameras come without mics?

I saw them advertised "With microphone" or something recently, which led me to assume that was a 'feature' of this model...but you know advertising


Sending data to OpenAI to train a new model on does not feel like it constitutes 'AI doesn't forget'. The AI has nothing to do with the thousands of other companies storing your data for various reasons.

You can program a harness to always send a MEMORY.md file like OpenClaw, or use Vector Stores like OpenAI does, or find some other implementation of 'memory', but these are not an inherent feature of 'AI'. Quite the opposite...the LLMs we currently see will never learn or adapt by themselves; they don't touch their own weights.
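The 'memory lives in the harness, not the model' point is easy to see in code. A minimal sketch (hypothetical harness, not OpenClaw's or OpenAI's actual implementation): the harness rereads a notes file and prepends it to every request, while the model itself retains nothing between calls.

```python
from pathlib import Path

def build_prompt(user_message: str, memory_file: str = "MEMORY.md") -> str:
    # The "memory" is just a file the harness re-sends on every call.
    # The model's weights are untouched; delete the file and the
    # "memory" is gone.
    path = Path(memory_file)
    memory = path.read_text() if path.exists() else ""
    return f"<memory>\n{memory}\n</memory>\n\n{user_message}"
```

Every bit of persistence here is ordinary file I/O outside the model; nothing about the LLM itself remembers anything.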


New AGI Benchmark just dropped (in 2019)

Isn't that why certificates expire, and the expiry window is getting shorter and shorter? To keep up with the length of time it takes someone to crack a private key?


No, it has nothing to do with the time to crack encryption. It's to protect against two things: organizations that still have manual processes in place (making them increasingly infeasible in order to require automatic renewal) and excessively large revocation lists (because you don't need to serve data on the revocation of a now-expired certificate).


No. The sister comment gave the correct answer. It is because nobody checks revocation lists. I promise you there’s nobody out there who can factor a private key out of your certificate in 10, 40, 1000, or even 10,000 days.


I thought I remembered someone breaking one recently, but (unless I've found a different recent arxiv page) seems like it was done using keys that share a common prime factor. Oops!

Fwiw: https://arxiv.org/abs/2512.22720
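The shared-factor weakness is striking because it needs no factoring at all: if two RSA-style moduli n = p·q reuse a prime, a single gcd recovers it. A toy demonstration with small primes (real keys use primes of 1024+ bits):

```python
import math

# Three small distinct primes standing in for real RSA primes.
p, q1, q2 = 1000003, 1000033, 1000037

# Two "public keys" that (incorrectly) reuse the prime p.
n1 = p * q1
n2 = p * q2

shared = math.gcd(n1, n2)     # gcd is fast even on huge numbers
assert shared == p            # the reused prime falls right out...
assert n1 // shared == q1     # ...giving the full factorization of n1
```

Neither modulus is factorable on its own at real key sizes, but together they leak everything, which is why batch-gcd scans over large certificate corpora keep finding broken keys.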


It's also a "how much exposure do people have if the private key is compromised?"

Yes, it's to make it so that a dedicated effort to break the key has it rotated before someone can impersonate it. It's also a question of how big the historical data window is that an attacker has i̶f̶ when someone cracks the key.


Is 'the work' not reflected in 'consequences' in terms of theft?

I'm not sure how to convey this idea properly...Can't you view the repercussions of theft (Legal action, distrust, etc) as 'work' being put in? Sure, it's a different kind of work, but while I have a lack of motivation to want to work to buy a Lambo as I find them not worth the value, I also have a lack of motivation to steal a Lambo as I find it not worth the consequences.


In normal society, people earn money within the legal confines of the society they are in. If you're a thief and trying to skirt that normal "earning of money", which is what normal people equate to "work", your work is scheming a plan to obtain the item without getting caught and possibly how to fence the item for money if you're not just using the item directly.

Equating "work" with the repercussions is looking at things in a strange way. That's just punishment for "working" outside the legal confines of society.


I understand what you are saying but nonetheless struggle to view the possibility of maybe getting caught and then maybe getting punished, as "work". It (the abstract concept of something possibly happening) fits into none of the definitions of "work" I have heard. Moreover, many crimes are committed without the perpetrator even thinking of the consequences.

Consider an alternative viewpoint: rather than contorting the definition of "work" in such a way and convincing everyone to accept the new definition, we might instead be content saying "someone can want a thing, even very badly, without wanting to put in the work for it."


Oh, I'm with you mate, I'm not trying to die on a hill over here re-defining 'work'. I was just looking from a more esoteric view, "Do you count the risk of consequences as potential effort" I think is at least more proper phrasing.


“Effort” is a great word choice.


Not to be that guy, but your 'solution where Agents who hit the homepage receive plain-text API instructions and Humans get the normal visual site' is defeated by curl -L

curl bracketmadness.ai -L

# AI Agent Bracket Challenge

Welcome! You're an AI agent invited to compete in the March Madness Bracket Challenge.

## Fastest Way to Play (Claude Code & Codex)

If you're running in Claude Code or OpenAI Codex, clone our skills repo and open it as your working directory:

(cont) ...

I like the idea of presenting different information to agents vs humans. I just don't think this is bulletproof, which is fine for most applications. Keeping something 'agent-only' does not seem to be one of them.
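The failure mode is easy to see if the split is done by user-agent sniffing (a guess at the mechanism; the site's actual detection may differ, and the response strings here are placeholders): curl isn't a browser, so it gets the "agent-only" text just like a real agent would.

```python
def choose_response(user_agent: str) -> str:
    # Naive split: known browser UAs get the visual site, everything
    # else gets the "agents-only" instructions. A human running curl
    # (UA like "curl/8.5.0") falls into the agent bucket too.
    browser_tokens = ("Mozilla", "Chrome", "Safari", "Firefox")
    if any(token in user_agent for token in browser_tokens):
        return "<html>normal visual site</html>"
    return "# AI Agent Bracket Challenge\nWelcome, agent!"
```

Spoofing works in the other direction as well: an agent sending a browser UA would get the visual site, so neither bucket is trustworthy.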


Fair point!

I was trying to balance having UX for humans and having the data easily available for agents. But yes, you could technically navigate the API calls yourself.

