
Happy to see that meth is becoming more affordable. Maybe inflation isn't so bad after all, if we consider all the things we have access to that have come down in price compared to a few decades ago. /s

I was writing an obfuscator recently. I had the model deobfuscate and optimize the code back to the original, and I kept improving the obfuscator until it couldn't. The funny thing is that after all this I also ended up with a really strong deobfuscator and optimizer, which is probably more capable than most commercial tools.
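The loop is roughly this (a minimal Python sketch; `obfuscate` and `llm_deobfuscate` are hypothetical toy stand-ins for the real transform passes and the model call):

```python
# Sketch of the adversarial hardening loop: obfuscate, ask the model to
# recover the original, and ratchet up strength until it fails.
# Both helpers below are toy stand-ins, not real implementations.

def obfuscate(src: str, strength: int) -> str:
    """Toy stand-in: a real version would apply `strength` transform passes."""
    return src[::-1] * strength

def llm_deobfuscate(blob: str) -> str:
    """Toy stand-in for the model call that tries to recover the source."""
    return blob

def harden(src: str, max_rounds: int = 10) -> int:
    """Raise obfuscator strength until the model can't recover `src`;
    return the strength level reached."""
    strength = 1
    for _ in range(max_rounds):
        recovered = llm_deobfuscate(obfuscate(src, strength))
        if recovered != src:   # model failed: current strength is enough
            return strength
        strength += 1          # model succeeded: make it harder
    return strength
```

With these stubs the "model" fails immediately, so `harden` returns 1; wiring in a real model call and real transform passes (and a semantic-equivalence check instead of string equality) is where the actual work is.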

The solution is just to make CTFs harder, but when do CTFs become too hard? Maybe the problem is that 'hard' CTFs are fundamentally too 'simple': just a logic chain and an exhaustive brute force toward a solution, since there really are limited ways to express a solution in plain sight.

Or maybe human creativity has been exhausted and we're not so limitless as we thought. Only time will tell.

I had another idea spring to mind: we could hide two flags, one of which could only be found by AI agents and not by humans or tools written by humans.


Interesting. What I did recently is basically the same as this: I tried to push the limits of a JS obfuscator as far as possible by repeatedly forcing GPT/Claude to deobfuscate the final output, then having GPT improve the tool to break the deobfuscator.

Did you publish it somewhere? Here's a sample of my JS obfuscator's output: https://gist.github.com/Trung0246/c8f30f1b3bb6a9f57b0d9be94d...


A portion could require astral projection and computers can't do that. Or maybe just a VR mini-game like the 90s always imagined.

bringing CTF solutions into the real world is a really good idea! I didn't even think of this until you mentioned it.

We have very powerful simulation tools, so something like "project a pattern at these angles" wouldn't really work, as you could simulate that.

I guess something cool is that we can make simulating the solution very expensive, while in the real world it would be free since it's analog. As long as the simulation takes longer than it takes a human to find a solution, it would be a pretty good way to deal with it. I'm sure people smarter than me can come up with something.

Maybe I was too early to dismiss human creativity.


Maybe CTF is dead, but there are plenty of fun problems in the real world -- ask any scientist, engineer, or medical researcher.

There are a million places where a computer can interact with a non-digital system in a loop.

- Tune an FPGA, or a whole data-center, or just a physical computer.

- Make a drone fly somewhere.

- Design a selective toxin (or anti-toxin).

Or, you know, get more people to click on ads. All totally possible to automate.


Using real-life calculators to add? Calculate the flag. I don't think it is dead at all. It's like mixing in board game / escape room / science / engineering / medical research elements.

because they decided that running elasticsearch on your machine is a great idea!

BitLocker is generally useless unless the hardware is secure to begin with. While we have tons of 'boot guard' implementations that fuse the certificate into hardware, meaning only the OEM can create firmware that will boot, there have been at least two instances of these certificates leaking, exposing all hardware with that signature, plus other bypass methods (some boot guards are really 'flash' guards, where you can only flash signed firmware through the updater, but that doesn't stop you from writing the SPI BIOS chip directly).

I had someone demo to me preserving PCR values by patching an SMM module in firmware without triggering any BitLocker lockout. This also means you can externally write a BIOS containing the SMM module, as long as you have ~2 minutes to disassemble the laptop or desktop and flash the firmware.

This hurts the most when you don't have PIN authentication, which means you just need to steal the laptop to exfiltrate data. If you do have a PIN, you have to let the user boot once, which then drops a payload exfiltrating data over the network, or you steal the laptop a second time, since you can write the decryption keys back into a non-encrypted partition, or corrupt some sectors at the end of the disk and write them there.

* Modifying SMM allows you to patch the boot process, loading a malicious payload into the hypervisor/kernel.


It's only useless if you assume a perfectly capable attacker. That's not every attacker, though. We're not always up against a nation-state actor; in fact, some attackers are quite dilettante. I believe the assumption that if something doesn't defend against the most capable attacker it's useless, so we might as well not bother, is not helpful.

I know my bike lock can be cut within seconds by someone who is sufficiently skilled and determined. I'm still going to lock my bike.


Law enforcement? Stolen bags? State-sponsored agents? Those are the only times you should be worried, and it fails horribly at all of them.

What about employees smart enough to boot a laptop from a thumb drive but not smart enough to disassemble it, who just want admin rights to install some game from a dubious source? What about other scenarios neither you nor I can think of right now? The cost of activating BitLocker is so low, I'd do it just in case.

Also, I'd argue the stolen bag scenario usually features dilettante attackers.


I had to think about this and do some research, but bag theft actually is one of the ways companies are attacked these days.

> unless the hardware is secure to begin

Majority of hard disk encryption done in the HDD/SSD controller is 100 times more crap than BitLocker itself. It's littered with bugs and security vulns. Anybody using it is insane.


> Majority of hard disk encryption done in the HDD/SSD controller is 100 times more crap than BitLocker itself. It's littered with bugs and security vulns. Anybody using it is insane.

Oversimplified and not accurate. Some manufacturers had flawed implementations, others did not. Also, that was a long time ago. There are advantages to hardware encryption. It preserves performance and mitigates other vectors like cold-boot attacks without having to encrypt RAM, which also comes with a performance penalty. By the way, both software and hardware-based encryption can be combined. Cryptsetup on Linux actually offers this, and before you ask, the keys are split. If one is compromised, the other remains secure.


Do you have any citation about that on SSDs built after 2020?

I don’t think manufacturers with deliberately undocumented, nigh-impossible-to-inspect crypto get to claim their bugs are shallow and thus that the absence of evidence for bugs implies the absence of bugs.

Less emotionally but mostly equivalently, the expense and non-cryptographic skill requirements of breaking mass-storage crypto are quite high while the rewards are comparable to those from breaking much softer targets, so the absence of results since that one paper only changes my mind very slightly. Besides, we know plenty of examples of what these kinds of opaque, serious-business, pay-to-play environments produce: cellular crypto is an uninterrupted series of disasters, so is Wi-Fi, and the things that we do know about storage devices don’t point to an outstanding culture of cryptographic competence there either. Once you’ve done enough to slap an “OPAL” label on it (which says nothing about the internals), there’s just no competitive pressure to improve.

There is a right way to do all this, and it’s essentially what NICs do: allow the host to offload symmetric crypto to the device, but keep the results of said crypto accessible at any moment. And it’s not like there are even that many modes used in full-disk encryption, let alone ciphers.


So that's a long way of saying "no, I have no basis for my claims outside deciding that people I know nothing about are not competent", right?

It’s a way of saying that I consider the demand for post-2020 evidence to be cherry picking when there’s evidence from 2018 and little objective (cultural or economic) reason for things to have improved since then. A competent modern businessman will not pay for a competent worker in a very specific narrow field until there are consequences to not doing so (creating such consequences is the purpose of every compliance regime, for instance).

It’s also a way of saying that the entire approach taken by hardware disk encryption (unspecified crypto done inside the device in an unverifiable manner) has, with the benefit of hindsight, proven fundamentally flawed despite its reasonable appearance (in every system which had used it, not just storage), and I wish there was a way to pressure (consumer) storage vendors into going in a different direction. It is simply never a wise choice to trust people’s opaque crypto, however competent they are.


We're not talking about the HDD/SSD here; those are not really encryption but data-packing and compression algorithms. They added encryption because it's a single instruction and an extra talking point.

You can use VeraCrypt, which doesn't have any hardware attestation (convenience) features, but it still leaves you vulnerable to the same surface PIN+TPM is vulnerable to. The real defense is making it so that physically opening your laptop/desktop trips a latch that wipes the key off the system, requiring re-entry.

Of course, who wants to own a laptop/desktop that you can't open? We have enough of that with our phones.


QEMU uses TCG (a tiny code generator), not a one-to-one translator. It's designed to work across n architectures, which has its limits.

They will (and already have) replace low-performing senior developers because a single high-performance senior developer can do a lot more than they used to.

I have personally noticed this a lot: when multiple people work on the same problem, the more senior developers get way more mileage out of AI compared to those who are early in their careers.

Another difference I've noticed is how many agents one can keep running without losing awareness.

It generally just raised the bar on what management will expect from developers, which will result in a shrinking workforce. The only ones that will benefit are AI companies and upper management, since fewer employees means less management, so lower management will get screwed too.


> will result in a shrinking workforce

Jevons paradox is already rearing its head, I've seen data suggesting open roles in tech are at their highest since the post-pandemic slump [1]. If you're a senior leader at a company and your engineers are now capable of multiple-times more productivity, is the logical choice to fire half, or set way more ambitious goals? One assumes engineers are hired because their outputs are worth more than their cost. If outputs, at least for those capable of wielding new tools, are higher, so is the value of that employee to you.

The universal thing I'm hearing from friends at small-mid-size tech companies, and experiencing myself, is that there is way more work and demand for it from senior leaders than they're capable of with their current teams.

1: https://www.ciodive.com/news/tech-job-postings-hit-3-year-hi...


There are a limited number of things to work on; planning and orchestration become the bottleneck.

NYT reporters have recently been very disappointing, and they're starting to feel like people who managed to become relevant a long time ago but haven't kept up with recent changes, parroting things others have said instead of offering unique thoughts.

I found their recent investigative article, "How do stars pee at the Met Gala?", to be hard-hitting, yet fair to all sides. [1]

[1] https://archive.is/x9MSO

(You thought I was exaggerating about it being "investigative," dincha.)


Any media company which deliberately rids itself of everyone willing to speak vaguely positively of transsexual people may not be attracting the most free thinking writers.

Heh, makes me laugh. Just recently I was trying to get Play Protect 'certification' in a virtual machine. It took a bit of haggling and legitimately obtained Samsung software to bypass it (and a three-day gpt-5.5 /loop).

Google has proven time and time again that they don't want to make this technology foolproof, and I severely doubt this will be any different.

Although I do agree that hardware attestation as a captcha is pure bullshit no matter the context.


?referrer=123 still works, so I guess it's selective.

The AI safeguards are indeed a joke; you can get around their classifier by simply masking out all the unsafe words, and it will happily work on your rootkit.
