Hacker News | Sanzig's comments

Ugh. Yeah, this misses the point: not everyone wants their content archived. Of course, there are no feasible technical means to prevent this from happening, so robots.txt is a friendly way of saying "hey, don't save this stuff." Just because there's no technical barrier to archiving doesn't mean that you shouldn't respect someone's wishes.

It's a bit like going to a clothing optional beach with a big camera and taking a bunch of photos. Is what you're doing legal? In most countries, yes. Are you an asshole for doing it? Also yes.


If you want to go overboard, there's NOAA's Global Forecast System.

https://www.emc.ncep.noaa.gov/emc/pages/numerical_forecast_s...

Updated four times per day (00/06/12/18 UTC), with forecasts out to 384 hours (about 16 days). It's used as a core input for most weather forecasts.
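Grabbing the raw GRIB2 output is just an HTTP download from NOMADS. A minimal sketch, assuming the directory layout and file naming I remember (they do change occasionally, so verify against the NOMADS index page first):

    # Sketch: pull one GFS GRIB2 file straight off the NOMADS server.
    # The directory layout and file naming are from memory and may have
    # changed; treat them as assumptions and check the NOMADS index first.
    import datetime
    import urllib.request

    cycle = "00"    # GFS runs at 00, 06, 12 and 18 UTC
    day = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d")
    fhour = "024"   # forecast hour; files go out to 384 h

    url = (
        "https://nomads.ncep.noaa.gov/pub/data/nccf/com/gfs/prod/"
        f"gfs.{day}/{cycle}/atmos/gfs.t{cycle}z.pgrb2.0p25.f{fhour}"
    )

    # Each 0.25-degree file is a few hundred MB, and the latest cycle may
    # not be posted yet right after the run time.
    with urllib.request.urlopen(url) as resp, open("gfs_f024.grib2", "wb") as out:
        out.write(resp.read())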


ECMWF also offers free forecast charts on their website. It's a bit more modern.

Eh, the problem with AES is side-channel attacks in software implementations. They aren't necessarily obvious, especially if you're deploying on an oddball CPU architecture.

This standard targets hardware without AES accelerators, like microcontrollers. Now, realistically, ChaCha20-Poly1305 is probably a good fit for most of those use cases, but it's not a NIST standard, so from the NIST perspective it doesn't apply. Ascon is also a fair bit faster than ChaCha20 on resource-constrained hardware (1.5x-3x, based on some benchmarks I found online).
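To make the side-channel point concrete, this is the kind of construct that bites you in a table-based AES implementation (a toy illustration, not code from any real library):

    # Toy illustration, not real AES: the table index depends on secret key
    # material, so which cache line gets touched (and how long the access
    # takes) is secret-dependent. That is the classic cache-timing leak.
    SBOX = list(range(256))  # stand-in for the real AES S-box

    def leaky_sub_byte(pt_byte: int, key_byte: int) -> int:
        return SBOX[pt_byte ^ key_byte]  # secret-dependent memory access

    # Constant-time designs (ChaCha20, Ascon) avoid this by using only adds,
    # rotations and XORs, or bitsliced logic, never secret-indexed tables.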


Knowing how much of a dog SHA-3 is due to its sponge construction basis, it’s superficially surprising to me to see a sponge-based LWC algorithm.

I guess we’ve had quite a few years to improve things.


SHA-3 is fast when implemented in hardware.

Its slowness in software and speed in hardware have almost nothing to do with it being sponge-based; they are caused by the Boolean functions executed by the Keccak algorithm, which are easy to implement in hardware but need many instructions on most older CPUs (and far fewer instructions on Armv9 CPUs or AMD/Intel CPUs with AVX-512).

The sponge construction is not inherently slower than the Merkle–Damgård construction. One could reuse the functions iterated by SHA-512 or SHA-256 and reorganize them for use in a sponge-based algorithm, obtaining speeds similar to the standard algorithms.

That is not done because a sponge construction works best with a mixing function that takes a single wide input, rather than the two narrower inputs used in Merkle–Damgård. It is therefore better to design the function from the start for use inside a sponge construction, instead of trying to adapt functions intended for other purposes.
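For anyone who hasn't seen it spelled out, the sponge structure itself is tiny. A toy sketch (the permutation here is a meaningless placeholder, not Keccak):

    # Toy sponge: absorb rate-sized blocks into a state, permute, squeeze.
    # The permutation below is a meaningless placeholder, not Keccak; it
    # only shows the structure.
    RATE, CAPACITY = 8, 24          # bytes (real Keccak uses a 200-byte state)
    STATE_BYTES = RATE + CAPACITY

    def toy_permutation(state: bytes) -> bytes:
        # Placeholder mixing step (insecure); Keccak's theta/rho/pi/chi/iota
        # rounds would go here.
        rotated = state[1:] + state[:1]
        return bytes((a ^ b ^ 0xA5) & 0xFF for a, b in zip(state, rotated))

    def toy_sponge_hash(msg: bytes, out_len: int = 16) -> bytes:
        msg += b"\x01" + b"\x00" * (-(len(msg) + 1) % RATE)  # simple padding
        state = bytes(STATE_BYTES)
        for i in range(0, len(msg), RATE):                   # absorb
            block = msg[i:i + RATE] + bytes(CAPACITY)
            state = toy_permutation(bytes(a ^ b for a, b in zip(state, block)))
        out = b""
        while len(out) < out_len:                            # squeeze
            out += state[:RATE]
            state = toy_permutation(state)
        return out[:out_len]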


Speck? To my knowledge there aren't any serious flaws despite a lot of public cryptanalysis. I think what sank Speck was that it came out a few years after the Dual_EC_DRBG fiasco and nobody was ready to trust an NSA-developed cipher yet - which is fair enough. The NSA burned their credibility for decades with Dual_EC_DRBG.

Speck uses fewer resources to implement and, when I have tested it, is faster than Ascon.

I think the biggest problem is how they went about trying to standardize it back in the day.


I mean, yeah, but also Simon and Speck aren't as good as the new generation of low-footprint designs like Ascon and Xoodyak. We know more about how to do these things now than we did 15 years ago.

In what ways is it better? Security margin or something? I thought Speck has held up pretty well to cryptanalysis (unlike you I'm not in the security field so maybe I'm wrong).

I quite liked the remarkable simplicity of Speck. Performance was better than Ascon in my limited testing. It seems like it should be smaller on-die or in bytes of code, and with possibly lower power consumption. And round key generation was possible to compute on-the-fly (reusing the round code!) for truly tiny processors.
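For a sense of just how small it is, here's a sketch of Speck64/128 from memory of the paper (27 rounds, rotations by 8 and 3; verify against the published test vectors before trusting it):

    # Speck64/128 from memory of the paper: 27 rounds, rotations by 8 and 3,
    # 32-bit words. Check against the published test vectors before use.
    MASK = 0xFFFFFFFF
    ROUNDS = 27

    def ror(v, r): return ((v >> r) | (v << (32 - r))) & MASK
    def rol(v, r): return ((v << r) | (v >> (32 - r))) & MASK

    def speck_round(x, y, k):
        x = ((ror(x, 8) + y) & MASK) ^ k
        y = rol(y, 3) ^ x
        return x, y

    def expand_key(k0, l):
        # 128-bit key split into four 32-bit words (l2, l1, l0, k0).
        # The schedule reuses speck_round with the round index as the "key",
        # which is what makes on-the-fly round key generation so cheap.
        keys = [k0]
        for i in range(ROUNDS - 1):
            l[i % 3], k_next = speck_round(l[i % 3], keys[i], i)
            keys.append(k_next)
        return keys

    def encrypt(x, y, round_keys):
        for k in round_keys:
            x, y = speck_round(x, y, k)
        return x, y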


Makes sense! Also, how does Speck fare against power-analysis side-channel attacks vs Ascon? My understanding was that resistance to those was also one of the NIST criteria.

I am way out of my depth both on power consumption and leakage, but presumably Ascon does better on both counts than Chapoly.

Really, ChaCha seems trivially implementable without leaking anything.
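The core of it is just the quarter round, which is nothing but 32-bit adds, XORs and fixed-distance rotations, so there are no secret-indexed tables or data-dependent branches to leak through timing:

    # The ChaCha quarter round: nothing but 32-bit adds, XORs and
    # fixed-distance rotations, so there are no secret-dependent branches
    # or table lookups to leak through timing or the cache.
    MASK = 0xFFFFFFFF

    def rotl32(v, r):
        return ((v << r) | (v >> (32 - r))) & MASK

    def quarter_round(a, b, c, d):
        a = (a + b) & MASK; d = rotl32(d ^ a, 16)
        c = (c + d) & MASK; b = rotl32(b ^ c, 12)
        a = (a + b) & MASK; d = rotl32(d ^ a, 8)
        c = (c + d) & MASK; b = rotl32(b ^ c, 7)
        return a, b, c, d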

> Even the tiniest MCU can typically perform more than one cryptographic operation per second. If your MCU has any cycles to spare at all it usually has enough cycles for cryptography.

Well, no. If you can do 1 AES block per second, that's a throughput of a blazing fast 16 bytes per second.

I know that's a pathological example, but I do understand your point - a typical workload on an MCU won't have to do much more than encrypt a few kilobytes per second for sending some telemetry back to a server. In that case, sure: ChaCha20-Poly1305 and your job is done.

However, what about streaming megabytes per second, such as an uncompressed video stream? In that case, lightweight crypto may start to make sense.
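For the telemetry-sized case above, something like the pyca/cryptography AEAD interface really is all it takes on the collecting side (the key handling and nonce scheme below are simplified placeholders):

    # Sketch of the "few kilobytes of telemetry" case using ChaCha20-Poly1305
    # through the pyca/cryptography AEAD API. Key provisioning and the nonce
    # scheme are simplified placeholders; never reuse a nonce with the same key.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()   # 32-byte key, provisioned per device
    aead = ChaCha20Poly1305(key)

    nonce = os.urandom(12)                  # 96-bit nonce, unique per message
    telemetry = b'{"temp_c": 21.4, "rssi": -71}'
    ciphertext = aead.encrypt(nonce, telemetry, b"device-42")  # AAD is optional

    assert aead.decrypt(nonce, ciphertext, b"device-42") == telemetry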


1 operation per second would refer to cryptographic signatures. If you are doing ChaCha, the speeds are more like 1 Mbps. AES is probably closer to 400 kbps.

An uncompressed video stream at 240p, 24 frames per second is about 60 Mbps, not really something an IoT device can handle. And if the video is compressed, decompression is going to be significantly more expensive than AES - adding encryption is not a meaningful computational overhead.
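Back-of-the-envelope, the uncompressed figure checks out; the exact number just depends on which pixel format you assume:

    # Back-of-the-envelope for uncompressed 240p at 24 fps; the result
    # depends on which pixel format you assume.
    width, height, fps = 320, 240, 24
    for name, bytes_per_px in [("YUV 4:2:2", 2), ("RGB24", 3), ("RGBA32", 4)]:
        mbps = width * height * bytes_per_px * fps * 8 / 1e6
        print(f"{name}: {mbps:.0f} Mbps")   # roughly 30 to 60 Mbps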


>Even the tiniest MCU can typically perform more than one cryptographic operation per second. If your MCU has any cycles to spare at all it usually has enough cycles for cryptography.

>1 operation per second would refer to cryptographic signatures. If you are doing ChaCha, the speeds are more like 1 Mbps. AES is probably closer to 400 kbps.

It sounds to me like you, sir or madam, have not worked with truly tiny MCUs. :-)

But yes, there are inexpensive MCUs where you can do quite a bit of crypto in software at decent speeds.


Why would you compare an uncompressed video stream to anything in this discussion? Especially at such a small frame size by modern video standards.

Modern encrypted streaming uses pre-existing compressed video where the packets are encrypted on the way to you by the streaming server. It has to uniquely encrypt the data being sent to every single user hitting that server, so it's not just a one-and-done type of thing: it is every bit of data for every user, which adds up to a lot of server-side CPU spent on encryption. Yes, on the receiving side, where your device is only dealing with the one stream, more CPU cycles will be spent decompressing the video than decrypting it. But again, that's only half of the encrypt/decrypt cycle.


If it's compressed, you don't need to decompress it first on the receiving end.

The device doing the decryption may not be the same device that does the decompression.

Eg a small edge gateway could be doing the VPN, while the end device is decoding the video.


A VPN's encryption is different from a streaming platform's encryption. The streamer's encryption is their form of rights management, so the device/app decompressing the video very much is the point of decryption. If not, the rights management is very broken. If the small edge gateway is somehow decrypting the video stream, you'd have a very rogue device that lots of people would be curious to learn more about.

CCTV doesn’t need rights management. Nor do advertising billboards.

This whole framing is weird, because you can't spend $0.50 per already-deployed part to upgrade to something that can viably do AES.

Sometimes, the little guy does win, but only after a lengthy court battle: https://en.wikipedia.org/wiki/Nissan_Motors_v._Nissan_Comput...

The first P25 standards came out in 1989, so encrypted police radios were certainly starting to be deployed in the early 90s. Obviously, adoption rate depended on the department budget, with many rural departments taking until the 2010s to finally switch.


A couple of bright physics grad students could build a nuclear weapon. Indeed, the US Government actually tested this back in the 1960s: they had a few freshly minted physics PhDs design a fission weapon with no exposure to anything but the open literature [1]. Their design was analyzed by nuclear weapons scientists with the AEC (the DoE's predecessor), who determined it would most likely have worked if built and fired.

And this was in the mid-1960s, when the participants had to trawl through paper journals in the university library and perform their calculations with slide rules. These days, with the sum total of human knowledge at one's fingertips, multiphysics simulation, and open-source Monte Carlo neutronics solvers? Even more straightforward. It would not shock me if, were the experiment repeated today, the participants came out with a workable two-stage design.

The difficult part of building a nuclear weapon is and has always been acquiring weapons grade fissile material.

If you go the uranium route, you need a very large centrifuge complex with many stages to get to weapons grade - far more than you need for reactor grade, which makes it hard to have plausible deniability that your program is just for peaceful civilian purposes.

If you go the plutonium route, you need a nuclear reactor with on-line refueling capability so you can control the Pu-239/240 ratio. The vast majority of civilian reactors cannot be refueled online, with the few exceptions (e.g., CANDU) being under very tight surveillance by the IAEA to avoid this exact issue.

The most covert path to weapons grade nuclear material is probably a small graphite or heavy water moderated reactor running on natural uranium paired up with a small reprocessing plant to extract the plutonium from the fuel. The ultra pure graphite and heavy water are both surveilled, so you would probably also need to produce those yourself. But we are talking nation-state or megalomaniac billionaire level sophistication here, not "disgruntled guy in his garage." And even then, it's a big enough project that it will be very hard to conceal from intelligence services.

[1] https://en.wikipedia.org/wiki/Nth_Country_Experiment


> The difficult part of building a nuclear weapon is and has always been acquiring weapons grade fissile material.

IIRC the argument in the McPhee book is that you'd steal fissile material rather than make it yourself. The book sketches a few scenarios in which UF6 is stolen off a laxly guarded truck (and recounts an accident where some ended up in an airport storage room by error). If the goal is not a bomb but merely to harm a lot of people, it suggests stealing minuscule quantities of plutonium powder and then dispersing it into the ventilation systems of your choice.

The strangest thing about the book is that it assumes a future proliferation of nuclear material as nuclear energy becomes a huge part of the civilian power grid, and extrapolates that the supply chain will be weak somewhere, sometime. But that proliferation never really came to pass, and to my understanding there's less material circulating around American highways now than there was in 1972 when the book was published.


The other thing is the vast majority of UF6 in the fuel cycle is low-enriched (reactor grade), so it's not useful for building a nuclear weapon. Access to high-enriched uranium is very tightly controlled.

You can of course disperse radiological materials, but that's a dirty bomb, not a nuclear weapon. Nasty, but orders of magnitude less destructive potential than a real fission or thermonuclear device.


I have two Tapo units at home, they seem to be working fine without an internet connection.

I created a new subnet and an associated WiFi SSID for it, connected the Tapo cameras, and set them up to act as RTSP cams. I then firewalled the subnet off from anything other than my Frigate NVR server and gateway. They still work fine, they are streaming video to Frigate without complaint. Maybe because they have DNS from my gateway still? (I should probably block that off, it's a common data exfil vector).

Very annoying that internet connectivity is required for initial setup, I'll agree there. They could have just had a bare bones web interface.
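For anyone replicating this, a quick way to sanity-check that a camera is reachable as a plain RTSP source from the NVR subnet (the IP, credentials and stream path below are placeholders; substitute whatever your camera account actually uses):

    # Quick check that a camera is reachable as a plain RTSP source from the
    # NVR subnet. The IP, credentials and "/stream1" path are placeholders;
    # substitute whatever your camera account actually uses.
    import cv2  # pip install opencv-python

    RTSP_URL = "rtsp://camuser:campass@192.168.50.10:554/stream1"

    cap = cv2.VideoCapture(RTSP_URL)
    ok, frame = cap.read()
    cap.release()

    if ok:
        print(f"Got a frame: {frame.shape[1]}x{frame.shape[0]}")
    else:
        print("No frame; check firewall rules, credentials, or stream path")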


Interstellar space is pretty empty, and we have good models for it thanks to the radio astronomy community. Dispersion is low enough to be nearly negligible, even over tens of light years.

Determining theoretical interstellar link rates is a fairly straightforward link budgeting exercise, easier in fact than most terrestrial link calculations because you don't have multipath to worry about.
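A minimal sketch of what that exercise looks like: free-space path loss plus Shannon capacity. Every number here (power, antenna gains, distance, bandwidth, noise temperature) is an arbitrary assumption, chosen only to show the shape of the calculation:

    # Free-space path loss plus Shannon capacity. Every number here (power,
    # antenna gains, distance, bandwidth, noise temperature) is an arbitrary
    # assumption, chosen only to show the shape of the calculation.
    import math

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    c = 2.998e8             # speed of light, m/s
    ly = 9.4607e15          # metres per light year

    f = 8.4e9               # Hz (X-band-ish)
    d = 10 * ly             # 10 light years
    p_tx_dbw = 20           # 100 W transmitter
    g_tx_db = 70            # big dish on the transmit side
    g_rx_db = 70            # big dish on the receive side
    bandwidth = 1.0         # Hz, very narrowband
    t_sys = 20              # K system noise temperature

    fspl_db = 20 * math.log10(4 * math.pi * d * f / c)
    p_rx_dbw = p_tx_dbw + g_tx_db + g_rx_db - fspl_db
    noise_dbw = 10 * math.log10(k_B * t_sys * bandwidth)
    snr_db = p_rx_dbw - noise_dbw

    capacity_bps = bandwidth * math.log2(1 + 10 ** (snr_db / 10))
    print(f"FSPL {fspl_db:.0f} dB, SNR {snr_db:.1f} dB, "
          f"Shannon limit {capacity_bps:.3f} bit/s")

With those (generous) assumptions you land in the vicinity of a few hundredths of a bit per second.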

