
(Notwithstanding that this is a joke) Maybe it's just me, but I read this as a solution that would be implemented internally at a large company to distribute pain/accountability/tech-debt across time to a team that might have high turnover. I.e., a way to align incentives by punishing teams that have bombs in their code (via their metrics) before the bomb actually detonates.

I heard a second-hand story about some team at Google who did this, and named the variable something like:

  I_ACKNOWLEDGE_THAT_THIS_CODE_WILL_PERMANENTLY_BREAK_ON_2022_09_20_WITHOUT_SUPPORT_FROM_TEAM_X=1
a year before the deadline. I would be mildly amused by adding

  _AND_MY_USER_ID_IS="<user_id>"
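
For the curious, a minimal sketch of what such a check might look like (the variable name comes from the story; the deadline logic and error messages are my own invention):

  import os
  from datetime import date

  DEADLINE = date(2022, 9, 20)
  ACK_VAR = ("I_ACKNOWLEDGE_THAT_THIS_CODE_WILL_PERMANENTLY_BREAK"
             "_ON_2022_09_20_WITHOUT_SUPPORT_FROM_TEAM_X")

  # Hard-fail once the deadline passes, regardless of acknowledgement.
  if date.today() >= DEADLINE:
      raise RuntimeError("This code permanently broke on 2022-09-20; talk to team X.")

  # Before the deadline, force callers to opt in explicitly, so the
  # acknowledgement shows up in their configs (and their metrics).
  if os.environ.get(ACK_VAR) != "1":
      raise RuntimeError(f"Set {ACK_VAR}=1 to keep using this code until the deadline.")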

FYI, “cope” is closer to “a delusion used to help you cope with reality” than to “a superficial fix”

Also, I think that some strategies, such as “being comfortable asking a parent for help navigating a situation”, are timeless defenses against strategies like blackmail. There are probably some street smarts that change and some that stay the same.


Well yes, street smarts are both.

It's a temporary solution based on the delusion that you can't work on a systemic level to reduce criminal or thuggish behavior.

Ultimately I do think some form of self defense is good to know, but you can't expect it to be more than situationally effective.


Why wouldn’t your eye lens focus LIDAR photons from the same source onto a small region of your retina, in the same way that a phone camera lens focuses same-origin photons onto a few pixels?

Sorry if this is a silly question, I honestly don’t have the greatest understanding of EM.


It's incredibly important to understand that eyes and glass have different optical properties at these wavelengths. It's hard to conceptualize because to us clear is clear, but that's only true for visible light. The same way that x-rays and infrared and other spectra can show things human eyes can't see, or can't see things that visible light can, it's a two-dimensional problem: the medium and the wavelength are both at play. So, when you have the eye, which is known to absorb such light, and artificial optics, which are known to pass it without much obstruction, they're going to behave like opposites. Imagine if the glass/plastic they used in the car blocked the light. Wouldn't really work.

There is a flip side to this, though. Quick searches suggest that the same absorption by water that makes the wavelength safer for the eye (the energy is dissipated before it reaches the retina) also makes it perform worse in rain and fog. I think a scarier concept is a laser that can penetrate water (remember, humans are mostly bags of salt water), which could, maybe, potentially, cause bad effects.


Depends on the wavelength of lidar. Near IR lidars (850 nm to 940 nm, like Ouster, Waymo, Hesai) will be focused to your retina whereas 1550 nm lidars (like Luminar, Seyond) will not be focused and have trouble penetrating water, but they are a lot more powerful so they instead heat up your cornea. To quote my other comment [1]:

> If you have many lidars around, the beams from each 905 nm lidar will be focused to a different spot on your retina, and you are no worse off than if there was a single lidar. But if there are many 1550 nm lidars around, their beams will have a cumulative effect at heating up your cornea, potentially exceeding the safety threshold.

[1] https://news.ycombinator.com/item?id=46127479


Follow-up question that you might know the answer to: would multiple LIDAR sensors actually be additive like that? If you can stand a foot away from a car's LIDAR sensor and be unharmed, then can't you have:

  | Distance (ft) | # of Sensors |
  | 1             |            1 |
  | 3             |            9 |
  | 5             |           25 |
  | 10            |          100 |
  | 25            |          625 |
  | 50            |         2500 |
  | 100           |        10000 |
x^2 sensors at x feet from you and have the same total energy delivered? If sensors are actually safe to look at from 6 in or 3 in, then multiply the above table by 4 or 16.

It seems like, due to the inverse square law, the main issue is how close you can get your eye to a LIDAR sensor under normal operation, not how many sensors are scattered across the environment. The one exception I can think of is a car that puts multiple LIDAR arrays next to each other (within a foot or two). But maybe I'm misunderstanding something!
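
To make the arithmetic concrete, here's a toy sketch of that intuition. It treats every sensor as a bare point source with a made-up power value and only models inverse-square falloff (no beam shape, focusing, or wavelength effects):

  POWER = 1.0  # hypothetical per-sensor output, arbitrary units

  def irradiance(power, distance_ft):
      # Irradiance falls off with the square of distance (arbitrary units).
      return power / distance_ft ** 2

  baseline = irradiance(POWER, 1)  # one sensor at 1 foot
  for x in (1, 3, 5, 10, 25, 50, 100):
      total = x ** 2 * irradiance(POWER, x)  # x^2 sensors, each x feet away
      print(f"{x:>4} ft, {x**2:>6} sensors -> {total:.2f} vs baseline {baseline:.2f}")

Every row comes out equal to the baseline, which is the point of the table above.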


Do you know if there has been any work on how lasers affect other animals and insects?

Am I being catastrophically pessimistic to think that in addition to swatting insects as it moves forward, the car’s lidar is blinding insects in a several-hundred-meter path?

I’m very optimistic about automated cars being better than most humans but wonder about side effects.


If we had automated anti-mosquito vehicles just roaming around, the world would be a better place. There might be some second-order effects from removing mosquitoes that we haven't predicted, but fuck mosquitoes.

Unfortunately not all insects are mosquitoes, and one reason we have many fewer birds in (e.g.) the UK than when I was young, is the decline of insect life.

GP is slightly wrong. IIRC those problematic LIDARs are operating at higher power than traditionally allowed, with the justification that the wavelength being used is significantly less efficient at damaging human eyes, and is therefore safe enough at those powers, which is likely true enough. But it turned out that camera lenses are generally more transparent at that wavelength than our eyes, and therefore the justification doesn't apply to them.

Amusingly the lenses are worse than silicon at transmitting that wavelength.

1550nm might be worse for sensors because a good portion of the light is only being dumped into the metal layers - pure silicon is mostly transparent to 1550nm. Not sure how doped silicon would work. I can tell you that 1070nm barely works on an IQ3 Achromatic back…

https://www.pmoptics.com/silicon.html


Your eyes have a much larger sensor area than the opening; they do the opposite of concentrating light into a small area.

A point source in the visual field will create a point image on the retina. The "sensor area" you're referring to is what's necessary to capture the entire visual field simultaneously.

I disagree that it’s a point source at distances of peak concern.

Also, it’s something of a nitpick but physically point sources still end up as a circle.


That's fair, strictly speaking, but I'm not sure there's a meaningful difference to be made.

Wasn't sure what level of knowledge you were coming from re: PSFs, so I was keeping it basic.


My understanding is that this non-reciprocity is why international law often feels so permissive of seemingly bad actions. It generally aims to forbid only strategies that are both highly destructive and ineffective at winning wars. The idea is that such actions are never necessary in warfare, rather than the bans being a coordinated and mutual choice to leave effective strategies on the proverbial table.

This non-reciprocity is also why many such laws come with large conditional statements. For example, hospitals are typically illegal targets. However, you cannot label a military outpost a hospital as a loophole. There is a gray area in between, where the law is generally more permissive than a layperson might expect.

It is unclear if these laws accomplish this goal in all circumstances. A smaller, modern army attempting to hide might not be able to find non-civilian concealment (the way, e.g., the jungle served in the Vietnam War), and there is probably a conversation to be had about the (unfortunate) effectiveness of inflicting civilian damage on an enemy's will to fight and economic output. However, the above is my best understanding of what international law sets out to do.

Disclaimer: I asked AI to evaluate the above comment before posting, and it made the following (paraphrased) criticisms that you might want to consider:

- The primary purpose of IHL (international humanitarian law) is to distinguish civilian from military, not to only ban what doesn't work. Hence, the banning of chemical weapons and landmines.

- The hospital example is better framed as a requirement to distinguish between a civilian hospital and a military target

- Non-reciprocity has the advantage of being simpler to obey (the legal analysis does not depend on the enemy's past actions)


I prefer the `price = value = relative wealth != wealth = resources` paradigm. Thus, wars destroy wealth and tech advances create wealth, but that's just me

I was under the impression that most supply chain attacks target source code, not binaries, especially for large projects like OpenBSD.

Does StageX audit source code to the same extent that OpenBSD does? If not, then how would you compare the downgrade in security due to less code auditing vs the reassurance of reproducible builds?

Or, how would you compare StageX with Gentoo, in which the entire system is installed from source? Sure, you have to trust your initial installer, but how could I get a StageX system set up without first having access to a computer with some software installed? If we're at the point where we're worried that every Haskell program that has ever been compiled is owned, then I wonder why I should trust any software that might install StageX onto my computer, or the underlying hardware for that matter.


The Haskell compiler creates a slightly different output every time you compile a program[1]. This makes it difficult to ensure that the binary offered for download is actually malware-free. If it were easy to check, then you could rest easy, assuming that someone out there is doing the check for you (and it would be big news if malware was found).
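
To illustrate what the check would look like if builds were reproducible (file names here are hypothetical): rebuild from source yourself, or trust that someone else has, and compare hashes against the published binary.

  import hashlib
  from pathlib import Path

  def sha256(path):
      return hashlib.sha256(Path(path).read_bytes()).hexdigest()

  official = sha256("ghc-x.y-official.tar.xz")  # the binary offered for download
  rebuilt = sha256("ghc-x.y-rebuilt.tar.xz")    # your own build from the same source

  if official == rebuilt:
      print("Match: the published binary is exactly what the source produces.")
  else:
      print("Mismatch: either the build is non-deterministic or something was inserted.")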

If you're a hardened security person, then the conversation continues, and the term "bootstrap" becomes relevant.

Since you do not trust compiled binaries, you can compile programs yourself from the source code (where malware would be noticed). However, in order to compile the Haskell compiler, you must have access to a (recent) version of the Haskell compiler. So, version 10 of the compiler was built using version 9, which was built using version 8, etc. "Bootstrapping" refers (basically) to building version 1. Currently, version 1 was built approximately with smart people, duct tape, and magic. There is no way to build version 1; you must simply download it.

So if you have high security requirements, then you might fear that years ago, someone slipped malware into version 1 of the Haskell compiler which will "self-replicate" into every compiler that it builds.

Until a few years ago, this was a bit of a silly concern (most software wasn't reproducible) but with the rise of Nix and Guix, we've gotten a lot closer to reproducible-everything, and so Haskell is the odd-one-out.

[1] The term is "deterministic builds" or "reproducible builds". Progress is being made to fix this in Haskell.


From 9.12, -fobject-determinism[1] will guarantee deterministic objects.

If it ever doesn't, do open a bug report[2]

[1] https://downloads.haskell.org/ghc/latest/docs/users_guide/us...

[2] https://gitlab.haskell.org/ghc/ghc/-/issues


Good to know! Half the battle covered then.


Unlike Nix and Guix, Stagex goes much further in that it has a 100% mandate on supply chain integrity. It trusts no single maintainer or computer and disallows any binary blobs. It is thus not possible to package any software that cannot be bootstrapped, reproduced, and signed by at least two maintainers.

Haskell and Ada are the only languages not possible for us to support, or any software built with them.

Everything else is just fine though.

I do hope both languages address this though, as it is blocking a lot of important open source software like pandoc or coreboot from being used in security critical environments.


How are you bootstrapping a modern C compiler without an existing C/C++ compiler and linker?


From 180 bytes of human readable machine code all the way up.

https://codeberg.org/stagex/stagex/src/branch/main/packages/...


In assembly, like stage0 does: https://github.com/oriansj/stage0


Technically it is raw x86 machine code in hexadecimal, a scheme called "hex0"
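
Roughly, the seed is hand-auditable hex that a trivially simple program turns into raw bytes. The real hex0 seed is itself machine code, so this Python stand-in (with an invented comment syntax) only illustrates the transformation, not the actual stage0 format:

  def hex0_assemble(source):
      out = []
      for line in source.splitlines():
          line = line.split(";", 1)[0]      # strip end-of-line comments
          for token in line.split():
              out.append(int(token, 16))    # each token is one hex byte
      return bytes(out)

  seed = """
  B8 3C 00 00 00  ; mov eax, 60  (exit syscall on x86-64)
  31 FF           ; xor edi, edi (status 0)
  0F 05           ; syscall
  """
  print(hex0_assemble(seed).hex(" "))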


I understood this as a tool to fight bot-net scraping. I imagined that this would add accountability to clients for how many requests they make.

I know that phrasing it like "large company cloudflare wants to increase internet accountability" will make many people uncomfortable. I think caution is good here. However, I also think that the internet has a real accountability problem that deserves attention. I think that the accountability problem is so bad that some solution is going to end up getting implemented. That might mean that the most pro-freedom approach is to help design the solution, rather than avoiding the conversation.

Bad ideas:

You're getting lots of bot requests, so you start demanding clients login to view your blog. It's anti-user, anti-privacy, very annoying, readership drops, everyone is sad.

Instead, what if your browser included your government id in every request automatically? Anti-user, anti-privacy, no browser would implement it.

This idea:

But ARC is a middle ground. Subsets of the internet band together (in this case, via cloudflare) and strike a compromise with users. Individual users need to register with cloudflare, and then cloudflare gives you a million tokens per month to request websites. Or some scheme like this. I assume that it would be sufficiently pro-social that the IETF and browsers all agree to it and it's transparent & completely privacy-respecting to normal users.

We already sort of have some accountability: it's "proof of bandwidth" and "proof of multiple unique ip addresses", but that's not well tuned. In fact, IP addresses destroy privacy for most people, while doing very little to stop bot-nets.


> Individual users need to register with cloudflare, and then cloudflare gives you a million tokens per month to request websites. Or some scheme like this.

This seems like it would just cause the tokens to become a commodity.

The premise is that you're giving out enough for the usage of the large majority of people, but how many do you give out? If you give out enough for the 95th percentile of usage then 5% of people -- i.e. hundreds of millions of people in the world -- won't have enough for their normal usage. Which is the first problem.

Meanwhile 95% of people would then have more tokens than they need, and the tokens would be scarce, so then they would sell the ones they're not using. Which is the second problem. The people who are the most strapped for cash sell all their tokens for a couple bucks but then get locked out of the internet.

The third problem is that the AI companies would be the ones buying them, and since the large majority of people would have more than they need, they wouldn't be that expensive, and then that wouldn't prevent scraping. Unless you turn the scarcity way up and make the first and second problems really bad.


I think the idea would be that you ask your credit card to convert $10 into 10 untraceable tokens, and then spend them one at a time. You do a handshake dance with the credit card company so you walk away with tokens that only you know, and you have assurance that the tokens are in the same pool as everyone else who asked for untraceable tokens from that credit card company.

Then you can go and spend them freely. The credit card company (and maybe even third parties?) can verify that the tokens are valid, but they can't associate them with a user. Assuming that the credit card company keeps a log, they can also verify that a token has never been used before.

In some sense, it's a lightweight and anonymous blockchain.
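
To be clear, I don't know what scheme Cloudflare would actually use; the classic building block for "valid but unlinkable" tokens is a Chaum-style blind signature (Privacy Pass and friends use modern variants). A toy sketch with insecure, illustration-only RSA parameters:

  import hashlib
  import secrets
  from math import gcd

  # Issuer's toy RSA key (absurdly small; real systems use vetted libraries).
  p, q = 61, 53
  n = p * q
  e = 17
  d = pow(e, -1, (p - 1) * (q - 1))

  def h(msg):
      return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

  # Client: make a secret token and blind it before sending it to the issuer.
  token = secrets.token_bytes(16)
  r = secrets.randbelow(n - 2) + 2
  while gcd(r, n) != 1:
      r = secrets.randbelow(n - 2) + 2
  blinded = (h(token) * pow(r, e, n)) % n   # the issuer only ever sees this

  # Issuer: sign the blinded value (e.g. after charging the card).
  blind_sig = pow(blinded, d, n)

  # Client: unblind to get a valid signature on the token the issuer never saw.
  sig = (blind_sig * pow(r, -1, n)) % n

  # Anyone with the public key: verify, without linking it back to the purchase.
  assert pow(sig, e, n) == h(token)
  print("token verifies; issuance and redemption are unlinkable")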

