
For irreversible computation, yup. If reversible computation turns out to be practical, things are a little weirder, and the bounds are... further away.

Stuff like:
https://en.wikipedia.org/wiki/Margolus%E2%80%93Levitin_theor...
https://en.wikipedia.org/wiki/Bremermann%27s_limit

The Landauer limit is dependent on temperature, though. If you fast forward the universe a little bit to let the CMB cool down, even irreversible computing can get pretty efficient in theory.
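A back-of-the-envelope sketch of that temperature dependence (hypothetical snippet; the Landauer bound is k_B * T * ln 2 per irreversibly erased bit, and the temperatures are just illustrative numbers):

    using System;

    // Minimum energy to erase one bit at a given temperature, per Landauer's principle.
    const double BoltzmannJPerK = 1.380649e-23;
    double LandauerJoulesPerBit(double kelvin) => BoltzmannJPerK * kelvin * Math.Log(2);

    Console.WriteLine(LandauerJoulesPerBit(300.0)); // ~2.9e-21 J/bit at ~room temperature
    Console.WriteLine(LandauerJoulesPerBit(2.725)); // ~2.6e-23 J/bit at today's CMB temperature
    Console.WriteLine(LandauerJoulesPerBit(0.01));  // ~9.6e-26 J/bit if the CMB cools to ~0.01 K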


I think a lot of people might be missing some cultural context, or something. At the risk of killing the joke, a bunch of this post is worded as it is specifically for the opportunities for absurdity. It's an excuse to write sentences like:

Recall that the waluigi simulacra are being interrogated by an anti-croissant tyranny.

The post is also trying to make an actual point, but while having fun with it.

When you read "... and as literary critic Eliezer Yudkowsky has noted..." just place your tongue firmly in your cheek.


>Most people would not be able to procreate - the world would overcrowd.

This isn't quite true. Suppose no one dies. Starting from a population of 10 billion, suppose every person can have one child (with two parents per child, that means each new generation is half the size of the one before it).

The population after one iteration is 15 billion. If those 5 billion new people then each have one child, the next iteration's population is 17.5 billion. This converges to 2x the original population. In fact, so long as the average stays below two children per couple, the population converges to a finite value.
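A quick toy loop for the arithmetic (purely illustrative; two parents per child, nobody dies):

    using System;

    double total = 10e9;         // starting population
    double newGeneration = 10e9; // people who haven't had their one child yet
    for (int generation = 1; generation <= 50; ++generation)
    {
        newGeneration /= 2;      // each pair of 'new' people has one child
        total += newGeneration;
        if (generation <= 3)
            Console.WriteLine($"after generation {generation}: {total / 1e9:f2} billion"); // 15.00, 17.50, 18.75
    }
    Console.WriteLine($"limit: {total / 1e9:f2} billion"); // converges to 20.00, i.e. 2x the original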

(Edit: also, this problem takes place on the timescale of HUNDREDS or THOUSANDS of years. The year 2500 is not going to look like today.)

>The really bad people of history could find themselves in permanent long-lasting power (think Putin for a modern example).

In many cases, dictatorships do not end with the dictator's old age. This is true both in the sense that dictators sometimes die early, and in the sense that dictatorships are prone to dynasties. The system of a dictatorship already has agelessness, just not indestructibility.

>I think you can come up with 100 reasons why curing death would be bad.

You can indeed do that, but it's worth keeping the screaming torture of the default in perspective. It's really hard to do worse than aging. I wouldn't want to see my parents wither, suffer, and die. I don't want to die knowing my children would wither, suffer, and die. I don't want you to wither, suffer, and die. I don't want people to have to grieve the loss of their loved ones.

We can just... get rid of the ultrabadness, and have more of the good things. There are new problems that would have to be addressed, but they are addressable! Sometimes, things just aren't that complicated.


Having done a little work across the spectrum, this is definitely correct. Years ago, I targeted the cheaper end, and the amount of work expected was absurd. The nonmonetary terms of the deals were often even worse; think business-destroying levels of exploitation. (I declined those!)

In contrast, I've quoted hundreds of dollars an hour while explaining that I didn't think I would offer enough value to actually justify that price in context and encouraging alternatives- and got the job immediately. And the project went very smoothly. In another recent case, I requested tens of thousands of dollars for some project-relevant expenditures, and received a deposit without a single question.

Organizations that are willing to spend money tend to be the ones that understand what they're buying and how to value it.


I gave up on the cheaper end long ago - too many trials by fire. Raising my rates was the best thing I did for my own sanity. I still run into the odd client here and there that makes my life Hell on Earth for a while. I also stopped giving discounts. And on the advice of a good friend who runs a sizable creative agency in Los Angeles, I came up with a "fuck off" rate for when I really don't want the work. Right now, I don't charge a sky-high rate, but I've started applying "9am to 6pm, no on-call, no toxicity, no arseholes, no high pressure to deliver at all costs." I am so over client-induced PTSD & anxiety disorders.


"Do less with less" is a good way to put it. It can be helpful to look at the extremes- for example, people who are starving can end up with suppressed thyroid function (among other things). This appears to be adaptive; a hypothyroid state will tend to avoid building or sometimes even maintaining metabolically expensive muscle, overwhelming exhaustion will tend to suppress nonvital calorie expenditure, and even fidgeting behaviors can be suppressed. In other words, by reducing burn rate, you starve slower.

This is not something you want happening in a well-nourished individual. Beyond making you more likely to die from predation or accidents due to severe muscle wasting, it also just feels horrible. There's a reason why people with untreated hypothyroidism (unrelated to starvation) struggle with exercise and weight loss.

I've also personally observed some people on... inadvisable extreme crash diets getting some weird bloodwork numbers. Like TSH spiking by a factor of 10- which, unlike the above starvation case which typically suppresses TSH, may imply malnutrition and inability to produce sufficient thyroid hormone. Their empirically derived caloric burn rate dropped by more than 30% over the duration of the diet, and a substantial amount of that was from dramatic muscle wasting. Not exactly ideal!


I can't comment on the ASP/web side of things, but C# matches up with my atypical use cases extremely well. The high points:

1. Fast compile times for most of development. I'm fine with waiting a while to do a special highly optimized deployment, but getting 95-100% of optimal performance with 0-3 second build times is really nice.

2. Controllable memory access through value types. Nothing getting in the way of C-like contiguous buffers and managing cache line or load alignment.

3. GC that gets out of the way. C# has a GC, but in most of my applications the GC never has to run because I rarely allocate GC-managed instances. It's definitely a nonidiomatic use of C#, but the fact that it's still pretty easy to do is nice. And when I don't have to care about the GC's overhead, the presence of the GC just makes everything easier developmentally. (There's a small sketch of this style at the end of this comment.)

4. Ability to actually make use of the hardware. The compiler's improved massively in the last several years, and the way vectorization is exposed actually feels a lot nicer than my experiences in C++-land.

And less relevant to my own needs, but still interesting, nowadays you can directly link a C# library into a C/C++/Rust application just like any C library. Among other things.

Almost a decade ago, I was considering exiting the ecosystem in favor of C++/D/not-yet-even-1.0 Rust/etc, but the open source push and a sudden focus on performance basically made the jump unnecessary. C# occupies a really nice sweet spot.
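A small sketch of points 2-4 in practice (hypothetical code, not from any particular library): an array of structs is a single contiguous allocation that the GC barely notices, and Vector<T> provides portable SIMD over a contiguous buffer.

    using System;
    using System.Numerics;

    var spheres = new Sphere[1024]; // one contiguous allocation; the GC sees a single object
    Sketch.Grow(spheres, 0.5f);

    var radii = new float[1024];
    Sketch.Scale(radii, 2f);
    Console.WriteLine(spheres[0].Radius);

    struct Sphere { public Vector3 Center; public float Radius; }

    static class Sketch
    {
        // Mutating value types in place; no per-element heap objects, no GC pressure.
        public static void Grow(Sphere[] spheres, float amount)
        {
            for (int i = 0; i < spheres.Length; ++i)
                spheres[i].Radius += amount;
        }

        // Portable SIMD over a contiguous float buffer, with a scalar tail for the remainder.
        public static void Scale(Span<float> values, float factor)
        {
            int i = 0;
            for (; i <= values.Length - Vector<float>.Count; i += Vector<float>.Count)
                (new Vector<float>(values.Slice(i)) * factor).CopyTo(values.Slice(i));
            for (; i < values.Length; ++i)
                values[i] *= factor;
        }
    }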


Yes! I suspect (but I don't know for sure) that C# has the highest performance ceiling of all the VM-based languages with a tracing garbage collector. Structs, Span<T>, and ReadOnlySpan<T> for type-safe access to contiguous regions of memory allow us to do a lot without touching the heap. C# also has hardware intrinsics for platform-specific SIMD instructions if you're interested in that kind of thing.
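For a feel of what that looks like (an illustrative snippet, not from a real codebase): a stackalloc'd Span<T> never touches the heap, and the platform intrinsics give direct access to specific instruction sets when the hardware supports them.

    using System;
    using System.Runtime.Intrinsics;
    using System.Runtime.Intrinsics.X86;

    Span<int> scratch = stackalloc int[8]; // lives on the stack; the GC never sees it
    for (int i = 0; i < scratch.Length; ++i)
        scratch[i] = i;

    if (Avx2.IsSupported)
    {
        // Eight 32-bit adds in a single AVX2 instruction.
        var v = Vector256.Create(scratch[0], scratch[1], scratch[2], scratch[3],
                                 scratch[4], scratch[5], scratch[6], scratch[7]);
        var doubled = Avx2.Add(v, v);
        Console.WriteLine(doubled.GetElement(7)); // 14
    }
    else
    {
        Console.WriteLine("No AVX2 here; fall back to Vector<T> or scalar code.");
    }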


Perhaps it can be manually optimized the best indeed, but the JVM still has a better GC if I’m not mistaken.

Value types are indeed very cool, and I really hope they get implemented sooner rather than later on the Java side as well. Just as a note, Java recently got a SIMD API as well.


You're right; Java has ZGC, a best-in-class garbage collector with average pause times of 50 microseconds while still having great throughput. https://malloc.se/blog/zgc-jdk16#:~:text=With%20concurrent%2...


Can you recommend any resources on how to get into writing high-performance C#?


Pretty much all the standard performance advice from other languages like C/C++/Rust applies. The concepts driving "data oriented design" will get you a long way. The biggest difference is learning how to play nice with the GC- avoiding allocations where necessary, keeping the heap simple so that collections are cheap, or just setting things up so the GC never has to run.
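As one concrete flavor of that (illustrative, not taken from the post linked below): renting and reusing buffers on hot paths keeps steady-state execution from generating garbage at all.

    using System;
    using System.Buffers;

    for (int frame = 0; frame < 3; ++frame)
    {
        // Likely hands back the same array every iteration instead of allocating a new one.
        int[] buffer = ArrayPool<int>.Shared.Rent(1024);
        try
        {
            for (int i = 0; i < 1024; ++i)
                buffer[i] = i;
            Console.WriteLine($"frame {frame}: using a buffer of length {buffer.Length}");
        }
        finally
        {
            ArrayPool<int>.Shared.Return(buffer);
        }
    }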

I wrote a bit of stuff a few years ago: https://www.bepuentertainment.com/blog/2018/8/30/modern-spee...

Some of the implementation details (like the operator codegen) are surprisingly outdated now given the speed at which the runtime has moved (and the library is now ~4x faster or something silly like that), but the fundamentals are still there.

Since I wrote that, codegen has improved a lot, the hardware intrinsic vector APIs have offered a way to optimize critical codepaths further, additional cross platform vectorization helpers have made it even easier, NativeAOT is getting close, and all sorts of other stuff. The C# discord #allow-unsafe-blocks channel contains a variety of denizens who might be able to offer more resources.


Thanks! That link especially contains some really helpful write-ups. I never really needed to think much about performance, as my C# usage was always constrained to simple CRUD stuff in server backends, which is a shame really, as it is such a nice language.


Another related option that sidesteps a big chunk of perceptible latency is to send clients a trivially reprojectable scene. In other words, geometry or geometry proxies that even a mobile device could draw locally with up to date camera state at extremely low cost. The client would have very little responsibility and effectively no options for cheating.

View-independent shading response and light simulations can be shared between clients. Even much of the view-dependent response could be approximated in shared representations like spherical Gaussians. The scene can also be aggressively prefiltered; prefiltering would also be shared across all clients.

This would be a massive change in rendering architecture: there's no practical way to retrofit it onto existing games, it would still be incredibly expensive for servers compared to 'just let the client do it', and it can't address game logic latency without giving the client more awareness of game logic, but... seems potentially neat!
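A very rough sketch of the client half of that idea (all names and numbers hypothetical): the server streams world-space geometry proxies, and the client rebuilds its view-projection from the latest local camera every frame, so camera motion never waits on the network.

    using System;
    using System.Numerics;

    // Pretend this vertex arrived from the server as part of a shared geometry proxy.
    Vector3 worldSpaceVertex = new(1, 1, 0);

    // Local, up-to-date camera state; refreshed every frame regardless of network latency.
    Vector3 cameraPosition = new(0, 1.7f, 5);
    Vector3 cameraTarget = new(0, 1, 0);

    Matrix4x4 view = Matrix4x4.CreateLookAt(cameraPosition, cameraTarget, Vector3.UnitY);
    Matrix4x4 projection = Matrix4x4.CreatePerspectiveFieldOfView(MathF.PI / 3, 16f / 9f, 0.1f, 1000f);

    // Reprojection is just 'transform the shared geometry by the latest local camera'.
    Vector4 clip = Vector4.Transform(worldSpaceVertex, view * projection);
    Console.WriteLine(clip / clip.W);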


Not OP, but consider being 6 feet and 8 inches tall. In many buildings (especially older ones), you will have to dodge light fixtures regularly. Many doorways will actually be too short and you'll need to duck. Cooking in most kitchens will end up being quite uncomfortable since you'll be hunched over the entire time. You won't fit in a large number of cars. Flying in planes becomes physically painful unless you get an exit row or pay more. And so on.

You can still interact with the world, but being an outlier adds a whole lot of little bits of friction and discomfort that a more statistically average person doesn't experience.


I am tall. I have problems finding clothes. I cannot fit in the best light plane (in my opinion), and I am a pilot. My grandmother's house had a very low ceiling; it was about at my shoulder (she was 5 ft tall). Never complained. Never complained that the world is not built around me. I don't understand how that works. I am not entitled to anything.


You may not complain, but you do observe that your experiences are different in a way that can sometimes be unpleasant.

It appears that when another person recounted how their experiences are different in a way that can sometimes be unpleasant, you assumed that they were placing a burden on others to change for their benefit (or that they were entitled to it). I don't think that is a correct interpretation.


Nice job moving the goalposts. The original question was whether the world can be said to be "designed". What you, personally choose to complain about is irrelevant.


To any fellow seekers of the Tootbird, I can answer questions if you have them.


(alas the path of the tootbird is a lonely one)


The usual approach is some form of sweep to get a time of impact. Once you've got a time of impact, you can either generate contacts, or avoid integrating the involved bodies beyond the time of impact, or do something fancier like adaptively stepping the simulation to ensure no lost time.

If the details don't matter much, it's common to use a simple ray cast from the center at t0 to the center at t1. Works reasonably well for fast moving objects that are at least kinda-sorta rotationally invariant. For two dynamic bodies flying at each other, you can test this "movement ray" of body A against the geometry of body B, and the movement ray of body B against the geometry of body A.

One step up would be to use sphere sweeps. Sphere sweeps tend to be pretty fast; they're often only slightly more complicated than a ray test. Pick a sphere radius such that it mostly fills up the shape and then do the same thing as in the previous ray case.

If you need more detail, you can use a linear sweep. A linear sweep ignores angular velocity but uses the full shape for testing. Notably, you can use a variant of GJK (or MPR, for that matter) for this: http://dtecta.com/papers/jgt04raycast.pdf

If you want to include angular motion, things get trickier. One pretty brute forceish approach is to use conservative advancement based on distance queries. Based on the velocity and shape properties, you can estimate the maximum approaching velocity between two bodies, query the distance between the bodies (using algorithms like GJK or whatever else), and then step forward in time by distance / maximumApproachingVelocity. With appropriately conservative velocity estimates, this guarantees the body will never miss a collision, but it can also cause very high iteration counts in corner cases.
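A minimal sketch of that loop for the easiest possible case, two spheres, where the 'distance query' is trivial (real implementations use GJK-style distance queries and include angular velocity in the approach bound; everything here is illustrative):

    using System;
    using System.Numerics;

    // Returns an estimated time of impact within [0, dt], or null if no impact occurs.
    static float? ConservativeAdvancement(
        Vector3 positionA, Vector3 velocityA, float radiusA,
        Vector3 positionB, Vector3 velocityB, float radiusB,
        float dt)
    {
        // Upper bound on how quickly the two surfaces can approach each other.
        float maxApproachSpeed = (velocityA - velocityB).Length();
        if (maxApproachSpeed <= 0)
            return null;

        float t = 0;
        for (int i = 0; i < 64; ++i) // iteration cap; corner cases can get slow
        {
            var a = positionA + velocityA * t;
            var b = positionB + velocityB * t;
            float distance = Vector3.Distance(a, b) - radiusA - radiusB;
            if (distance < 1e-4f)
                return t;                     // close enough; treat as the time of impact
            t += distance / maxApproachSpeed; // impact can't happen before this time
            if (t > dt)
                return null;                  // no impact within this timestep
        }
        return t;
    }

    // Two unit-diameter spheres flying directly at each other; impact at t = 0.475.
    Console.WriteLine(ConservativeAdvancement(
        new Vector3(-10, 0, 0), new Vector3(20, 0, 0), 0.5f,
        new Vector3(10, 0, 0), new Vector3(-20, 0, 0), 0.5f, 1f));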

You can move a lot faster if you allow the search to look forward a bit beyond potential impact times, turning it into more of a root finding operation. Something like this: https://box2d.org/files/ErinCatto_ContinuousCollision_GDC201...

I use a combination of speculative contacts and then linear+angular sweeps where needed to avoid ghost collisions. Speculative contacts can handle many forms of high velocity use cases without sweeps- contact generation just has to be able to output reasonable negative depth (separated) contacts. The solver handles the rest. The sweeps use a sorta-kinda rootfinder like the Erin Catto presentation above, backed up by vectorized sampling of distance. A bit more here, though it's mainly written for users of the library: https://github.com/bepu/bepuphysics2/blob/master/Documentati...

