
Fun fact: Box<T> is allowed in C FFI in arguments and return types. From C's perspective it's a non-NULL pointer to T, but on the Rust side it gets memory management like a native Rust type.

Option<Box<T>> is allowed too, and it's a nullable pointer. Similarly &mut T is supported.

Using C-compatible Rust types in FFI function declarations can remove a lot of boilerplate. Unfortunately, Vec and slices aren't among them.
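
For illustration, exporting Rust functions to C with these types looks roughly like this (a minimal sketch; the Counter type and function names are made up):

```rust
/// Opaque to C; the C side only ever sees a pointer to it.
pub struct Counter {
    value: u64,
}

// C sees: struct Counter *counter_new(void); the return is guaranteed non-NULL.
#[no_mangle]
pub extern "C" fn counter_new() -> Box<Counter> {
    Box::new(Counter { value: 0 })
}

// C sees: void counter_increment(struct Counter *c); &mut T is also a non-NULL pointer.
#[no_mangle]
pub extern "C" fn counter_increment(c: &mut Counter) {
    c.value += 1;
}

// C sees: void counter_free(struct Counter *c); NULL is allowed thanks to Option.
// Taking Option<Box<Counter>> by value means Rust drops (frees) it here.
#[no_mangle]
pub extern "C" fn counter_free(_c: Option<Box<Counter>>) {}
```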


Hyundai shows which one.


Mine doesn't (i30 2020 EU model).


VW ID. cars have the worst fake buttons on the steering wheel. Multiple buttons were merged into a single mushy, creaky, touch-sensitive plastic face that is inconvenient and unreliable when you press it intentionally, but easy to activate accidentally by brushing your hand over it.


Yet another "innovation" stemming from electric "cars". Truly an abhorrent abomination on top of true cars...


They put the exact same faux buttons in their ICE cars.

This is not an EV thing. It's a contemporary trend, and it just happens that most newly designed cars are EVs now.

The rise of touchscreen technology was just coincidental with the rise of EVs. The first Tesla Roadster, Nissan Leaf, and Renault Zoe had crappy little screens, and real buttons for everything, like most cars of their era.

OTOH today EV-hating Toyota keeps making screens bigger. The latest Lamborghini has multiple touchscreens too.

This change would have happened even if EVs didn't exist. The iPad is more to blame for that trend than the electric drivetrain.


Until 2012, Rust 0.6 required a postfix `.` on enum variants:

https://github.com/rust-lang/rust/commit/04a2887f879

And here's a 2013 bug for the ambiguity:

https://github.com/rust-lang/rust/issues/10402


I was just about to comment that it's unfortunate that languages seem to have settled on the (false) dichotomy that enum variants should be either `MyEnumName.variantname` or just plain `variantname`. Often, you want enum variants to read almost like built-in literals, as that leads to more readable, natural-sounding code than having to write `MyEnumName.variantname` every time. However, making them actual bareword literals leads both to ambiguity while reading and to long-term maintainability issues like the one in the post. Postfix `.` may or may not be a great syntax specifically, but something like it still seems like a great compromise between these two extremes.
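
To illustrate the kind of ambiguity I mean, here's a Rust sketch of the bareword-variant footgun (my own example, not the one from the post):

```rust
enum Direction {
    North,
    South,
}

use Direction::*;

fn describe(d: Direction) -> &'static str {
    match d {
        North => "north",
        // If `South` were ever removed or renamed, this would still compile:
        // the pattern silently turns into a catch-all binding named `South`
        // that matches everything, instead of a missing-variant error.
        South => "south",
    }
}

fn main() {
    println!("{}", describe(North));
}
```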

Do you know why it was decided to axe the syntax? Was this part of a larger PR where the decision is elaborated on?


The problem has been known from the beginning, because async I/O on Windows has the same issues as io_uring.

Rust went with a poll-based API and a synchronous cancellation design anyway, because that fits ownership and borrowing.

Making async cancellation work safely, even in the presence of memory leaks (destructors don't always run) and panics, remains an unsolved design problem.
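
For context, a minimal sketch of what the synchronous cancellation model looks like in practice (tokio is used only to have a runtime; the function and timings are made up):

```rust
use std::time::Duration;

async fn copy_loop() {
    loop {
        // If the future is dropped while suspended at this await point,
        // the code after it simply never runs; only destructors of values
        // alive across the .await get a chance to clean up.
        tokio::time::sleep(Duration::from_millis(100)).await;
        println!("still copying...");
    }
}

#[tokio::main]
async fn main() {
    // timeout() drops copy_loop() after 350 ms; that drop is the cancellation.
    let _ = tokio::time::timeout(Duration::from_millis(350), copy_loop()).await;
    println!("cancelled by dropping the future");
}
```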


You're namedropping the hot new card, but every card before it was also the hot new card when it was released. It reminds me of the NEVER OBSOLETE sticker eMachines put on their unbelievably fast 500 MHz Pentium PCs.

These cards will inevitably become worse than worthless when the increased running costs of older-generation hardware exceed the cost of buying next-generation hardware. At some point it won't make sense to use more electricity, more cooling, more rack space to run a hospice for old cards, when the same workload can be done easier, quicker, and cheaper on newer hardware, even after adding the cost of buying the new hardware.

The Xeons that used to cost $4000 can now be found on eBay for 1% of their original sticker price, because they're so unprofitable to run.


The GB200 is specifically called out by the linked article - I didn't pick it at random.

"The researchers point out that the weight of Nvidia's latest Blackwell platform in a rack system — designed for intensive LLM inference, training and data processing tasks — tips the scales at 1.36 tons, demonstrating how material-intensive GenAI can be"

While I certainly agree that old Xeons are selling for 1% of their original MSRP, that doesn't really disagree with what I'm saying - if someone's buying one for $40, it's not e-waste (yet). I do agree that things can eventually become e-waste if the initial savings are significantly offset by running costs. However, it's not clear how much longer we'll continue to see such large generational improvements in power efficiency, or whether these GB200s will be entirely obsolete once such improvements eventually stop happening.


Those would be CPUs that are roughly 10 years old. It's not just that they're unprofitable to run; they're also at end of life. I know CPUs can and do run longer, but realistically they're near the end.


Optimizations in compilers like LLVM are done by many individual code transformation passes, one applied to the result of the previous.

This layering makes the order of the passes important and very sensitive. The passes usually don't have a grand plan; they just keep shuffling code around in different ways. A pass may only be applicable to code in a specific form created by a previous simplification pass. One pass may undo the optimizations of a previous pass, or optimize out a detail required by a later pass.

Separation into passes makes it easier to reason about correctness of each transformation in isolation, but the combined result is kinda slow and complicated.
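
As a toy illustration of that ordering sensitivity (this is not LLVM's API, just two hand-rolled passes over a made-up IR):

```rust
use std::collections::HashMap;

// A made-up three-instruction IR, just enough to show two passes interacting.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Inst {
    Const(&'static str, i64),                      // x = 5
    Add(&'static str, &'static str, &'static str), // x = a + b
    Ret(&'static str),
}

// Pass 1: fold an Add of two known constants into a Const.
fn constant_fold(code: &mut Vec<Inst>) {
    let consts: HashMap<&str, i64> = code.iter()
        .filter_map(|i| match i {
            Inst::Const(n, v) => Some((*n, *v)),
            _ => None,
        })
        .collect();
    for idx in 0..code.len() {
        if let Inst::Add(dst, a, b) = code[idx] {
            if let (Some(&x), Some(&y)) = (consts.get(a), consts.get(b)) {
                code[idx] = Inst::Const(dst, x + y);
            }
        }
    }
}

// Pass 2: drop definitions whose results are never read.
fn dead_code_eliminate(code: &mut Vec<Inst>) {
    let used: Vec<&str> = code.iter()
        .flat_map(|i| match i {
            Inst::Add(_, a, b) => vec![*a, *b],
            Inst::Ret(a) => vec![*a],
            _ => vec![],
        })
        .collect();
    code.retain(|i| match i {
        Inst::Const(n, _) | Inst::Add(n, _, _) => used.contains(n),
        Inst::Ret(_) => true,
    });
}

fn main() {
    let mut code = vec![
        Inst::Const("a", 2),
        Inst::Const("b", 3),
        Inst::Add("c", "a", "b"),
        Inst::Ret("c"),
    ];
    // Fold first, then DCE: "a" and "b" become dead and get removed.
    // Run the passes in the opposite order and DCE removes nothing,
    // because "a" and "b" are still read by the Add.
    constant_fold(&mut code);
    dead_code_eliminate(&mut code);
    println!("{:?}", code); // [Const("c", 5), Ret("c")]
}
```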


Essentially yes.

There's ambient occlusion, which computes light intensity with high spatial resolution but completely handwaves the direction the light is coming from. OTOH there are environment maps, which are rendered from a single location, so they have no spatial resolution, but have precise light intensity for every angle. Radiance Cascades observes that these two techniques are two extremes of a spatial vs angular resolution trade-off, and that it's possible to render any spatial vs angular trade-off in between.

Getting information about light from all angles at all points would cost (all sample points × all angles), but Radiance Cascades computes and combines (very few sample points × all angles) + (some sample points × some angles) + (all sample points × very few angles), which works out to be much cheaper, and is still sufficient to render shadows accurately if the light sources are not too small.
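
As a rough back-of-the-envelope sketch (the grid size and the 4x-per-level factors are my assumptions, just to show the shape of the savings):

```rust
fn main() {
    let base_probes: u64 = 256 * 256; // densest probe grid (cascade 0)
    let base_rays: u64 = 4;           // coarsest angular resolution (cascade 0)
    let levels: u32 = 6;

    // Naive: every probe of the densest grid traces the finest angular set.
    let naive = base_probes * base_rays * 4u64.pow(levels - 1);

    // Cascades: each level trades 4x probe density for 4x angular resolution,
    // so the per-level cost stays roughly constant.
    let cascades: u64 = (0..levels)
        .map(|i| (base_probes / 4u64.pow(i)) * (base_rays * 4u64.pow(i)))
        .sum();

    println!("naive: {naive} rays, cascades: {cascades} rays");
    // naive: 268435456 rays, cascades: 1572864 rays (6 levels x 262144)
}
```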


So I've been reading

https://graphicscodex.com/app/app.html

and

https://mini.gmshaders.com/p/radiance-cascades

so I could have a basic grasp of classical rendering theory.

I made some assumptions:

1. There's an isometric top-down virtual camera just above the player

2. The Radiance Cascades stack on top of each other, increasing probe density as they get closer to the objects and players

I suspect part of the increased algorithm efficiency results from:

1. The downsampling of radiance measurements at some of the levels

2. At higher probe-density levels, ray tracing to collect radiance measurements involves less computation than classic long-path ray tracing

But I'm still confused about what exactly in the "virtual 3D world" is being downsampled and what the penumbra theory has to do with all this.

I've gained a huge respect for game developers though - this is not easy stuff to grasp.


Path tracing techniques usually focus on finding the most useful rays to trace, so that only rays that hit a light get traced (importance sampling).

RC is different, at least in 2D and screen-space 3D. It brute-force traces fixed sets of rays in regular grids, regardless of what is in the scene. There is no attempt to be clever about picking the best locations and best rays. It just traces the exact same set of rays every frame.

Full 3D RC is still too expensive beyond voxels with Minecraft's chunkiness. There's SPWI RC, which is more like other real-time raytracing techniques: it traces rays in the 3D world, but not exhaustively, only from positions visible on screen (known as Froxels and Surfels elsewhere).

The penumbra hypothesis is the observation that hard shadows require high resolution to avoid looking pixelated, but soft shadows can be approximated with bilinear interpolation of low-res data.

RC adjusts its sampling resolution to the coarsest it can get away with, so that the edges of soft shadows going from dark to light are all produced by interpolation of just two samples.
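
A tiny sketch of that last point (the numbers are made up; only the two-sample interpolation mirrors what's described above):

```rust
fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

fn main() {
    // Two neighbouring low-res probes: one fully lit, one fully in shadow.
    let (lit, shadowed) = (1.0_f32, 0.0_f32);

    // Across a wide penumbra the visibility changes smoothly, so interpolating
    // the two coarse samples is a good enough approximation of the gradient.
    for i in 0..=4 {
        let t = i as f32 / 4.0;
        println!("t = {t:.2}, visibility = {:.2}", lerp(lit, shadowed, t));
    }
    // A hard shadow edge (tiny light source) would need far more samples,
    // because interpolation would just blur it.
}
```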


Thanks for taking the time to provide more details on how resonance cascades work.


This is a rendering technique designed for real-time graphics, and it's not applicable to that kind of image analysis. It does what has already been possible with ray tracing, but using an approximation that makes it suitable for real-time graphics.

However, the technique has been used to speed up astrophysics calculations:

https://arxiv.org/pdf/2408.14425


Rust git2 bindings are pretty good and actively maintained.

However, Rust also has `gix` which is a ground-up reimplementation, and once that matures, it will probably sideline libgit2.
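
For example, opening a repository and printing HEAD with git2 looks roughly like this (a small sketch from memory, so treat the details as approximate):

```rust
use git2::Repository;

fn main() -> Result<(), git2::Error> {
    // Open the repository at the given path (use Repository::discover()
    // if you want it to search upward like the git CLI does).
    let repo = Repository::open(".")?;

    // Print a short name for HEAD (e.g. the current branch name).
    let head = repo.head()?;
    println!("HEAD: {}", head.shorthand().unwrap_or("HEAD"));

    Ok(())
}
```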



