
I've not explored every program domain, but in general I see two kinds of program memory access patterns.

The first is a fairly generic input -> transform -> output. Your typical request handler, for instance: you receive a payload, run some transform on it (and maybe a DB request), and then produce a response.

In this model, Arc is very fitting for some shared (im)mutable state. Like DB connections, configuration and so on.
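
Something like this, roughly (a minimal sketch; the Config type and handler are made up):

    use std::sync::Arc;
    use std::thread;

    struct Config {
        greeting: String,
    }

    // Shared immutable state: each handler gets a cheap pointer
    // clone of the Arc, not a copy of the underlying data.
    fn handle_request(cfg: Arc<Config>, name: &str) -> String {
        format!("{}, {}!", cfg.greeting, name)
    }

    fn main() {
        let cfg = Arc::new(Config { greeting: "Hello".into() });
        let cfg2 = Arc::clone(&cfg);
        let t = thread::spawn(move || handle_request(cfg2, "world"));
        println!("{}", t.join().unwrap());
    }

Ownership stays clear because nothing mutates through the Arc; it's just shared read-only context.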

The second pattern is something like: state + input -> transform -> new state. Eg. you're mutating your app state based on some input. This fits games, but also retained UIs, programming language interpreters and so on.

Using Arcs here muddles the ownership. The gamedev ecosystem has found a way to manage this by employing ECS, and while that can be overkill, the underlying data-oriented design (DOD) principles can still be very helpful.

Treat your data as what it is: data. Use indices/keys instead of pointers to represent relations. Keep it simple.

Arenas can definitely be a part of that solution.
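
To sketch what I mean by indices/keys (a toy example, nowhere near a full ECS):

    // Entities are just indices into flat storage; a relation
    // is a key into that same storage instead of an Arc/Rc.
    struct World {
        positions: Vec<(f32, f32)>,
        targets: Vec<Option<usize>>,
    }

    impl World {
        fn spawn(&mut self, pos: (f32, f32)) -> usize {
            self.positions.push(pos);
            self.targets.push(None);
            self.positions.len() - 1
        }
    }

    fn main() {
        let mut world = World { positions: Vec::new(), targets: Vec::new() };
        let a = world.spawn((0.0, 0.0));
        let b = world.spawn((1.0, 1.0));
        world.targets[a] = Some(b); // "a targets b" without shared ownership
        println!("{:?}", world.positions[world.targets[a].unwrap()]);
    }

The Vecs here are already a trivial arena: everything lives in one place, and the state + input -> new state transform is just a &mut World.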


It takes some hubris to post a "Rust is consistently 10% slower than C" take and back it up with a page which shows a result of 6:4 in Rust's favor.

I said "every time I try it" and I said "idiomatic Rust." The fast Rust implementations there are consistently starred, indicating hand-written SIMD, unsafe, or other tricks. If you read them, they are not very idiomatic.

Also note that 3 of the 4 comparisons in favor of Rust are marked "contentious" for using different algorithms with different limitations - they are not equivalents. The last one is k-nucleotide, which is a very algorithmically heavy problem where nobody has contributed optimized C, and Rust won at producing a decently optimized version quickly. Note that the fastest C++ implementation handily beats Rust on that one, too.


It just fundamentally does not make sense to compare languages by comparing codegen backends. GCC and LLVM do not produce the same machine code for equivalent source, especially when optimizations are applied. It's an apples-to-oranges comparison.

If you use Clang instead of GCC, the comparison becomes slightly better, at least for microbenchmarks that don't rely too much on libraries.

These benchmarks are still useful from a practical viewpoint - answering the question "what's the expected performance bracket of using language X in real projects today". But they don't say anything fundamental about the language design or even the quality of the implementation.


I know what you said. I just don't think it was interesting. If by idiomatic Rust you mean generic "unoptimized" Rust being slower than optimized C, then yeah. No shit.

On a similar note, I don't think it's worth talking about C as if the only C being written is highly optimized, hand-rolled, assembly-style C. That's one in a thousand projects.

Now, as for the benchmarks game, you mean 3 of 6 comparisons in favor of Rust. Rust is winning the benchmarks against C there.

I had a look, and the top Rust and C entries are using the same pcre2 library in regex-redux. Same for pidigits, where both entries are using GMP.

The only library difference I can see is that the C entries are using OpenMP while both Rust entries are using Rayon. Now, you could claim that using Rayon gives Rust an unfair advantage. But an entirely userland library beating an industry standard with built-in compiler support is not a good look for C.


The easiest* solution would be to do what Rust does: require & on both sides and error out on a mismatch. Eg.

fn foo(bar: &Bar) { ... }

foo(&baz)

* This would be a breaking change, so a non-starter.


Tbh, at this point I would pay for paint.NET on Linux.

Pinta is interesting, but the UI is terrible. Did we really have to remove the resize handles? They're there when adding shapes, but not when manipulating pixels/selection? Half the options I need being hidden in a hamburger menu isn't great either.

Gimp is gimp. I don't need Photoshop. And I don't want a Photoshop-level learning curve.

Krita is interesting, but it seems to be aimed at drawing. I struggled to copy the color code from an image. By default my eyes are drawn to the massive advanced color selector on the right, but it's a trap. You actually need the tiny color selector in the top bar. It shouldn't be this hard.

I need a subset of image manipulation features in my work and each tool has a different one.


I was under the impression that Inkscape explicitly doesn't follow the GNOME guidelines.

That's why every few months, there's a proposal to redesign it which trades usability for minimalism. Here's one I pulled from a random Google search:

https://gitlab.com/inkscape/ux/-/issues/236


https://wiki.inkscape.org/wiki/Inkscape_invariants

They claim it's one of the cornerstones of their project. Who am I to argue.

Personally, I like how functional Inkscape's UI is AND how minimal Files is, for example.


As far as I'm concerned, there are two main issues with profiles:

1. They're either unimplementable or useless (too many false positives and false negatives).

I think this is pretty evident from the fact that profiles have been proposed for a while now and that no real implementation exists. Worse, out of all the open source projects and for-profit companies, no one has been able to implement any sort of static analysis that even begins to approach the guarantees Rust makes. (There's a concrete sketch of those guarantees at the end of this comment.)

2. The language doesn't give you any tools to actually write safe code.

Ok, let's say that someone actually implements safety profiles. And it highlights your usage of a standard library type. What do you do?

Safe C++ didn't require a new standard library just because. The current stdlib is riddled with safety issues that can't really be fixed and would not be fixed because of backwards compatibility.

You're stuck. And so you turn the safety profile off.
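
To make point 1 concrete, this is the kind of thing the borrow checker rules out at compile time (my sketch; the C++ analogue - references invalidated by push_back - is exactly the class of stdlib issue a profile would have to catch):

    fn main() {
        let mut names = vec![String::from("a"), String::from("b")];
        let first = &names[0];
        // Uncommenting the next line is a compile error: `names`
        // can't be mutated (and possibly reallocated) while
        // `first` still borrows into it.
        // names.push(String::from("c"));
        println!("{}", first);
    }

Catching this in C++ means either rewriting the stdlib or accepting false positives, which is points 1 and 2 in a nutshell.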


Ok, I think I can reproduce it by starting the selection from the end of the previous line.

I've reported the issue here: https://github.com/hyperlink-academy/leaflet/issues/196


The blog itself is built on leaflet.pub.

I'll take a look and file an issue.


I'm sorry, but please do read the post.

The middleware section is a setup. The real trouble starts when even ejecting from Next and using a custom server doesn't let you do anything, because Next is a black box.

I would have been happy installing fastify and just using its middleware, but even that doesn't work.


Yeah, I was actually recommended the instrumentation route by a commenter on Reddit.

I spent a similar amount of time setting up OpenTelemetry with Next, and while it would have been titled differently, I would likely still have written a blog post after that experience too.

This isn't your fault, but basically every opentelemetry package I had to set up is marked as experimental. This does not build confidence when pushing stuff to production.

Then, for the longest time I couldn't get the pino instrumentation working. I managed to figure it out eventually, but it was a pain.

First, pino has to be added to serverExternalPackages. If it's not, the OTel instrumentation does not work.

Second, the automatic instrumentation is extremely allergic to import order. And also for whatever reason, only the pino default export is instrumented. Again, this took a while to figure out.

Module-local variables don't work how I would expect. I had to use globalThis instead.

And after all that I was still hit by this: https://github.com/vercel/next.js/issues/80445

It does work, but it was not great to set up. Granted, I went with the manual router (eg. not using vercel/otel).

