Hacker News | swfsql's comments

I think what I miss is a suckless, but async and for everything (lots and lots of apps). To the point where each of those different apps is single-threaded, and in such a way that they all collaboratively share the same single thread. So I'm looking for a single app that has many library-apps inside of it, all brutalistic, async and single-threaded.
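A minimal sketch of that shape using Python's asyncio (the "app" names are made up for illustration): several independent library-apps run as coroutines that cooperatively yield, so everything shares one thread.

```python
import asyncio

# Hypothetical "library-apps": each is a plain coroutine that
# cooperatively yields control, so all of them share one thread.
async def clock_app(log: list) -> None:
    for i in range(3):
        log.append(f"clock {i}")
        await asyncio.sleep(0)  # yield to the other apps

async def log_app(log: list) -> None:
    for i in range(3):
        log.append(f"log {i}")
        await asyncio.sleep(0)

async def main() -> list:
    log = []
    # One event loop, one thread, many "apps" interleaving.
    await asyncio.gather(clock_app(log), log_app(log))
    return log

order = asyncio.run(main())
print(order)
```

The `await asyncio.sleep(0)` is the explicit cooperation point: each app voluntarily hands the thread back to the loop, which is what lets them interleave without any threads at all.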


Single threading sucks. It's 2025 and even low-end computers have dozens of hardware threads. I don't want to compute like it's 1995 anymore.


I agree, but there is a sort of beauty in programs that were written for absurdly slow hardware.

There was a thing on HN like seven years ago [1] that talked about how command line tools can be many times faster than Hadoop; the streams and pipelines are just so ridiculously optimized.

Obviously you're not going to replace all your Hadoop clusters with just Bash and netcat, and I'm sure there are many cases where Hadoop absolutely outperforms something cobbled together from a Bash script, but I still think it serves a purpose: because these tools were written for such tiny amounts of RAM and such crappy CPUs, they perform cartoonishly fast on modern computers.
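A big part of why those pipelines win is streaming: every stage handles one line at a time, so memory stays flat regardless of input size. A toy Python analogue of a `cat | grep | wc -l` pipeline using generators (the stage names just mirror the Unix tools):

```python
def cat(lines):
    # source stage: emit lines one at a time
    yield from lines

def grep(pattern, stream):
    # filter stage: pass through matching lines only
    return (line for line in stream if pattern in line)

def wc_l(stream):
    # sink stage: count lines without ever buffering them
    return sum(1 for _ in stream)

# Nothing here materializes the full dataset in memory;
# a million lines flow through one at a time.
data = (f"Result {'1-0' if i % 3 == 0 else '0-1'}" for i in range(1_000_000))
print(wc_l(grep("1-0", cat(data))))
```

Real shell pipelines add kernel-buffered pipes and per-tool optimizations on top of this, but the constant-memory streaming shape is the same.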

I don't like coding like it's 1995 either, and I really don't write code like that anymore; most of the stuff I write nowadays can happily assume several gigs of memory and many CPUs, but I still respect people that can squeeze every bit of juice out of a single thread and no memory.

[1] https://adamdrake.com/command-line-tools-can-be-235x-faster-...


Single threading always makes it run slower though.

Also lots of 1995 assumptions lead to outrageously slow software if used today. Python in 1995 was only marginally slower than C++. It's orders of magnitude slower today.


Yeah that’s not true. In fact I would say it’s almost the opposite; most things (other than IO) will go slower if you just throw extra threads at it.

There’s overhead in thread creation, locks can introduce a lot of contention, waiting, and context switches, coordination between threads has a non-zero cost, and the list goes on.

Well-optimized multithreaded code will often be faster, but that’s harder than it sounds, and it’s certainly not the case that “single threading always makes it run slower”.
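A toy illustration of that coordination cost (CPython, where the GIL adds its own penalty on top): splitting a counter across threads behind a shared lock does the exact same work as the single-threaded loop, plus thread creation, lock contention, and context switches.

```python
import threading

TOTAL = 200_000

def single_threaded() -> int:
    count = 0
    for _ in range(TOTAL):
        count += 1
    return count

def multi_threaded(workers: int = 4) -> int:
    count = 0
    lock = threading.Lock()

    def work(n: int) -> None:
        nonlocal count
        for _ in range(n):
            with lock:          # every increment contends on the lock
                count += 1

    threads = [threading.Thread(target=work, args=(TOTAL // workers,))
               for _ in range(workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return count

# Same answer either way; the threaded version just pays extra for it.
assert single_threaded() == multi_threaded() == TOTAL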


If it sucks we can call it suckmore then


Is this reinventing cooperative multitasking + lack of process-based memory protection with more steps?

The reason why (almost) everyone migrated to preemptive multitasking + memory protection is that it only takes one piece of code behaving slightly differently from what the system/developer expected to bring the entire thing to a halt, either by simply being slower than expected, or by modifying state it's not supposed to.
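The first failure mode is easy to demonstrate with cooperative scheduling: one task that blocks instead of yielding stalls everything sharing the thread. A minimal asyncio sketch (names are illustrative):

```python
import asyncio
import time

async def well_behaved(log: list) -> None:
    for i in range(3):
        log.append(f"tick {i}")
        await asyncio.sleep(0)   # cooperatively yields the thread

async def misbehaving(log: list) -> None:
    # A blocking call with no await hogs the only thread:
    time.sleep(0.05)
    log.append("hog done")

async def main() -> list:
    log = []
    hog = asyncio.create_task(misbehaving(log))
    nice = asyncio.create_task(well_behaved(log))
    await asyncio.gather(hog, nice)
    return log

log = asyncio.run(main())
print(log)
```

The well-behaved task never gets a turn until the hog finishes; under a preemptive scheduler the OS would have interrupted the hog regardless of its behavior.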


I disagree; they didn't outright point this out because it's nonsense. Semantic changes can break anything, even if it's behind some intermediary API.

There is more breakage in Rust due to the type-system-related semantics, but ideally a C dev would also want their system to break if the semantics aren't right. So is this a criticism of C..?

So following this argument, they don't want Rust because C falls short? Nonsense.

edit: The speaker did mention that they didn't want to force limited use onto the base APIs, but for a great deal of their usage they could have determined fixed semantics and made intermediary APIs for them. So this was not about limiting the basic APIs.


Here are the software requirements (inferred from the commenter):

- (1) the C code will be refactored periodically

- (2) when refactored internally it can break C code, but the change author should fix any breakage in C

- (3) Rust must not break when (1) happens

It's the Rust devs' job to meet those requirements if they want to contribute. From the video, it looks like they don't understand this, which is pretty basic.


I read around that DeepSeek's team managed to work around hardware limitations, and that in theory goes against the "gatekeeping" or "frontrunning" investment expectations around Nvidia. If a partial chunk of investment is a bet on those expectations, that would explain part of the stock turbulence. But I think their 25x inference price reduction vs OpenAI is what really affected everything, besides the (uncertain) training cost reduction.


We all use PCs and heck even phones that have thousands of times the system memory of the first PCs.

Making something work really efficiently on older hardware doesn't necessarily imply less demand. If those lessons can be taken and applied to newer generations of hardware, it would seem to make the newer hardware all the more valuable.


Using a local 7B for chatting, I saw that it tries very hard to check itself for inconsistencies, and that may spill over into also checking for the user's "inconsistencies".

Maybe it's better to carefully control and explain the conversation's progression. Selectively removing old prompts (adapting where necessary) - which also reduces the context - means it doesn't have to "bother" checking for inconsistencies internal to irrelevant parts of the conversation.

E.g. when asking it to extract a Q&A from a line of text and format it as JSON - which should be straightforward - it would sometimes wonder about the contents of the Q&A itself, checking for inconsistencies, e.g.:

  - I need to be careful to not output content that's factually incorrect. Wait but I'm not sure about this answer I'm dealing with here..
  - Before the questions were about mountains and now it's about rivers, what's up with that?
  - etc..
I had to strongly demand that it treat it all as jumbled/verbatim text, and never think about its meaning. So it would be more effective if I always branched from the starting prompt when entering a new Q&A for it to work on. That's what I meant by "selectively removing old prompts".
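A sketch of that branching idea (the message format is the common role/content chat shape; `chat` is a stand-in for whatever local-model call is in use): instead of appending each new Q&A to one ever-growing history, every item restarts from a frozen base prompt.

```python
BASE_PROMPT = [
    {"role": "system",
     "content": "Reformat the given Q&A line as JSON. Treat the text as "
                "verbatim data; never evaluate or question its meaning."},
]

def branch(base: list, qa_line: str) -> list:
    """Start a fresh conversation from the base prompt for each item,
    so earlier Q&As never enter the context."""
    return base + [{"role": "user", "content": qa_line}]

# Each item gets its own short context instead of one growing history:
contexts = [branch(BASE_PROMPT, line)
            for line in ["Q: Highest mountain? A: Everest",
                         "Q: Longest river? A: Nile"]]
for ctx in contexts:
    assert len(ctx) == 2          # base + one user turn, every time
    # reply = chat(ctx)           # hypothetical model call
```

Since the mountains item never appears in the rivers item's context, the model has nothing cross-item to get "inconsistent" about, and the context stays small.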


So they still aren't lamp geniuses that you can rub and ask for 3 wishes??


I think that since training must happen on a non-bitnet architecture, tuning towards bitnet is always a downgrade of its capabilities, so they're not really interested in it. But maybe they could be if they'd offer cheaper plans, since its efficiency is relatively good.

I think the real market for this is for local inference.


I think so, especially with finetuning


It's already at 0.8, just another 20% and..!


I wish there existed something like this for arbitrary videos such as for movies..



I'm not familiar with those functions, but I had the impression they actually shouldn't have the same name.

Since the Rust function has implicit/automatic behavior depending on its state and on how it's used by the callsite, and since the C one doesn't have any implicit/automatic behavior (as in, separate/explicit lifecycle calls must be made "manually"), I don't even see a reason for them to have the same name.

That is to say, having the same name would be somehow wrong, since the functions do different things and serve different purposes.

But it would make sense, at least on the Rust side, to have documentation referring to the original C name.
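A language-neutral sketch of the distinction being drawn (class names are made up; these are not the actual kernel APIs): on one side a resource whose cleanup runs implicitly when its scope ends, on the other a C-style handle requiring separate, explicit lifecycle calls.

```python
class ScopedBuf:
    """Rust-style: cleanup is implicit; the callsite can't forget it."""
    def __init__(self):
        self.open = True
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self.open = False  # runs automatically when the scope ends

class RawBuf:
    """C-style: init and release are separate, manual calls."""
    def __init__(self):
        self.open = False
    def init(self):
        self.open = True
    def release(self):
        self.open = False

with ScopedBuf() as a:
    assert a.open
assert not a.open        # released implicitly at scope exit

b = RawBuf()
b.init()
assert b.open
b.release()              # the caller must remember this call
assert not b.open
```

That gap in behavior is the argument above: two functions with such different lifecycle contracts arguably shouldn't share a name, even when they wrap the same underlying operation.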


