naasking's comments | Hacker News

> It used to be that the US stood out because it took the law seriously and believed in its ideals to do the right thing

You're in a bubble.


> All of it using natural language.

Combining this with those approaches that recursively reason in latent space would be interesting.


I think this can be fine if the header provides a clean abstraction with well-defined behaviour in C, effectively an EDSL. For an extreme example, where C starts looking like a high-level language, see:

https://www.libcello.org/
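
To make that concrete, here's a rough sketch of the idea (my own illustration, not libcello's actual API): a header of macros presenting clean, well-defined constructs so that user code reads almost like a higher-level language.

    /* Hedged sketch (not libcello's actual API): a header of macros
       can present a clean, well-behaved abstraction so user code
       reads almost like a higher-level language. */
    #include <stdio.h>

    #define foreach(i, from, to)  for (int i = (from); i < (to); ++i)
    #define unless(cond)          if (!(cond))

    int main(void) {
        foreach(i, 0, 5)
            unless(i % 2)
                printf("%d is even\n", i);
        return 0;
    }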


> abusing the preprocessor this much shouldn't be necessary and the result is almost unreadable.

The readability of error messages was always my beef with macro-based generics. Maybe 15 years ago I played around with using Unicode characters for the brackets and type separators when translating a generic type into a name-mangled one, so that at least the generated source and error messages would still be clear and understandable.

Robust Unicode support for identifiers and the preprocessor was still inconsistent across compilers, though, so it only sort of worked. I expect this has improved considerably by now, so maybe I should try again and write something up. You can embed a surprising amount of advanced programming language theory into suitable name resolution.
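
A minimal sketch of the mangling idea (the Unicode-bracket variant is the part that depended on compiler support; the portable fallback below just uses underscores):

    /* Sketch of macro-based generics via name mangling. The Unicode
       experiment replaced the underscores with bracket-like characters
       (rendering roughly as List<int> in diagnostics) so errors stay
       readable; compiler support for such identifier characters varies,
       so this portable version mangles with plain underscores. */
    #include <stddef.h>

    #define LIST(T)  List_##T      /* mangled name: List_int, List_double */

    #define DECLARE_LIST(T)        \
        typedef struct {           \
            T     *items;          \
            size_t len, cap;       \
        } LIST(T);

    DECLARE_LIST(int)     /* defines List_int    */
    DECLARE_LIST(double)  /* defines List_double */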


> To simplify it greatly, an LLM neuron is a single input single output function. A human brain neuron takes in thousands of inputs and produces thousands of outputs

This is simply a scaling problem; e.g., thousands of single-I/O functions can reproduce the behaviour of a function that takes thousands of inputs and produces thousands of outputs.
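
As a toy illustration of the decomposition: even a two-input function like multiplication can be rebuilt from single-input functions plus addition, in the spirit of the Kolmogorov-Arnold superposition theorem.

    /* Toy illustration: multiplication (two inputs) rebuilt from
       single-input functions (log, exp) plus addition. Valid for
       x, y > 0, since x * y = exp(log x + log y). */
    #include <math.h>
    #include <stdio.h>

    static double mul_via_unary(double x, double y) {
        return exp(log(x) + log(y));
    }

    int main(void) {
        printf("%g\n", mul_via_unary(3.0, 4.0));  /* prints 12 */
        return 0;
    }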

Edit: As for the rest of your argument, it's not so clear-cut. An LLM can produce a complete essay in a fraction of the time it would take a human. So yes, a human brain only consumes about 20 W, but it might take a week to produce the same essay the LLM can produce in a few seconds.

Also, LLMs can process multiple prompts in parallel and share resources across those prompts, so again, the energy use is not directly comparable in the way you've portrayed.


> This is simply a scaling problem; e.g., thousands of single-I/O functions can reproduce the behaviour of a function that takes thousands of inputs and produces thousands of outputs.

I think it's more than just scaling: you need to understand the functional details to reproduce those functions (assuming those functions are valuable for the end result, as opposed to just being the way it had to be done given the medium).

An interesting example of this neuron complexity was published recently:

As rats/mice (can't remember which) are exposed to new stimuli, the axon terminals of a single neuron do not all transmit a signal when there is an action potential; they transmit in a changing pattern after each action potential, ultimately settling into a more consistent pattern of some transmitting and some not.

IMHO: there are interesting mathematical models and transformations at work in the brain that are the secret sauce of our intelligence, and they have yet to be figured out. It's not just scaling of LLMs; it's finding the right functions.


Yes, there may be interesting math, but I didn't mean "scaling LLMs", necessarily. I was making a more general point: single-I/O functions can pretty trivially replicate a multi-I/O function, so the OP's point that "LLM neurons" are single-I/O while biological neurons are multi-I/O doesn't mean much. Estimates of brain complexity have already factored this in, which is why we know we're still a few orders of magnitude away from the number of parameters needed for a human brain in a raw compute sense.

However, the human brain has extra parameters that a pure/distilled general intelligence may not actually need, e.g. emotions, some types of perception, balance, and modulation of various biological processes. It's not clear how many of the human brain's parameters these take up, so maybe we're not as far away as we think.

And there are alternative models such as spiking neural networks which more closely mimic biology, but it's not clear whether these are really that critical. I think general intelligence will likely have multiple models which achieve similar results, just like there are multiple ways to sort a set of numbers.


> Yes, there may be interesting math, but I didn't mean "scaling LLMs", necessarily.

Yeah, I realized later that the LLM-scaling part of my post sounded like a misinterpretation of what you said, when it was really a separate point, unrelated to the topic of neurons, that just happened to also include the word "scaling".

I do agree with you somewhat: biological neurons being vastly more complex and capable than typical artificial neurons may just mean we need more artificial neurons to achieve similar functionality.

> Estimates of brain complexity have already factored this in

I don't agree with this: the estimates I've seen don't seem to factor it in, and many of them predate discoveries from just the last five years that expose significantly more complexity and capability that needs to be understood first.

> And there are alternative models such as spiking neural networks which more closely mimic biology, but it's not clear whether these are really that critical.

I kept reading that people wanted to use spiking networks, and I thought the same thing as you: they didn't seem to provide a benefit. A while ago I read a paper about why researchers want to use spiking networks; I can't remember the details, but they described some functional capabilities that really were much easier with spiking. I vaguely recall it had to do with processing real-time sensory information: it was easier to synchronize signals based on frequency than to rely on precise single-spike timing (something like that). And I think there were benefits in other areas as well.
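
For a concrete sense of what "spiking" means, here's a minimal leaky integrate-and-fire neuron, one common spiking model (parameters are illustrative, not from that paper):

    /* Minimal leaky integrate-and-fire (LIF) neuron, a common spiking
       model. Parameters are illustrative, not from any particular paper. */
    #include <stdio.h>

    int main(void) {
        double v = 0.0;               /* membrane potential */
        const double tau = 20.0;      /* leak time constant (ms) */
        const double v_thresh = 1.0;  /* spike threshold */
        const double input = 0.08;    /* constant input current */
        const double dt = 1.0;        /* timestep (ms) */

        for (int t = 0; t < 100; t++) {
            v += dt * (-v / tau + input);  /* leaky integration */
            if (v >= v_thresh) {           /* emit a spike and reset */
                printf("spike at t = %d ms\n", t);
                v = 0.0;
            }
        }
        return 0;
    }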


I agree with both of you, but scaling alone isn't feasible. You could need continent-sized hardware to approximate general intelligence with the current paradigm.

> You could need continent-sized hardware to approximate general intelligence with the current paradigm.

I doubt it, if by "current paradigm" you mean the hardware and general execution model, e.g. matrix math. Model improvements from algorithmic progress have been outpacing performance improvements from hardware progress for decades. Even if hardware development stopped today, models would continue improving exponentially.


Because your argument is more persuasive to more people if you don't expand your criticism to encompass things that are already normalized. Focus on the unique harms IMO.

> Italy has seen tremendous success with privatizing High Speed rail.

I think whether privatization helps or hurts depends a lot on how corrupt or inept the government is, i.e. the more corrupt or inept the government, the more privatization can probably help.


MudBlazor is decent.

.NET needs runtime code generation for some of its core features, like generics. Bytecode makes this much easier.

I agree that .NET uses bytecode and likely cannot practically remove it outside of narrow cases. My argument is that if .NET were a greenfield project today, it likely would not use bytecode.

Sure, with quality libraries like "is-even", who can deny the supremacy of the JS ecosystem?

https://www.npmjs.com/package/is-even


Wow, thanks, a perfect package, just what I was looking for. I needed something to determine whether values are even.
