
That's not what it was designed for; that's just a mixture of propaganda and confusion.

It was designed to solve the double-spending problem with digital currencies, replacing the need for an authoritative ledger with one that is difficult to forge.
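
To make "difficult to forge" concrete, here is a minimal sketch of hash chaining, assuming the `sha2` crate; the payloads are invented, and Bitcoin's real design adds blocks, Merkle trees and proof-of-work on top of this idea.

```rust
use sha2::{Digest, Sha256};

// Each entry commits to the hash of the previous one, so editing any
// past entry breaks every later link in the chain.
struct Entry {
    prev_hash: [u8; 32],
    payload: String,
}

fn hash_entry(e: &Entry) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(e.prev_hash);
    h.update(e.payload.as_bytes());
    h.finalize().into()
}

fn main() {
    let genesis = Entry { prev_hash: [0; 32], payload: "alice pays bob 1".into() };
    let next = Entry { prev_hash: hash_entry(&genesis), payload: "bob pays carol 1".into() };
    // Forging genesis.payload changes hash_entry(&genesis), so
    // next.prev_hash no longer matches: the forgery is detectable.
    println!("{:x?}", hash_entry(&next));
}
```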

The political project around this was to provide people with a deflationary currency akin to gold, whose inflation could not be controlled by government.

The lack of government control over the inflation of this particular currency, and the lack of an authoritative ledger, amount to an extremely minimal sense of currency protections (and freedoms). They have as much to do with anarchy as the internet has with porn.


It was designed to avoid the need for existing financial institutions. The double-spend problem was merely the blocker that had prevented people from otherwise doing it.

> A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.


> “The left,” Tabellini concludes, “has underestimated the fact that culture can matter more than income. But as long as it insists on talking only about inequality, without addressing the identity theme, it will continue to lose out among its own former constituency.”

That is an interesting conclusion, since it seems askew from the kind of left which has predominated recently. Though it may amount to an analysis of the modern left which says that it permits cultural identity to be the most salient political factor if you're one of its favoured "oppressed identities", whereas if you aren't, you have to be analysed in purely economic terms.

I'm not entirely convinced by the analysis. Is a politics of identity displacing traditional "economic politics" -- or is it that economic politics has become a matter of identity? Eg., consider that 20-year-olds today face a society economically designed to privilege certain identities through corporate affirmative action and the like. Policies which could only plausibly be morally targeted at the older generations, where salary gaps exist -- yet which have a disproportionate impact on younger groups with no such disparities (or the converse, as in the UK, where young women slightly out-earn young men).

And eg., do people feel immigration has created economic deprivation at home (and so on) -- is all this not just a displaced class analysis?

We might assume not because, by economic analysis, immigration cannot really explain the "economic concerns" which are attributed to it -- but do the public know this? Or is the failure of the welfare state, of popular government policy and regulation, really just misattributed to immigration (or to these "other" identities)? Is this just a matter of spurious correlation: as western birthrates plummet and governments become heavily indebted and unable to provide high-quality services, the public observe increased immigration?

This analysis fails to realise the degree to which these "identities" have become specialisations of economic classes by policy, law and accident.


People conflate the insanity of running a network cable through every application with the poor performance of their computers.

Correction: devs have made the mistake of turning everything into remote calls, without having any understanding as to the performance implications of doing so.

Sonos’ app is a perfect example of this. The old app controlled everything locally, since the speakers set up their own wireless mesh network. This worked fantastically well. Someone at Sonos got the bright idea to completely rewrite the app such that it wasn’t even backwards-compatible with older hardware, and every action is now a remote call. Changing volume? Phone —> Router —> WAN —> Cloud —> Router —> Speakers. Just… WHY. This failed so spectacularly that the CEO responsible stepped down / was forced out, and the new one claims that fixing the app is his top priority. We’ll see.


Presumably they wanted the telemetry. It's not clear that this was a dev-initiated switch.

Perhaps we can blame the 'statistical monetization' policies of adtech and then AI for all this -- I'm not entirely sold on blaming developers.

What, after all, is the difference between a set of loopback records in `/etc/hosts` vs. an ISP's DNS -- as far as the software goes?
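
To illustrate (the hostname here is hypothetical): a program resolving a name through the standard library sees only the final answer, never its source.

```rust
use std::net::ToSocketAddrs;

fn main() -> std::io::Result<()> {
    // The system resolver consults /etc/hosts before DNS (per
    // nsswitch.conf); the program cannot tell which one answered.
    for addr in "telemetry.example.com:443".to_socket_addrs()? {
        println!("{addr}");
    }
    Ok(())
}
```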


> Presumably they wanted the telemetry

Why not log them to a file and cron a script to upload the data? Even if the feature request is nonsensical, you can architect a solution that respects the platform's constraints. It's kinda like when people drag in React and Next.js just to have a static website.
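
Something like this minimal sketch, say (the path and event format are invented for illustration), with a cron'd uploader shipping the file in batches:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::time::{SystemTime, UNIX_EPOCH};

// Append one line per event to a local file; a separate scheduled job
// can upload and truncate it later, entirely off the hot path.
fn log_event(event: &str) -> std::io::Result<()> {
    let ts = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    let mut f = OpenOptions::new()
        .create(true)
        .append(true)
        .open("/var/tmp/app-telemetry.log")?;
    writeln!(f, "{ts}\t{event}")
}

fn main() -> std::io::Result<()> {
    log_event("volume_changed:42")
}
```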


Someone out there now has a cool résumé line item about doing real-time cloud microservices on the edge.

You’re right, and I shouldn’t necessarily blame devs for the idea, though I do blame their CTO for not standing up to it if nothing else.

Though it’s also unclear to me in this particular case why they couldn’t collect commands being issued, and then batch-send them hourly, daily, etc. instead of having each one route through the cloud.


We (probably) can guess the why - tracking and data opportunities which companies can eventually sell or utilize for profit in some way.

The region where `p` hits the red line should be called "publish or perish".

A counter-point, in a certain sense: when the conclusions of scientific papers (in these softer science fields) contradict common sense, they tend to be unreproducible; the ones which don't, are.

The problem with studying humans is, roughly, that the central limit theorem doesn't work: properties of biological and social systems do not have well-behaved (finite-variance, weakly dependent) statistics. So all this t-test pseudoscience can be a great misdirection, and common sense more reliable.

In the case where effect sizes are small and the data-generating process "chaotic", assumptions of the opposite can be more dangerous than giving up on science and adopting "circumstantial humility". (Consider eg., that common sense is very weakly correlated across its practitioners, but "science" forces often-pathological correlations on how people are treated -- which can significantly magnify the harm.)
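
A quick simulation of the point, assuming the `rand` and `rand_distr` crates: with Gaussian data the sample mean settles as n grows; with Cauchy (heavy-tailed) data it never does, and the usual t-test machinery has nothing to grip.

```rust
use rand::thread_rng;
use rand_distr::{Cauchy, Distribution, Normal};

// Mean of n draws from a distribution.
fn mean_of<D: Distribution<f64>>(d: &D, n: usize) -> f64 {
    let mut rng = thread_rng();
    (0..n).map(|_| d.sample(&mut rng)).sum::<f64>() / n as f64
}

fn main() {
    let normal = Normal::new(0.0, 1.0).unwrap();
    let cauchy = Cauchy::new(0.0, 1.0).unwrap(); // no finite mean or variance
    for &n in &[100, 10_000, 1_000_000] {
        // The normal column converges towards 0; the cauchy column keeps
        // jumping around arbitrarily, however large n gets.
        println!(
            "n={n:>9}  normal={:+.4}  cauchy={:+.4}",
            mean_of(&normal, n),
            mean_of(&cauchy, n)
        );
    }
}
```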


> when the conclusions of scientific papers (in these softer science fields) contradict common sense, they tend to be unreproducible; the ones which don't, are.

Citation needed?

I don't know what would lead to that conclusion. And it would seem to run counter to the entire history of the field of psychology, for example.


Can't find the citation, but I remember gwern mentioning a study in one of his posts on replication that found that unintuitive findings tend to be both less replicable and more cited than intuitive ones.

Psychology is the field hardest hit by replication failures, and it has a slew of unintuitive results that turn out to be malpractice.


Psychology is also the field with a slew of unintuitive results that have been repeatedly replicated as correct. And what is "intuitive" anyways? What was extremely non-intuitive a century ago is common sense today.

So that's why I question the assertion. You're right that there are tons of replication failures, but whether intuition correlates with replicability doesn't seem relevant. Especially when the point of so much research is to look for currently "non-intuitive" things, so of course that's where more replication issues might exist. It doesn't mean you should stop researching in that direction.


I fear the lack of our ability to measure your mind might render you without many of the legal or moral protections you imagine you have. But go ahead, tear down the law to whatever inanity can be described by the trivial machines of the world's current popular charlatans. Presumably you weren't using society's presumption of your agency anyway.

> I fear the lack of our ability to measure your mind might render you without many of the legal or moral protections you imagine you have.

Society doesn't need to measure my mind; it needs to measure the output of it. If I behave like a conscious being, I am a conscious being. Alternatively you might phrase it such that "Anything that claims to be conscious must be assumed to be conscious."

It's the only answer to the p-zombie problem that makes sense. None of this is new, philosophers have been debating it for ages. See: https://en.wikipedia.org/wiki/Philosophical_zombie

However, for copyright purposes we can make it even simpler. If the work is new, it's not covered by the original copyright. If it is substantially the same, it is. Forget the arguments about the ghost in the machine and the philosophical mumbo-jumbo. It's the output that matters.


In your case, it isn't the output that matters. Your saying "I'm conscious" isn't why we attribute consciousness to you. We would do so regardless of your ability to verbalise anything in particular.

Your radical behaviourism seems an advantage to you when you want to delete one disfavoured part of copyright law, but I assure you, it isn't in your interest. It doesn't universalise well at all. You do not want to be defined by how you happen to verbalise anything, unmoored from your intentions, goals, and so on.

The law, and society, imparts much to you that is never measured and much that is unmeasurable. What can be measured is, at least, extremely ambiguous with respect to those mental states which are being attributed. We do not attribute mental states by what people say -- speech plays very little role (consider what a mess this would make of watching movies), and none, of course, in the large number of animals which share relevant mental states.

Nothing of relevance is measured by an LLM's output. It is highly unambiguous: the LLM has no mental states, and thus is irrelevant to the law, morality, society and everything else.

It's an obscene sort of self-injury to assume that whatever kind of radical behaviourism is necessary to hype the LLM is the right sort. Hype for LLMs does not lead to a credible theory of minds.


> We would do so regardless of your ability to verbalise anything in particular

I don't mean to say that they literally have to speak the words by using their meat to make the air vibrate. Just that, presuming it has some physical means, it be capable (and willing) to express it in some way.

> It's an obscene sort of self-injury to assume that whatever kind of radical behaviourism is necessary to hype the LLM is the right sort.

I appreciate why you might feel that way. However, I feel it's far worse to pretend we have some undetectable magic within us that allows us to perceive the "realness" of other people's consciousness by other than physical means.

Fundamentally, you seem to be arguing that something with outputs identical to a human is not human (or even human-like), and should not be viewed within the same framework. Do you see how dangerous an idea that is? It is only a short hop from "Humans are different than robots, because of subjective magic" to "Humans are different than <insert race you don't like>, because of subjective magic."


AI has always been a marketing term for computer science research -- right from the inception. It's a sin of academia, not the public.

As if anyone in the public cared about marketing for CS research. Hardly anyone is even exposed to it.

AI in the public mind comes from science fiction, and it means the same thing it meant for the past 5+ decades: a machine that presents recognizable characteristics of a thinking person - some story-specific combination of being as smart as (or much smarter than) people in a broad (if limited) set of domains and activities, and having the ability (or at least giving the impression of it) to autonomously set goals based on its own value system.

That is the "AI" the general population experiences - a sci-fi trope, not tech industry marketing.


The sci-fi AI boom of the '60s followed the AI research boom. This was the original academic hype cycle, and one which still scars the public mind via that sci-fi.

Rust, in many ways, is a terrible first systems programming language.

To program a system is to engage with how the real devices of a computer work, and very little of their operation is exposed via Rust, or even can be. The space of all possible valid/safe Rust programs is tiny compared to the space of all useful machine behaviours.

The world of "safe Rust" is a very distorted image of the real machine.


> Rust, in many ways, is a terrible first systems programming language.

Contrariwise, Rust is, in many ways, an awesome first systems programming language, because it tells you about, and forces you to consider, all the issues upfront.

For instance in 59nadir's example, what if the vector is a vector of heap-allocated objects, and the loop frees them? In Rust this makes essentially no difference, because at iteration you tell the compiler whether the vector is borrowed or moved and the rest of the lifecycle falls out of that regardless of what's in the vector: with a borrowing iteration, you simply could not free the contents. The vector generally works and is used the same whether its contents are copiable or not.
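
A minimal sketch of that borrow/move distinction:

```rust
fn main() {
    let v = vec![String::from("a"), String::from("b")];

    for s in &v {
        // Borrowing iteration: the vector is intact afterwards.
        println!("{s}");
    }
    println!("still have {} items", v.len());

    for s in v {
        // Moving iteration: each element is owned here, so freeing is
        // fine, but `v` is consumed by the loop.
        drop(s);
    }
    // println!("{}", v.len()); // error[E0382]: borrow of moved value: `v`
}
```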


A lot of idiomatic systems code is intrinsically memory unsafe. The hardware owns direct references to objects in your address space and completely disregards the ownership semantics of your programming language. It is the same reason immediately destroying moved-from objects can be problematic: it isn’t sufficient to statically verify that the code no longer references that memory. Hardware can and sometimes does hold references to moved-from objects such that deferred destruction is required for correctness.

How is someone supposed to learn idiomatic systems programming in a language that struggles to express basic elements of systems programming? Having no GC is necessary but not sufficient to be a usable systems language but it feels like some in the Rust community are tacitly defining it that way. Being a systems programmer means being comfortable with handling ambiguous object ownership and lifetimes. Some performance and scalability engineering essentially requires this, regardless of the language you use.
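
A toy illustration of that hazard, with a fake "device" standing in for real hardware (everything here is invented for illustration):

```rust
// The "device" holds a raw pointer the borrow checker cannot see, so the
// buffer must stay alive until completion is signalled, whatever the
// language's ownership rules would otherwise permit.
struct FakeDmaEngine {
    in_flight: Option<*const u8>,
}

impl FakeDmaEngine {
    fn start(&mut self, buf: &[u8]) {
        // A real engine would be programmed with the buffer's address.
        self.in_flight = Some(buf.as_ptr());
    }
    fn complete(&mut self) {
        // Only after this may the buffer safely be freed or reused.
        self.in_flight = None;
    }
}

fn main() {
    let mut dma = FakeDmaEngine { in_flight: None };
    let buf = vec![0u8; 4096];
    dma.start(&buf);
    // drop(buf) here would be a use-after-free from the device's point of
    // view, even though it is perfectly legal Rust at this point.
    dma.complete();
    drop(buf); // now genuinely safe
}
```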


None of these "issues" are systems issues; they're memory-safety issues. If you think systems programming is about memory safety, then you're demonstrating the problem.

Eg., some drivers cannot be memory safe, because memory is arranged outside of the driver to be picked up "at the right time, in the right place" and so on.

Statically provable memory safety is, ironically, quite a bad property for a systems programming language to have, as it prevents actually controlling the devices of the machine. This is, of course, why Rust has "unsafe", and why anything actually systems-level is going to have a fair amount of it.

The operation of machine devices isn't memory safe -- memory safety is a static property of a program's source code, one that prevents describing the full behaviour of devices correctly.


Water is wet.

Yes, touching hardware directly often requires memory unsafety. Rust allows that, but encourages you to come up with an abstraction that can be used safely and thereby minimize the amount of surface area which has to do unsafe things. You still have to manually assert / verify the correctness of that wrapper, obviously.
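
For example, a minimal sketch of that wrapper pattern (the register address and pin layout are invented):

```rust
use core::ptr::write_volatile;

// Hypothetical memory-mapped GPIO output register.
const GPIO_OUT: *mut u32 = 0x4000_0000 as *mut u32;

struct Gpio;

impl Gpio {
    /// Safe API: callers cannot supply a stray address or an out-of-range pin.
    fn set_pin(&mut self, pin: u8) {
        assert!(pin < 32, "this block has 32 pins");
        // SAFETY: assumes GPIO_OUT is a valid, mapped MMIO register that
        // this driver has exclusive access to; the unsafe surface is this
        // one line, and the rest of the program never touches it directly.
        unsafe { write_volatile(GPIO_OUT, 1u32 << pin) };
    }
}
```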

> This is, of course, why rust has "unsafe" and why anything actually systems-level is going to have a fair amount of it.

There are entire kernels written in Rust with less than 10% unsafe code. The standard library is less than 3% unsafe, last I checked. People overestimate how much "unsafe" is actually required and therefore they underestimate how much value Rust provides. Minimizing the amount of code doing unsafe things is good practice no matter what programming language you use, Rust just pushes hard in that direction.


> For instance in 59nadir's example, what if the vector is a vector of heap-allocated objects, and the loop frees them?

But the loop doesn't free them. This is trivial for us to see and honestly shouldn't be difficult for Rust to figure out either. Once you've adopted overwrought tools they should be designed to handle these types of issues, otherwise you're just shuffling an esoteric burden onto the user in a shape that doesn't match the code that was written.

With less complicated languages we take on the more general burden of making sure things make sense (pinky-promise, etc.), and that is a burden we've signed up for, so we take care in the places that have actually been identified; but those places need to be found manually, and that's the tradeoff. The argument I'm making is that Rust really ought to be smarter about this: there is no real reason it shouldn't be able to understand what the loop does and treat the iteration portion accordingly. But it's difficult to make overcomplicated things smart, because they are exactly that.

I doubt that most Rust users, once you actually ask them, feel this lack of basic introspection into what is happening in the loop makes sense; I'd bet money most of them feel that Rust ought to understand the loop. (Though in reading these posts I realize that there are actual humans who don't seem to understand the issue either, when it's as simple as reading the code in front of them and taking into account what it does.)


> But the loop doesn't free them.

What if it did free them in a function you don't directly control?


> forces you to consider all the issues upfront.

Ever wonder why we do not train pilots in 737s as their first planes? Plenty of complex issues do NOT, in fact, need to be considered upfront.


YMMV, naturally, but I've found that some embedded devices have really excellent hardware abstraction layers in Rust that wrap the majority of the device's functionality in an effectively zero-overhead layer. Timers? GPIO? Serial protocols? Interrupts? It's all there (a generic sketch follows the links below).

- https://docs.rs/atsamd-hal/

- https://docs.rs/rp2040-hal/
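
Code written against the shared embedded-hal traits stays portable across such HALs. A hedged sketch, assuming embedded-hal 1.x; the concrete pin type comes from whichever chip's HAL you use:

```rust
use embedded_hal::digital::OutputPin;

// Works with any HAL pin implementing the trait (atsamd-hal, rp2040-hal, ...).
fn blink_once<P: OutputPin>(led: &mut P) -> Result<(), P::Error> {
    led.set_high()?;
    // ... a delay via the HAL's timer would go here ...
    led.set_low()
}
```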


That systems languages have to establish (1) memory safety, (2) statically, (3) via the highly specific kind of type system given in Rust, and (4) with limited inference -- suggests a lack of imagination.

The space of all possible robust systems languages is vastly larger than Rust.

Its specific choices force confronting the need to statically prove memory safety via a cumbersome type system very early -- this is not a divine command upon language design.


> The space of all possible robust systems languages is vastly larger than Rust.

The space of all possible CVEs is also vastly larger outside of Rust.

My biggest takeaway from Rust isn't that it's a better C++, but that it's an extremely fast (no runtime or GC limiting it) and less footgunny Java.


Sure, Rust is not the final answer to eliminating memory-safety bugs from systems programming. But what are the alternatives that aren't even more onerous and/or limited in scope (ATS, Frama-C, proofs)?

My preference is to have better dynamic models of devices (eg., how many memory cells does RAM have, how do these work dynamically, etc.) and line them up with well-defined input/output boundaries of programs. Kinda "better fuzzing".

I mean, can we run a program in a well-defined "debugging operating system" in a VM, with simulated "debugging devices" and so on?

I don't know much about that idea, and the degree to which that vision is viable. However, it's increasingly how the most robust software is tested -- by "massive-scale simulation". My guess is it isn't a major part of, say, academic study because it's building tools over years rather than writing one-off papers over months.

However, if we had this "debuggable device environment", I'd say it'd be vastly more powerful than Rust's static guarantees and would allow a kind of "fearless systems programming" without each loop becoming a sudoku puzzle.
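
A toy version of one such "simulated debugging device", entirely illustrative: RAM modelled as an object that catches out-of-bounds accesses and reads of never-written cells dynamically rather than statically.

```rust
// Each cell is None until written, so the simulator can detect reads of
// uninitialised memory as well as out-of-bounds accesses.
struct SimRam {
    cells: Vec<Option<u8>>,
}

impl SimRam {
    fn new(size: usize) -> Self {
        SimRam { cells: vec![None; size] }
    }
    fn write(&mut self, addr: usize, val: u8) {
        *self.cells.get_mut(addr).expect("write out of bounds") = Some(val);
    }
    fn read(&self, addr: usize) -> u8 {
        self.cells
            .get(addr)
            .copied()
            .expect("read out of bounds")
            .expect("read of uninitialised memory")
    }
}

fn main() {
    let mut ram = SimRam::new(1024);
    ram.write(16, 0xAB);
    assert_eq!(ram.read(16), 0xAB);
    // ram.read(17); // would panic: read of uninitialised memory
}
```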


The comparison is to use a physics library. Only in the LLM case are you trying to write the physics engine yourself. And if it's not the kind of physics that's in a library, yes, you will need to learn it to ship a game.
