moring's comments | Hacker News

> The first thing in the decision tree is "do you send a crew?" and you're trading off the hard problem of teleoperating the thing (...)

Tele-operating a whole facility (and more) is something you can make huge improvements on down here on Earth, before you ever go to space, and even make a profit from it.


Two things that come to my mind:

1. Sometimes "lock-free" actually means using lower-level primitives that use locks internally but don't expose them, and that carry fewer caveats than locking at a higher level. For example, the compare-and-set instructions offered by CPUs may use bus locks internally but don't expose them to software (see the sketch after point 2).

2. Depending on the lower-level implementation, a simple lock may not be enough. For example, in a multi-CPU system with weaker cache coherency, a simple lock will not get rid of outdated copies of data (in caches, queues, ...). I write "simple" lock because some locking constructs, such as Java's "synchronized" statement, bundle the actual lock together with guaranteed cache synchronization, whether that happens in hardware or software.
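
A minimal Java sketch of both points (the class names are just made up for illustration): the first counter is built on compare-and-set and never exposes a lock to its caller, while the second relies on synchronized, which bundles mutual exclusion with the memory-visibility guarantee.

    import java.util.concurrent.atomic.AtomicInteger;

    // Point 1: a "lock-free" counter built on compare-and-set. The CPU may take
    // a bus or cache-line lock internally for the CAS, but no lock is visible here.
    class CasCounter {
        private final AtomicInteger value = new AtomicInteger();

        int increment() {
            while (true) {
                int current = value.get();
                // Succeeds only if no other thread raced us; otherwise retry.
                if (value.compareAndSet(current, current + 1)) {
                    return current + 1;
                }
            }
        }
    }

    // Point 2: Java's synchronized bundles the lock with a happens-before
    // guarantee, so other threads are guaranteed to see the updated value.
    class SynchronizedCounter {
        private int value;

        synchronized int increment() {
            return ++value;
        }
    }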


Reminder that lock-free is a term of art with a very specific meaning about starvation-freedom and progress, and has very little to do with locking.


I understood the criticism to be about describing it as open source when it isn't, i.e. that

"We say it's open source because we expect the reader to know that we're not telling the truth"

should be replaced by

"It's open source except for the BLE firmware blob, which can't be open source due to regulatory reasons."

To be fair, the article just repeated the claims made on the GitHub page for the SDK.


Some rambling...

I always wonder if something like these undocumented opcodes could be used as a concept in more modern processors. Back then, transistors were a precious resource, and those opcodes were the result. Nowadays, instruction encoding space is more precious because of pressure on the instruction cache. Decoding performance might also be relevant.

The result of these thoughts is something I called "PISC", a programmable instruction set computer, which basically means an unchanged back-end (something like RISC + vector) with a programmable decoder in front of it. Different pieces of code could then use different encodings, each optimized for its own case.

...which you almost get in RISC with subroutines + an instruction cache, if you regard the CALL instructions as "encoded custom instructions"; but not quite, because CALLs waste a lot of bits and you need additional instructions to pass arguments.

For pure RISC, all of this would at best take some pressure off the instruction cache, so it's probably not worth it. It might be more interesting for VLIW back-ends.


ARM has the Thumb opcodes, which aren't your "PISC" concept (that is a limited form of loadable microcode, another thing that's been done) but special, shorter encodings of a subset of ARM opcodes, which the CPU recognizes after an instruction flips an internal state bit. There's also Thumb-2, which mixes short (16-bit) and full-size (32-bit) opcodes to fix some performance problems of the original Thumb concept:

https://developer.arm.com/documentation/dui0473/m/overview-o...

> ARMv4T and later define a 16-bit instruction set called Thumb. Most of the functionality of the 32-bit ARM instruction set is available, but some operations require more instructions. The Thumb instruction set provides better code density, at the expense of performance.

> ARMv6T2 introduces Thumb-2 technology. This is a major enhancement to the Thumb instruction set by providing 32-bit Thumb instructions. The 32-bit and 16-bit Thumb instructions together provide almost exactly the same functionality as the ARM instruction set. This version of the Thumb instruction set achieves the high performance of ARM code along with the benefits of better code density.

So, in summary:

https://developer.arm.com/documentation/ddi0210/c/CACBCAAE

> The Thumb instruction set is a subset of the most commonly used 32-bit ARM instructions. Thumb instructions are each 16 bits long, and have a corresponding 32-bit ARM instruction that has the same effect on the processor model. Thumb instructions operate with the standard ARM register configuration, allowing excellent interoperability between ARM and Thumb states.

> On execution, 16-bit Thumb instructions are transparently decompressed to full 32-bit ARM instructions in real time, without performance loss.

For an example of how loadable microcode worked in practice, look up the Three Rivers PERQ:

https://en.wikipedia.org/wiki/PERQ

> The name "PERQ" was chosen both as an acronym of "Pascal Engine that Runs Quicker," and to evoke the word perquisite commonly called a perk, that is an additional employee benefit


And for a description of how to build a computer with loadable microcode, you could start with Mick and Brick [1] (PDF). It describes using AMD 2900 series [2] components; the main alternative at the time was to use the TI 74181 ALU [3] and build your own microcode engine.

[1] https://www.mirrorservice.org/sites/www.bitsavers.org/compon...

[2] https://en.wikipedia.org/wiki/AMD_Am2900

[3] https://en.wikipedia.org/wiki/74181


> On execution, 16-bit Thumb instructions are transparently decompressed to full 32-bit ARM instructions in real time, without performance loss.

That quote is from the ARM7TDMI manual - the CPU used in the Game Boy Advance, for example. I believe later processors contained entirely separate ARM and Thumb decoders.


Sounds a lot like what Transmeta was doing.


> because the point of the site is not to give you a personalized answer, but to build a reference where the questions are useful to everyone

This is a strawman. Marking two different questions as duplicates of each other has nothing to do with a personalized answer, and answering both would absolutely be useful to everyone because a subset of visitors will look for answers to one question, and another subset will be looking for answers to the other question.

To emphasize the difference: Personalized answers would be about having a single question and giving different answers to different audiences. This is not at all the same as having two different _questions_.


> This is a strawman. Marking two different questions as duplicates of each other has nothing to do with a personalized answer, and answering both would absolutely be useful to everyone because a subset of visitors will look for answers to one question, and another subset will be looking for answers to the other question.

What you're missing: when a question is closed as a duplicate, the link to the duplicate target is automatically put at the top; furthermore, if there are no answers to the current question, logged-out users are automatically redirected to the target.

The goal of closing duplicates promptly is to prevent them from being answered and enable that redirect. As a result, people who search for the question and find a duplicate, actually find the target instead.

It's important here to keep in mind that the site's own search doesn't work very well, and external search doesn't understand the site's voting system. It happens all the time that poorly asked, hard-to-understand versions of a question nevertheless accidentally have better SEO. I know this because of years of experience trying to use external search to find a duplicate target for the N+1th iteration of the same basic question.

It is, in the common case, about personalized answers when people reject duplicates - because objectively the answers on the target answer their question and the OP is generally either refusing to accept this fact, refusing to accept that closing duplicates is part of our policy, or else is struggling to connect the answer to the question because of a failure to do the expected investigative work first (https://meta.stackoverflow.com/questions/261592).


> The goal of closing duplicates promptly is to prevent them from being answered and enable that redirect. As a result, people who search for the question and find a duplicate, actually find the target instead.

Why would you want to prevent answers to a question, just because another unrelated question exists? Remember that the whole thread is not about actual duplicates, but about unrelated questions falsely marked as duplicates.

> ... because objectively the answers on the target answer their question ...

> ... because of a failure to do the expected investigative work first ...

Almost everybody describing their experience with duplicates in this comment section tells the same story: they found the other question, linked it from their supposedly-duplicate question, and explained why the answers to that other question do NOT answer their own question.

The expected investigative work HAS been done; they explained why the other question is NOT a duplicate. The key point is that all of this has been ignored by the person closing the question.


> Why would you want to prevent answers to a question, just because another unrelated question exists? Remember that the whole thread is not about actual duplicates, but about unrelated questions falsely marked as duplicates.

Here, for reference, is the entire sentence which kicked off the subthread where you objected to what I was saying:

> It is without merit ~90% of the time. The simple fact is that the "nuance" seen by the person asking the question is just not relevant to us, because the point of the site is not to give you a personalized answer, but to build a reference where the questions are useful to everyone.

In other words: I am defending "preventing answers to the question" for the exact reason that it probably actually really is a duplicate, according to how we view duplicates. As a reminder, this is in terms of what future users of the site will find the most useful. It is not simply in terms of what the question author thinks.

And in my years-long experience seeing appeals, in a large majority of cases it really is a duplicate; it really is clearly a duplicate; and the only apparent reason the OP is objecting is because it takes additional effort to adapt the answers to the exact situation motivating the original question. And I absolutely have seen this sort of "effort" boil down to things like a need to rename the variables instead of just literally copying and pasting the code. Quite often.

> Almost everybody describing their experience with duplicates in this comment section tells the same story: they found the other question, linked it from their supposedly-duplicate question, and explained why the answers to that other question do NOT answer their own question.

No, they do not. They describe the experience of believing that the other question is different. They don't even mention the answers on the other question. And there is nowhere near enough detail in the description to evaluate the reasoning out of context.

This is, as I described in other comments, why there is a meta site.

And this is HN. The average result elsewhere on the Internet has been worse.


> "It's not a good idea for me to be trying to kill demons while I'm driving,"

Driving was the first thing that came to my mind. It's also dangerous to drive when tired, distracted, drunk, or under one of a hundred other conditions. Yet somehow GPT is portrayed as the problem here when, in fact, driving a car is simply one of the most dangerous daily activities, absurdly dangerous even compared to other tasks.


The garbage that dominates the web has everything to do with centralization of power, and nothing to do with HTML vs JS. The former is a people problem and the latter is just tech.


The rest of the world has decided that the web is for applications at least as much as it is for documents.


You have now stated that "those people should feel bad" for the second time. Personal attacks will hardly bring any change into this world. I'd instead suggest that you propose actual ways to solve the problems that JavaScript-based SPAs have solved but that the non-JS web is still stuck with.


Just to add a point of precision: personal attacks won't generally improve the world overall. But they do generally make the world a less pleasant place to live in.


This will prevent re-submission but still be confusing for the user. Why even allow submitting when navigating back? If the order has been submitted already, the submit button should be greyed out with a message saying that the order has been submitted already.

The original task was to do this without JS, so my first guess would be: Instruct the browser to re-load the page upon navigating back (cacheability headers), identify the order using an ID in the URL, then when reloading detect its already-submitted state on the server.
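
A rough sketch of that flow, assuming a Jakarta Servlet container (OrderStore and the URL layout are placeholders I made up): the no-store header asks the browser to re-request the page instead of showing a cached copy, and the server renders the button according to the order's current state.

    import jakarta.servlet.http.*;
    import java.io.IOException;

    public class OrderPageServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Ask the browser not to cache this page, so pressing "back"
            // triggers a fresh request instead of showing a stale form.
            resp.setHeader("Cache-Control", "no-store, must-revalidate");

            String orderId = req.getParameter("order");           // ID carried in the URL
            boolean submitted = OrderStore.isSubmitted(orderId);  // placeholder lookup

            // Real code would validate/escape orderId before echoing it into HTML.
            resp.setContentType("text/html");
            resp.getWriter().print(
                "<form method='post' action='/submit-order?order=" + orderId + "'>"
                + (submitted
                    ? "<button disabled>Order already submitted</button>"
                    : "<button type='submit'>Submit order</button>")
                + "</form>");
        }
    }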


This is why I suggested the URL for the submission page be unique: having a session nonce / token or similar. That way, once the user checks out you invalidate the checkout session, and if the user hits the back button you redirect them to the appropriate page.

I specifically called out the issue of re-submitting certain forms and proposed the above solution. I don't think relying on cache headers is going to be sufficiently reliable.
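
A hedged sketch of that variant in the same servlet style (CheckoutSessions and the paths are invented names): each checkout page lives under a one-time token, and once the token has been consumed, any later request, including one triggered by the back button, just gets redirected.

    import jakarta.servlet.http.*;
    import java.io.IOException;

    // Mapped to /checkout/* so that each checkout session gets its own URL.
    public class CheckoutServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String token = req.getPathInfo().substring(1);  // "/abc123" -> "abc123"
            if (!CheckoutSessions.isValid(token)) {          // placeholder store
                // Token already consumed (e.g. the user checked out and then
                // pressed back): redirect instead of showing the stale cart.
                resp.sendRedirect("/order-complete");
                return;
            }
            // ... render the checkout form, posting back to /checkout/<token> ...
        }
    }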


I'm not arguing against a re-submission check. You'll need that anyway to prevent attackers from bypassing the browser and messing up your data.

But even with a nonce and a re-submission check, the cache headers are essential to make sure that when the user presses the back button, they'll see a greyed-out submit button. If the browser does not reload that page, the button will still be clickable. It won't work correctly because the re-submission check will fail, but a clickable and guaranteed non-functional button is very bad UI.

The latter is one of the main reasons we have so much JS / so many SPAs. Sure, you can build a somewhat functional application without JS, but the UI will be low-quality -- even if this particular example might be fixable with cacheability headers.


There is no re-submission check. When the user hits the back button and requests the HTML from the server, the server responds with a redirect. The user never sees the expired cart.


> Instruct the browser to re-load the page upon navigating back (cacheability headers), identify the order using an ID in the URL, then when reloading detect its already-submitted state on the server

And how would one do that without using JS?


Which part exactly?

Reloading the page on navigating back would be done using cacheability headers. This is the shakiest part, and I'm not sure it is possible today. If it does indeed not work, then this would be one of the "things that JavaScript has solved that the non-JS web is still stuck with" I mentioned in my other post, i.e. one of the reasons that JS is more popular than non-JS pages today.

Identifying the order using an ID in the URL is standard practice everywhere.

When the order page gets requested, the server would take that ID, look the order up in the database and see that it is already submitted, responding with a page that has its submit button disabled.


The must-have-JS part, for me, starts where one can open the store in multiple tabs, add and remove things in the first two, and check out in the third tab.

