Hacker News | VogonPoetry's comments

I've written to <voxmeditantis@gmail.com> about how deceptive it was to put the Editorial Note at the end instead of up front. I stopped reading because sections felt fabricated, yet it was presented as an oral history or actual interview. What a terrible way to present the work of a pioneer.


I have received feedback from Vox. The article has been updated with a new leading paragraph indicating the fictional nature of the article.


There is something off about this piece. Particularly the section that starts "You passed away on your eighty-eighth birthday – 4th August 2020. Do you reflect on mortality?" I stopped reading after that.


An Editorial Note is at the bottom (as others have now noted); it should have been at the top. Had I not seen other comments, I would likely have believed everything, even though it was made up. This is a terrible way to recount the memory of Frances Allen.


I have received feedback from Vox. The article has been updated with a new leading paragraph indicating the fictional nature of the article.


I did a maths undergrad degree and the way my blind, mostly deaf friend and I communicated was using a stylized version of TeX markup. I typed on a terminal and he read / wrote on his braille terminal. It worked really well.


Thanks! Did you communicate in "raw" TeX, or was it compiled / encoded for braille? Can you point me at the software you used?


Yes, mostly raw TeX, just plain ASCII, not specially coded for Braille. This was quite a long time ago, the mid-1980s, so not long after TeX had started to spread in the computer science and maths communities. My friend was using a "Versa Braille" terminal hooked via a serial port to a BBC Micro running a terminal program that I'd written. I cannot completely remember how we came to an understanding of the syntax to use. We did shorten some items because the Versa Braille only had 20 chars per "line".
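To give a flavour of what travelled over the serial line (a reconstructed illustration, not a transcript from the time), a formula would be typed as plain-ASCII TeX, something like:

```latex
% Hypothetical example of the kind of line we exchanged; macro names
% were sometimes shortened to fit the Versa Braille's 20-char lines.
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```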

He is still active and online and has a contact page; see https://www.foneware.net. I have been a poor correspondent with him, and he will not know my HN username, but I will try to reach out to him.


Now that I've been recalling more memories of this, I do remember there being encoding or "escaped" character issues - particularly with brackets and parentheses.

There was another device between the BBC Micro and the "Versa Braille" unit. The interposing unit was a matrix switch that could multiplex between different serial devices - I now suspect it might also have been doing some character escaping / translation.

For those not familiar with Braille, it uses a 2x3 array of dots (6 bits) to encode everything. The "standard" (ahem, per country) Braille encodings are highly suboptimal for pretty much any programming language or mathematics.

After a bit of memory refreshing: in "standard" Braille you only get ( and ), and they both encode to the same 2x3 pattern! So in Braille, ()() and (()) would "read" as the same thing.
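A minimal sketch of that ambiguity (the dot pattern 2-3-5-6 is the literary-Braille parenthesis cell; the helper names are my own):

```python
# Sketch of why standard (pre-UEB) literary Braille loses nesting: both
# parentheses map to the same 6-dot cell, modelled here as frozensets
# of dot numbers (1-3 left column, 4-6 right column).
BRAILLE = {
    "(": frozenset({2, 3, 5, 6}),
    ")": frozenset({2, 3, 5, 6}),  # the very same cell
}

def transcribe(text):
    """Map each character to its 6-dot Braille cell."""
    return [BRAILLE[ch] for ch in text]

# "()()" and "(())" come out identical, so scoping is unrecoverable.
assert transcribe("()()") == transcribe("(())")
```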

I now understand why you were asking about the software used. I do not recall how we completely worked this out. We had to have added some sort of convention for scoping.

I now also remember that the Braille terminal aggressively compressed whitespace. My friend liked to use (physical) touch to build a picture, but it was not easy to send spatial / line-by-line information to the Braille terminal.

The difficulty of relying on spatial information has always stuck with me. It is for this reason I've always had a bias against Python; it is one of the few languages that depends on precise whitespace for statement syntax and scope.
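A minimal illustration of the point (my own example, not from the thread): the only difference between these two functions is the indentation of one line, which a whitespace-compressing terminal would destroy.

```python
# The only difference between these functions is one level of indentation.
def count_evens(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += 1
    return total              # dedented: runs after the whole loop

def count_evens_mangled(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += 1
            return total      # one level deeper: returns on first even

print(count_evens([2, 4, 5]))          # 2
print(count_evens_mangled([2, 4, 5]))  # 1
```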


Thank you so much for all this detail. This is very interesting & quite helpful, and it's great you were able to communicate all this with your friend.

For anyone else interested: I wanted to be able to typeset mathematics (actual formulas) for the students in a way that's as automated as possible. There are one or two commercial products that can typeset math in Braille (I can't remember the names but can look them up), but they are not priced for individual use. My university had a license to one of them, but only for their own use (duh), and they did not have the staff to dedicate to my students (double duh).

My eventual solution was to compile LaTeX to HTML, which the students could use with a screen reader. But screen readers were not fully reliable, and very, very slow to use (compared to Braille), making homework and exams take much longer than they needed to. I also couldn't include figures this way. I looked around but did not find an easy open-source solution for converting documents to Braille. It would be fantastic to be able to do this, formulas and figures included, but I would have been very happy with just the formulas. (This was single-variable calculus; I shudder to think what teaching vector calc would have been like.)

FYI, our external vendor was able to convert figures to printed Braille, but I imagine that's a labor-intensive process.

Partway through the term we found funding for dedicated "learning assistants" (undergraduate students who came to class and helped explain what was going on, and also met with the students outside of class). This, as much as or more than any tech, was probably the single most impactful thing.


I have experienced some similar issues. I think some of it relates to the "locked" state of the device. Siri needs context data to answer, particularly the "mom" or destination questions; specifically, contacts or recent-places data. This context isn't stored remotely but is provided by the device to Siri each time. I think when the phone is locked it doesn't have access to that data (reading or writing). When I say "Siri", I mean both the on-device and remote parts of it.

I think this also interacts with countries and states that have (possibly misguided) strict laws forbidding the "touching" of phones "while driving". My experience suggests that when I use Siri while driving with the device locked, it just gives up; I sort of see the start of it working and then, bam, it stops. If I retry, I suspect that I've somehow "looked" at the phone in frustration, it saw my attention, and it unlocked. I now wonder if where I have placed the device is making a difference.

It does seem to work much better (when driving) if the device is already unlocked.

I also see odd things when using Shortcuts for navigation. If I've previously asked for walking directions and then speak the shortcut while driving, it won't give directions until I switch to the "car" icon in Maps. I think it might be trying to calculate the 15 km walking directions, but it doesn't complete before I tell it, frustrated, to stop.

When Siri doesn't work, it is usually at the times when I most need it to. This is definitely a multiplier of dissatisfaction.


After writing this I decided to look at my shortcut. The action seems to have been a simple "get directions to <place>" and sent verbatim to Siri.

I was not able to edit / update it! However, there was now a new "Maps" option for `Open <type> directions from <Start> to <Destination>`, where <type> can be {driving, walking, biking, transit} and <Start> is Current Location by default.

After updating, this now seems to correctly set actual driving directions, even if I'd previously set up a walking route!


Perhaps using AI assistance is good OPSEC. It could help to shield the author from stylometry or author profiling.


And then the author posts it himself to Hacker News. Nah, that's not opsec.


To get feedback / commentary you likely need to change the permissions on the repository; currently it seems to be private.


You mean the page doesn't open? Thank you for that… and for all the fish.


I see a different error now - a 404 with "There isn't a GitHub Pages site here".


Ah yes, when I switched it to public the old “obfuscated” URL went away. I posted it normally, but it still needs a bit of work. You can play with it, though.


VM in this usage means Virtual Memory, i.e. with page tables enabled. Two "processes" can use the same memory addresses and they will point to different physical memory. In Real Address Mode, every program has to use different memory addresses. VM86 mode lets you have several Real Mode programs running, but using Virtual Memory.
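As a toy model of the distinction (illustrative names and numbers, nothing x86-specific): with page tables, identical virtual addresses resolve to different physical memory.

```python
# Two processes use the same virtual address, but their page tables
# map it to different physical frames.
PAGE_SIZE = 4096

page_tables = {
    "proc_a": {0: 5},   # virtual page 0 -> physical frame 5
    "proc_b": {0: 9},   # virtual page 0 -> physical frame 9
}

def translate(proc, vaddr):
    """Split a virtual address into (page, offset) and map the page."""
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    return page_tables[proc][vpage] * PAGE_SIZE + offset

# Same virtual address 0x10, different physical addresses.
assert translate("proc_a", 0x10) != translate("proc_b", 0x10)
```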


VM does not mean Virtual Memory in this context; it means Virtual Machine. When an OS / DPMI server / supervisor / monitor provides an OS or program a virtual interface to hardware interrupts, I/O ports, and software interrupts, we say that OS or program is being executed in a Virtual Machine.

For things like Windows 3.x, 9x, OS/2, CWSDPMI, and DOS/4G (DPMI & VCPI), paging and Virtual Memory were an optional feature. In fact, CWSDPMI/djgpp programs had flags (using `CWSDPR0` or `CWSDPMI -s-` or programmatic calls) to disable paging and Virtual Memory. Also, djgpp's first DPMI server (a DOS extender called `go32`) didn't support Virtual Memory either, but it could sub-execute Real Mode DOS programs in VM86 mode.

http://www.delorie.com/djgpp/v2faq/faq15_2.html


I agree that my comment about VM was imprecise and inaccurate.

I do dispute your assertion that virtual memory was "disabled". It isn't possible to use V86 mode (as the Intel docs call it) without having a TSS, GDT, LDT, and IDT set up; being in protected mode is required. Mappings of virtual to real memory have to be present. Switching in and out of V86 mode happens from protected mode. Something has to manage the mappings, or at least have set them up.

Intel's use of "virtual" for V86 mode was cursory; it could fail to work for actual 8086 code. This impacted Digital Research, and I admit my experiences are mostly from that side of the OS aisle.

I did go back and re-read some of [0] to refresh some of my memory bitrot.

[0] https://www.ardent-tool.com/CPU/docs/Intel/386/manuals/23098...


Slight nitpick: you could fire up V86 mode without any LDT entries.

It's also possible to run Virtual 8086 mode without paging enabled, and when in Virtual 8086 mode, the 386 doesn't care about what's in the LDT/GDT (unless an interrupt happens). In practice this is never done because the Virtual 8086 task would only be able to use the bottom 1MB of physical memory.


Slight nitpick: OS/2 2.x+ did not have a way to disable paging, although you could disable virtual memory in any version of OS/2 by simply setting MEMMAN=NOSWAP.

On Windows 3.x, paging and swapping were optional, if you started it in 286 ("standard") mode. On Windows 95, paging is not optional, and it's not optional in Windows 3.11 for Workgroups either.


I view it more as a ransom / hostage payment or a response to bullying. There was a threat of tariffs: "I'm going to hold your business hostage." The ransom was paid and the tariffs weren't imposed.

I think a bribe is better defined as "you cannot have this thing you want, unless you give me this". A quid pro quo.

I guess it comes down to who the "active" party was.

I would definitely call it a bribe if Tim Cook were the one who asked for special treatment or lower tariffs than anyone else, and the response was "give me a 'gift'".

Even if you believe it was a bribe, the value of it was purely symbolic. What was given wasn't a change in policy, it was a material gift of zero value to anyone else except for scrap. Others that have been subjected to this behavior have given up things like changes in hiring practices and working with "non favored" organizations.


Everything humans do emits carbon: breathing, eating, driving, pooping (Westerners: toilet paper; all: waste management), and building. Getting every human to be carbon neutral would be an amazing thing!

From searched online numbers, Apple has shipped 150 million AirPods since 2016. The AirPods 3 weigh 5.5 g. The gross weight of a basic Tesla Model 3 is 1760 kg. I picked a Tesla because it has plastic, metals, magnets, copper, and lithium: similar materials.

In the ~10 years Apple has been making AirPods, the materials used (by weight) amount to ~470 Tesla cars. So the resources consumed per year are about 47 Teslas by weight (an average, which isn't a great fit here).

Apple claims 40% of AirPods 3 materials are recycled, so ~290 crashed / discarded Teslas could provide part of these materials; on average, 29 per year.
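For anyone who wants to check the arithmetic (using only the figures quoted above; note that a straight 40% of the total comes to roughly 188 car-equivalents, so the ~290 presumably assumes only part of each scrapped car is usable):

```python
# Back-of-envelope check of the AirPods-vs-Tesla figures quoted above.
airpods_shipped = 150_000_000   # units since 2016, as quoted
airpod_weight_g = 5.5           # AirPods 3, as quoted
tesla_weight_kg = 1760          # basic Model 3 gross weight, as quoted

total_kg = airpods_shipped * airpod_weight_g / 1000
tesla_equivalents = total_kg / tesla_weight_kg    # ~469
per_year = tesla_equivalents / 10                 # ~47

# A straight 40% recycled content is ~188 car-equivalents of material;
# the ~290 above likely assumes partial recovery per car.
recycled_equivalents = 0.40 * tesla_equivalents   # ~188

print(round(tesla_equivalents), round(per_year), round(recycled_equivalents))
```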

I did the above because it relates perceptually to "real things". Teslas are NOT carbon neutral; they are very much carbon-emitting.

The reality / HORROR of waste is far, far worse. A single plastic bag used to dispose of weekly waste likely weighs more than a pair of AirPods. Any can, made of steel or aluminum, could likely make a lot of AirPods. The toy or product you bought that had a flap you could lift up: there was likely a magnet under there. Any disposed single-use battery might have zinc, or, if it was a CR2032 or "watch" battery, lithium (or silver).

Yes, AirPods might be disposable. But do they improve the quality of life of the humans who purchase them? What is the real cost in perspective, with everything else taken into consideration? If the AirPods are used to listen to music or entertainment, then the positive mental-health aspects likely count significantly in a positive direction.


A few random notes:

- Your model neglects the charging case with its own battery and microcontroller.

- A Tesla is more likely to be recycled properly than the same weight of earbuds. It is also more recyclable because of the high proportion of steel and other bulk materials in it.

- The ratio of semiconductor electronics to total weight is extremely different between the two products, and semiconductor manufacturing is extremely resource intensive. All we typically see is squeaky-clean, neon-lit clean rooms, which belie the use of large amounts of aggressive and toxic chemicals that have to be produced somewhere and generate waste that is never talked about.


These are all excellent points. I did exclude the case.

The car-recycling vs. electronics-recycling question is interesting. I once had a very interesting conversation with an electronics recycler on a plane trip: phones could not be recycled like other electronics because of some of their metals content, in particular the beryllium-copper content (used in spring contacts). He described it this way: a lot of electronics is ground up, and chemical processes are used to extract the valuable elements. With phones, the grinding up was the toxic / dangerous, prohibited part.

I think the semiconductor numbers are more subtle, though. It is the square millimeters of silicon in the product, and the yield, that are the factors. A single power-switching element in a Tesla will likely exceed the total silicon area in an AirPod, and there is a lot of electronics in a Tesla. Some of the Tesla circuitry is more exotic: silicon carbide or GaN. I need to look into how much recovery / reprocessing of reagents silicon manufacturing is doing now. The waste produced isn't as bad as in the Silicon Valley heyday, when every original manufacturing site became a Superfund site with very large plumes of toxic waste in the subsoil.


This is on the verge of pedantry, but CHERI determinism isn't strictly true: garbage-collecting abandoned descriptors is currently done asynchronously. Malicious code could attempt to reuse an abandoned descriptor before it is "disappeared". I think it might be possible to construct a synthetic situation where two threads operating with perhaps different privilege in the same address space (something CHERI can support!) and communicating over an IPC channel might be affected by the timing.

There is a section in the technical reports that talks about garbage collection.

I don't think CHERI is currently being used with different privileged threads in the same address space.


I suspect that the parent poster was referring to MTE's memory protection being probabilistic. There are only 16 tag values for an attacker to guess. You can combine MTE and PAC, but PAC is also only probabilistic.

With CHERI, there is nothing to guess. You either have a capability or you don't.
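To put numbers on "only 16 tag values" (a sketch of the odds, not of any real MTE implementation):

```python
# With 4-bit memory tags there are 16 possible values, so a single
# blind guess at a retagged allocation succeeds with probability 1/16.
TAG_BITS = 4
TAGS = 2 ** TAG_BITS        # 16

p_one = 1 / TAGS            # 0.0625 per guess

def p_success(n_guesses):
    """Probability that at least one of n independent guesses hits."""
    return 1 - (1 - p_one) ** n_guesses

# If tag-check faults are survivable, about a dozen tries already
# beats a coin flip.
assert TAGS == 16
assert p_success(11) > 0.5 > p_success(10)
```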


Right, but the problem with CHERI is that you may (probabilistically) continue to have that capability even after you shouldn't. That's the problem.

That's because the capability (tagged pointer) itself is what gives you the right to access memory. So you have to find all the capabilities pointing to a segment of memory and invalidate them. Remember, capabilities are meant to be copied.

Early work on CHERI (CHERIvoke) proposed a stop-the-world barrier to revoke capabilities by doing a full scan of the program's memory (ouch!) to find and invalidate any stale capabilities. Because that is so expensive, the scan is only performed after a certain threshold amount of memory has been freed. That threshold introduces a security / battery life trade-off.

That was followed by "Cornucopia", which proposed a concurrent in-kernel scan (with some per-page flags to reduce the number of pages scanned) followed by a shorter stop-the-world pause. In 2024 (just last year), "Reloaded" was proposed, which adds still more MMU hardware to nearly eliminate pauses, at the cost of 10% more memory traffic.

Unfortunately, the time between free and revocation introduces a short-but-not-zero window for UAF bugs/attacks. This time gap is even explicitly acknowledged in the Reloaded paper! Moreover, the Reloaded revocation algo requires blocking all threads of an application to ensure no dead capabilities are hidden in registers.
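In outline, the quarantine-then-sweep scheme described above looks like this (a much-simplified sketch; real revokers work on tagged hardware capabilities, not Python objects, and the names here are illustrative):

```python
# Simplified model of quarantine plus a revocation sweep.
class Capability:
    def __init__(self, region):
        self.region = region
        self.valid = True

class Heap:
    def __init__(self, threshold=2):
        self.caps = []            # every capability ever handed out
        self.quarantine = set()   # freed regions awaiting revocation
        self.threshold = threshold

    def alloc(self, region):
        cap = Capability(region)
        self.caps.append(cap)
        return cap

    def free(self, cap):
        # Quarantine: the region is not reused until revoked, so a
        # stale-but-valid capability can't alias a new allocation.
        self.quarantine.add(cap.region)
        if len(self.quarantine) >= self.threshold:
            self.revoke_pass()

    def revoke_pass(self):
        # "Stop the world" and sweep: invalidate every capability that
        # points into quarantined memory, then recycle those regions.
        for cap in self.caps:
            if cap.region in self.quarantine:
                cap.valid = False
        self.quarantine.clear()

heap = Heap(threshold=2)
a, b = heap.alloc("A"), heap.alloc("B")
heap.free(a)
assert a.valid          # the window: freed yet still valid pre-sweep
heap.free(b)            # hits the threshold and triggers the sweep
assert not a.valid and not b.valid
```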

In contrast, with MTE, you just change the memory's tag on free, which immediately causes all formerly-valid pointers to the memory granule to become invalid. That's why you would want both: They're complementary.

* MTE gives truly instantaneous invalidation with zero battery impact, but only probabilistic spatial protections from attackers.

* CHERI gives deterministic spatial protection with eventually-consistent temporal invalidation semantics.


> Unfortunately, the time between free and revocation introduces a short-but-not-zero window for UAF bugs/attacks. This time gap is even explicitly acknowledged in the Reloaded paper!

Yes, revocation is batched and asynchronous. This does mean that capabilities remain valid beyond the language-level lifetime of the allocation. However, that does not mean that, within that window, UAF attacks go unaddressed.

The vast majority of UAF attacks do not care about the fact that the memory has been freed, but rather that the memory has since been repurposed for something else (whether the allocator's own internal metadata or some other new allocation). Cornucopia (both versions) ensures that this does not happen until the next revocation pass; that is, it "quarantines" the memory. Effectively, when you call free, it's "as if" the free were deferred until revocation time. Therefore, if your capability is still valid, that memory is still only in use by you, and so the vast majority of attacks no longer work. This protects you against UAF in a similar way to how making free a no-op protects against most attacks.

This is not all attacks; very occasionally the bug is a result of something like undefined behaviour that follows, but I don't know if we've found even one real-world instance of a UAF that this approach isn't going to catch. I'm sure they exist, but the nuance is crucial here to be able to reason about the security of various models.

But yes, MTE+CHERI are complementary in this regard. We have drafted ideas for using MTE with CHERI, which would (a) let you immediately prevent access (noting though that the capability would remain valid for a while, still) (b) let you recycle memory with different MTE colours before needing to quarantine the memory (hoping that, by the time you run out of colours for that memory region, a revocation pass has reclaimed some of them). That is, in theory it both gives stronger protection and better performance. I say in theory because this is just a sketch of ideas, nobody has yet explored that research.

I also note that MTE does not fix the undefined behaviour problem; it will only trap when it sees a memory access, but vulnerabilities introduced due to compilers exploiting undefined behaviour for optimisation purposes may not perform a memory access with the pointer before it's too late.

