I have been using mu4e for years, and am generally happy with it, and yet... I've never recommended it to anyone else. Unlike, say, org-mode or magit, which I'd happily evangelize.
The pain points are what other commenters have said:
- I don't find the default config a good fit for me, and run it heavily customized. As someone said, everything in Emacs turns into a project...
- Performance can be an issue, especially indexing new mail (and especially if you like to lug around a copy of most of your emails locally as I do). On a laptop while traveling this used to be more of a problem, but newer versions are noticeably quicker and newer laptops have better battery life.
- HTML rendering isn't great. Thankfully I don't get too many important messages that aren't just plain text. This might be a reasonable use case for xwidget-webkit, though I'd imagine there are security/privacy issues to work out. (Another Emacs project -- yay!)
When I started I thought it would be an efficient way to get through lots of emails, and it has been for the most part. I'm just not sure I've saved time overall unless one counts the hours configuring it as "entertainment / hobby" rather than "work".
I too am a bit surprised this made it on the front page. Mu4e is definitely niche, and I wouldn't crow about it like I do org or magit. I've only been using it for less than a month and it will be a while before I know whether it is a net win.
Also, the real test would have been my much more voluminous work email!
The HTML rendering isn't great, as you said, but you are two keystrokes from opening that email in a browser, if you have to.
And I have tweaked the config several times now, but I think that's mostly because I'm changing my (and the charity's) email, which involves a lot of shuffling about. Again, in six months, I'll have another look and decide whether it _really_ helped.
I don't question that this decision is sometimes (often) driven by the need to increase publication count. (Which, in turn, happens because people find it easier to count papers than read them.) But there is a counterpoint here, which is that if you write, say, a 50-pager (not super common but also not unusual in my area, applied math) and spread several interesting results throughout, odds are good many things in the middle will never see the light of day. Of course one can organize the paper in a way that tries to mitigate this, but sometimes it is better and cleaner to break a long paper into shorter pieces that people can actually digest.
Well put. Nobody wants salami slices, but nobody wants War and Peace, either (most of the time). Both are problems, even if papers are more often too short than too long.
Not only that, but in the academic world 20 papers with 50 citations each are worth more than one paper with 1000. Even though the total citation count is the same, the former gives you an h-index of 20 (and an i-10 of 20), while the latter only gives you an h-index of 1 (ditto for i-10).
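For concreteness, here's the arithmetic in a few lines of Python (a toy sketch of my own; "citations" is just a list of per-paper citation counts):

    def h_index(citations):
        # Largest h such that at least h papers have >= h citations each.
        best = 0
        for rank, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= rank:
                best = rank
        return best

    print(h_index([50] * 20))  # 20 papers x 50 citations  -> h-index 20
    print(h_index([1000]))     # 1 paper  x 1000 citations -> h-index 1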
Though truthfully it's hard to say what's better. All of these can be hacked (a common way to hack citations is to publish surveys; you also just get more by being at a prestigious institution or by being prestigious yourself). The metric is really naïve, but it's common to use since actually evaluating the merits of individual works is quite time-consuming and itself an incredibly noisy process. But hey, publish or perish, am I right?[0]
That's a fantastic example of "that which gets measured gets optimized." The academic world's fascination with these citation metrics is hilarious; it is so reminiscent of programmers optimizing for whatever metric management has decided is the true measure of programmer productivity: object code size, lines of code, tickets closed, and so on...
It's definitely a toxic part of academia. Honestly if it weren't for that I'd take an academic job over an industry one in a heartbeat.
Some irony: my PhD was in machine learning. Every intro course I know of (including mine) discusses reward hacking (aka Goodhart's Law). The irony being that the ML community has dialed this problem up to 11. My peers who optimize for this push out 10-20 papers a year. I think that's too many, and it means most of the papers are low impact. I have similar citation counts to them but a lower h-index, and they definitely get more prestige for that, even though it's harder to publish frequently in my domain (my experiments take a lot longer). I'm with Higgs, though: it's a lazy metric and imo does more harm than good.
- Differentiating a function composed of simpler pieces always "converges" (the process terminates). One just applies the chain rule. Among other things, this is why automatic differentiation is a thing (see the first sketch after this list).
- If you have an analytic function (a function expressible locally as a power series), a surprisingly useful trick is to turn differentiation into integration via the Cauchy integral formula. Provided a good contour can be found, this gives a nice way to evaluate derivatives numerically (see the second sketch below).
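On the first point, a minimal sketch of forward-mode automatic differentiation via dual numbers in Python (my own toy illustration, not any particular library's API): each value carries its derivative along with it, the chain rule is applied mechanically at every operation, and the process necessarily terminates after a single pass over the expression.

    import math

    class Dual:
        # Represents a + b*eps with eps**2 = 0; 'der' tracks the derivative.
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # Product rule -- the chain rule for multiplication.
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    def sin(x):
        # Chain rule: d/dx sin(u) = cos(u) * u'.
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    x = Dual(2.0, 1.0)                 # seed with dx/dx = 1
    print((sin(x * x) + 3 * x).der)    # d/dx [sin(x^2) + 3x] at 2 = 4*cos(4) + 3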
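And on the second point: parametrizing the contour as z = a + r*e^(i*theta), the Cauchy integral formula f^(n)(a) = (n!/(2*pi*i)) * (integral of f(z)/(z-a)^(n+1) dz) collapses to a plain average around a circle, which the trapezoid rule evaluates with spectral accuracy. A numpy sketch (again my own illustration; it assumes f is analytic on and inside the contour):

    import numpy as np
    from math import factorial

    def cauchy_derivative(f, a, n, r=0.5, m=64):
        # f^(n)(a) ~= (n! / (m * r**n)) * sum_k f(z_k) * exp(-2*pi*i*k*n/m),
        # where z_k = a + r*exp(2*pi*i*k/m) are m points on the circle.
        k = np.arange(m)
        z = a + r * np.exp(2j * np.pi * k / m)
        s = np.sum(f(z) * np.exp(-2j * np.pi * k * n / m))
        return factorial(n) / (m * r**n) * s

    # Third derivative of exp at 0 is exactly 1; note there is no
    # subtractive cancellation, unlike high-order finite differences.
    print(cauchy_derivative(np.exp, 0.0, 3).real)   # ~= 1.0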
Like many others, I too would very much like to hear about this.
I taught our entry-level calculus course a few years ago and had two blind students in the class. The technology available for supporting them was abysmal then -- the toolchain for typesetting math for screen readers was unreliable (and very slow anyway), the one for Braille was non-existent, and translating figures into Braille involved sending material out to a vendor and waiting weeks. I would love to hear how we may better support our students in subjects like math, chemistry, physics, etc., that depend so much on visualization.
The creator, https://www.reddit.com/user/Mrblindguardian/ has asked for help a few times in the past (I provided feedback when I could), but hasn't needed to as often of late, presumably due to using one or more LLMs.
I did a maths undergrad degree and the way my blind, mostly deaf friend and I communicated was using a stylized version of TeX markup. I typed on a terminal and he read / wrote on his braille terminal. It worked really well.
Yes, mostly raw TeX, just plain ASCII - not specially coded for Braille. This was quite a long time ago, the mid-1980s, so not long after TeX had started to spread in the computer science and maths communities. My friend was using a "Versa Braille" terminal hooked via a serial port to a BBC Micro running a terminal program that I'd written. I cannot completely remember how we came to an understanding of the syntax to use. We did shorten some items because the Versa Braille only had 20 chars per "line".
He is still active and online and has a contact page; see https://www.foneware.net. I have been a poor correspondent with him - he will not know my HN username. I will try to reach out to him.
Now that I've been recalling more memories of this, I do remember there being encoding or "escaped" character issues - particularly with brackets and parentheses.
There was another device between the BBC Micro and the "Versa Braille" unit. The interposing unit was a matrix switch that could multiplex between different serial devices - I now suspect it might also have been doing some character escaping / translation.
For those not familiar with Braille, it uses a 2x3 array (6 bits) to encode everything. The "standard" (ahem, by country) Braille encodings are super-sub-optimal for pretty much any programming language or mathematics.
After a bit of memory refreshing: in "standard" Braille you only get ( and ) - and they both encode to the same 2x3 pattern! So in Braille, ()() and (()) would "read" as the same thing.
I now understand why you were asking about the software used. I do not recall how we completely worked this out. We must have added some sort of convention for scoping.
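To make the collision concrete, here is a little Python sketch using Unicode Braille patterns (dot n sets bit n-1 above U+2800). If I recall the old English literary code correctly, both parentheses are dots 2-3-5-6; the depth-tagging workaround at the end is purely an invention of mine, just to illustrate the kind of convention we must have settled on:

    def cell(*dots):
        # Unicode Braille pattern for a set of raised dots (1-6).
        return chr(0x2800 + sum(1 << (d - 1) for d in dots))

    PAREN = cell(2, 3, 5, 6)            # '⠶' -- used for both ( and )
    print("()()", "->", PAREN * 4)
    print("(())", "->", PAREN * 4)      # identical: the scoping is lost

    # A made-up disambiguation convention: tag each paren with its depth.
    def tag_depth(expr):
        out, depth = [], 0
        for ch in expr:
            if ch == "(":
                depth += 1
                out.append("(%d" % depth)
            elif ch == ")":
                out.append(")%d" % depth)
                depth -= 1
            else:
                out.append(ch)
        return "".join(out)

    print(tag_depth("()()"))            # (1)1(1)1
    print(tag_depth("(())"))            # (1(2)2)1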
I now also remember that the Braille terminal aggressively compressed whitespace. My friend liked to use (physical) touch to build a picture, but it was not easy to send spatial / line-by-line information to the Braille terminal.
The problem of relying on spatial information has always stuck with me. It is for this reason I've always had a bias against Python; it is one of the few languages that depends on precise whitespace for statement syntax and scoping.
Thank you so much for all this detail. This is very interesting & quite helpful, and it's great you were able to communicate all this with your friend.
For anyone else interested: I wanted to be able to typeset mathematics (actual formulas) for the students in a way that's as automated as possible. There are one or two commercial products that can typeset math in Braille (I can't remember the names but can look them up), but they're not priced for individual use. My university had a license to one of them but only for their own use (duh), and they did not have the staff to dedicate to my students (double duh).
My eventual solution was to compile LaTeX to HTML, which the students could use with a screen reader (a sketch of this kind of pipeline is below). But screen readers were not fully reliable, and very, very slow to use (compared to Braille), making homework and exams take much longer than they needed to. I also couldn't include figures this way. I looked around but did not find an easy open-source solution for converting documents to Braille. It would be fantastic to be able to do this, formulas and figures included, but I would've been very happy with just the formulas. (This was single-variable calculus; I shudder to think what teaching vector calc would have been like.)
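For anyone attempting this today, a minimal sketch of the kind of pipeline I mean, here using pandoc with MathML output (pandoc is a stand-in on my part, not necessarily what your setup would use; screen readers generally handle MathML far better than images of formulas):

    import subprocess

    # Convert a LaTeX homework sheet into standalone HTML with formulas
    # as MathML, which a screen reader can step through piece by piece.
    subprocess.run(
        ["pandoc", "homework1.tex", "-s", "--mathml", "-o", "homework1.html"],
        check=True,
    )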
FYI: our external vendor was able to convert figures to printed Braille, but I imagine that's a labor-intensive process.
Partway through the term we found funding for dedicated "learning assistants" (undergraduate students who came to class and helped explain what was going on, and also met with the students outside of class). This, as much as or more than any tech, was probably the single most impactful thing.
Question for all of you more knowledgeable than I: can SMART data be tampered with? When I get, say, a refurbished Mac from Apple, I'm trusting Apple won't stoop to that. But an SSD vendor I've never heard of?
Yes. There are vendor-specific utilities that have escaped into the wild that allow bad actors to reset various SMART counters, etc.
A lot of abuse came to light during the launch and initial mining of the (ridiculous) Chiacoin[1] during which Chia miners would burn through SSDs to within a hair of their usable life, reset their SMART stats, and sell them as new on Amazon or ebay.
As can be seen in my above comment, larger distributors like "Maestro Technologies" have their stock polluted with parts like this and I find it very unlikely that they are not aware of the status of these parts they are selling as new.
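Which is why it's worth at least eyeballing the counters on anything sold as "new" -- keeping in mind, per the above, that they may have been reset. A Python sketch using smartmontools' JSON output (the -j flag needs smartctl 7+; the field names below match the drives I've checked but can vary by device):

    import json
    import subprocess

    # Run as root; smartctl encodes warnings in its exit status,
    # so don't treat a nonzero return code as fatal.
    out = subprocess.run(["smartctl", "-j", "-a", "/dev/sda"],
                         capture_output=True, text=True)
    data = json.loads(out.stdout)

    hours = data.get("power_on_time", {}).get("hours")
    print("Model:         ", data.get("model_name"))
    print("Serial:        ", data.get("serial_number"))
    print("Power-on hours:", hours)
    if hours and hours > 24:
        print("More than a day of runtime -- 'new' is suspect.")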
Yes, it can be tampered with. Drives can even lie about the amount of storage they support. I once bought a 1TB pen drive that was only 32MB for $10. (Yes, I knew it was a scam beforehand.)
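The capacity lie, at least, can be caught without any vendor tooling: write pseudo-random blocks across the claimed range, read them back, and see where they stop matching. That's the idea behind the f3 ("Fight Flash Fraud") tool; here's a rough sketch, destructive to the target and with a deliberately bogus device path:

    import os
    import hashlib

    DEV = "/dev/sdX"        # placeholder -- this DESTROYS data on the device
    BLOCK = 1 << 20         # test one 1 MiB block...
    STEP = 128              # ...every 128 MiB across the claimed size

    def pattern(i):
        # Deterministic pseudo-random block derived from its index, so a
        # drive that silently wraps writes around can't fake the readback.
        out = bytearray()
        while len(out) < BLOCK:
            out += hashlib.sha256(b"%d:%d" % (i, len(out))).digest()
        return bytes(out[:BLOCK])

    fd = os.open(DEV, os.O_RDWR)
    size = os.lseek(fd, 0, os.SEEK_END)
    blocks = range(0, size // BLOCK, STEP)
    for i in blocks:
        os.pwrite(fd, pattern(i), i * BLOCK)
    # NB: a real tool drops the page cache (or uses O_DIRECT) here,
    # so the verify pass reads the device rather than RAM.
    for i in blocks:
        if os.pread(fd, BLOCK, i * BLOCK) != pattern(i):
            print("Mismatch at %.1f GiB -- real capacity likely ends here"
                  % (i * BLOCK / 2**30))
            break
    else:
        print("All sampled blocks verified")
    os.close(fd)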
MIT Scheme (and ScmUtils) are unfortunately not getting enough maintenance, but they still work with a little effort -- probably better on Linux than in any other environment. If you have a Mac, you may try Homebrew's mit-scheme formula (presumably just "brew install mit-scheme").
Also, new stuff just isn't designed to be opened up.
I took an old monitor that started failing to a local makerspace (which has a very popular monthly repair cafe), and it took some physical force to crack the case open. Once inside it was relatively easy to get the board out and find the leaking capacitors. Not exactly high-tech parts.
It was fun for me and for the volunteer, but I can't imagine anyone trying to do this for a living -- it would take a lot of time, and charging people for what the labor's worth would probably come close to the price of a new monitor.
Thanks! Do you mean MIT Scheme's C backend? I've used MIT Scheme on and off for a long time and have never touched the C backend & have no idea how it works, so this is interesting.
(MIT Scheme also has a native code compiler for Intel CPUs, which seems to be what most users of MIT Scheme (an admittedly small community) actually use.)
If you'd like to call it that, sure: it's the easiest way to compile to C; we're just repurposing C as a general-purpose assembler: https://glouw.com/2023/11/07/Switch.html
https://en.wikipedia.org/wiki/Eunice_aphroditois
Thankfully they don't live on land.