
I encourage everyone to listen to his self-titled 1988 solo album, "Brian Wilson". It's brilliant. Frequently called "Pet Sounds '88", since many fans consider it a spiritual sequel. The 80s synth dressing might seem off-putting at first, but the songwriting and musicality are just amazing.

Also, give a listen to Smile! - not Smiley Smile or The Smile Sessions, but the 2004 recreation. It's quite mind-blowing. If you close your eyes you can hear it as a true symphony.

https://www.youtube.com/watch?v=8UbNwhm2EX8


This is similar to how java.util.concurrent.atomic.LongAdder works:

https://github.com/openjdk/jdk/blob/master/src/java.base/sha...
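
For context on the comparison, here's a rough C11 sketch of the striping idea only, not the JDK code; the real LongAdder also keeps a base counter, grows its cell array on contention, and picks each thread's cell from a per-thread probe value. The point is just that writers spread increments across independent cells and readers sum all the cells:

  /* Striped counter: contended increments land on different cells,
     so updates rarely fight over the same cache line. */
  #include <stdatomic.h>

  #define NCELLS 16

  static _Atomic long cells[NCELLS];

  /* thread_hint stands in for LongAdder's per-thread probe value */
  void adder_increment(unsigned thread_hint)
  {
      atomic_fetch_add_explicit(&cells[thread_hint % NCELLS], 1,
                                memory_order_relaxed);
  }

  long adder_sum(void)   /* like LongAdder.sum(): not an atomic snapshot */
  {
      long total = 0;
      for (int i = 0; i < NCELLS; i++)
          total += atomic_load_explicit(&cells[i], memory_order_relaxed);
      return total;
  }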


I urge you to read Alex's Reckoning series.

> performance concerns in the real world are typically measured in, at worst, hundreds of milliseconds.

I think that outside of the privilege bubble, where a significant percentage of the world actually lives, these concerns are measured in tens of seconds.

> it doesn’t create issues visible to my users

What do your users look like? What kind of devices are they on? Is it mostly mobile or desktop? Are they mostly accessing your service on cellular networks or not? Based on that, what does your INP look like? IMO you'll see the impact of React's legacy synthetic event system by looking at your INP, because it's reinventing in userland what the platform already does well. This impact on INP is amplified on mobile, and more so on low-end mobile. See CrUX data visualised by desktop or mobile below:

https://infrequently.org/2022/12/performance-baseline-2023/#...

> Finally, the author doesn’t actually give a prescription as to what to use instead of React.

I think he does, and that's a big part of this article. It's almost literally everything from the "OK, But What, Then?" section. The TL;DR is that there isn't a one-size-fits-all answer.


Can you read them? Speech to text perhaps. That can also be done locally.

If a note's a minute, 1000 notes are around 16 hours of reading. Scale the time needed depending on whether each note takes less or more than a minute to read. Add a note reference to the start of each recording, like a zettelkasten, so the scanned file, recording, and text cross-reference each other.

If you're assessing other solutions, that at least gives you an upper bound on the cost of any of them.


Robertson Davies' [0] lecture, "Reading" (1990), has a nice answer to the intro of this post:

> Our grandparents used to say that we must eat a peck of dirt before we die, and they were right. And you must read a lot of rubbish before you die, as well, because an exclusive diet of masterpieces will give you spiritual dyspepsia. How can you know that a mountain peak is glorious if you have never scrambled through a dirty valley? How do you know that your gourmet meal is perfect of its kind if you have never eaten a roadside hot dog? If you want to know what a masterpiece The Pilgrim's Progress is, read Bonfire of the Vanities, and if you have any taste -- which of course may not be the case -- you will quickly find out. So I advise you, as well as reading great books that I have been talking about, read some current books and periodicals. They will help you to take the measure of the age in which you live.

(I encourage reading the whole lecture [1]! It's a fun read and there are a lot of great bits in there that I could have quoted here. But I contented myself with just the one excerpt.)

[0] https://en.wikipedia.org/wiki/Robertson_Davies

[1] https://tannerlectures.utah.edu/_resources/documents/a-to-z/...


Just like with electronics, it all started with individual components. It only looks so incredibly complex when all those individual units are composed into a big whole.

Underneath it, though, there is order similar to the layering of software.

Here are a few mechanical computing components explained. The context here was real-time input and output to control ship guns against ground, sea, or air targets, taking into account the ship's own speed, angles, angular speeds, distances and their changes, and other factors, like wind, if they could be measured.

https://youtu.be/s1i-dnAH9Y4 -- "Basic Mechanisms In Fire Control Computers - US Navy 1953"

So the engineers coming up with the top-level design probably did not think in "gears" but in higher-level computing and transmission units.


Those who do not know judy1 are doomed to reinvent it.

http://judy.sourceforge.net/


Ignore any recommendation of React, TypeScript, Vite, or Tailwind. Here are some recommendations that don't require NPM/Node.

Pick a "classless" CSS library from a site like CSSBed[1]. These are kind of like Bootstrap, except you don't need to write any CSS or apply any CSS classes in your HTML for them to work. No tooling necessary; just include a <link> tag in your HTML document. If you'd like to try something similar to this "Tailwind" hotness everyone keeps talking about, try Basscss[2]. Again, no tooling, just need a <link> tag.

Once you start needing to add interactivity to your site, htmx[3] is nice and decently simple. If you really want something React-like, Mithril.js[4] is very similar but much simpler.

[1] https://www.cssbed.com/

[2] https://basscss.com/

[3] https://htmx.org/

[4] https://mithril.js.org/


Honestly, I like to keep the logger as decoupled and minimalist as possible. There's https://vector.dev which can basically ship your logs from source A to destination B (it supports tonnes of use cases). Separation of concerns makes things much easier.

I think Electron/cross-platform tools are great at very early stages for testing whether the product can work. You can quickly put together a working cross-platform application to start testing features.

I am a noob at startups (working on some of my own ideas), but I almost always start with the simplest setups (Zapier + Google Forms, tbh) to try some process with 5-10 people. If that seems promising, I'll build an app over a weekend or five days using Flutter, and then get it into people's hands. I've not used Electron, but at least my thinking is that if I can validate an idea and come up with a good business model, I can grow to a point where I can hire actually competent engineers to build the best experience for the users.

The primary goal (at least as it seems to me) is to solve some problem well enough that people are comfortable making some tradeoffs (mostly unnoticed by normal people; let's be honest, otherwise they wouldn't even want to try a bare-bones Google Forms + email setup) while providing far more value to them.

Ultimately these are all tools, use the right one where it matters until it needs to be upgraded or changed.


The interesting part is that it says nothing about performance. Single-core benchmarks have gotten significantly faster over that time period.

If anything, the takeaway is that things like memory/cache access, branch prediction failures, and mutexes have gotten more expensive. They didn't scale while the rest of the CPU sped up!

But even that isn't really true, because it doesn't tell you anything about branch prediction hit/miss rate, memory prefetching, instruction reordering, hyperthreading, etc. For a fair historic comparison you'd have to look at something like the cumulative time the execution units of a core are stalled due to waiting for something.

Honestly, the actual numbers aren't even all that relevant. It isn't really about the time individual operations take, but about the budget you have to avoid a latency penalty.


This is a valid point, and this is an overly long response because it distracts me from watching frightening current events.

There are two ways to look at these sorts of numbers, "CPU performance" and "systems performance". To give an example from my history:

NetApp was dealing with the Pentium 4 being slower than the Pentium 3 and looking at how that could be. All of the performance numbers said it should be faster. They had an excellent OS group that I was supporting, with top-notch engineers and a really great performance analysis team as well, and the results of their work were illuminating!

Doing a lot of storage (and database btw) code means "chasing pointers." That is where you get a pointer, and then follow it to get the structure it points to and then follow a pointer in that structure to still another structure in memory. That results in a lot of memory access.

The Pentium 4 had been "optimized for video streaming" (that was the thing Intel was highlighting about it and benchmarking it with), in part because video is sequential memory access and just integer computation when decoding. So good sequential performance and good integer performance give you good results on video playback benchmarks.

The other thing they did was they changed the cache line size from 64 bytes to 128 bytes. The reason they did that is interesting too.

We like to think of the things a computer does as "operations", and you say "this operation takes x seconds, so I can do 1/x operations per second." And that kind of works, except for something I call "channel semantics" (which may not be the official name for it, but it's in queueing theory somewhere :-).

Channel semantics have two performance metrics: one is how much bandwidth (in bytes/second) a channel has, and the other is its maximum channel operation rate (COR) in terms of transactions per second. Most engineers before 2005 or so ran into this with disk drives.

If you look at a Serial ATA, aka SATA, drive, it was connected to the computer with a "6 Gb" SATA interface. Serial channels encode both data and control bits into the stream, so the actual payload that goes through a 6 gigabit line is at most about 600 MB per second when the encoding puts 10 bits on the channel for every 8 bits sent (called 8b/10b encoding, for 8 data bits per 10 channel bits, or bauds). That means that the channel bandwidth of a SATA drive is 600 MB per second. But do you get that? It depends.

The other thing about spinning rust is that the data is physically located around the disk: each concentric ring of data is a track, and moving from track to track (seeking) takes time. Further, you have to tell the disk what track and sector you want, so you have to send it some context. So, if you take the "average" seek time, say 10 ms, then the channel operation rate (COR) is 1/0.010, or 100 operations per second.

So let's say you're reading 512-byte (1/2 kB) sectors from random places on the disk; then you can read 100 of them per second. But wait, only 100? That would mean you are only transferring about 50 kB per second from the disk. What happened to 600 MB?

Well, as it turns out, your disk is slow when randomly accessed. It can be faster if you access everything sequentially, because 1) the heads don't have to seek as often, and 2) the disk controller can make guesses about what you are going to ask for next. You can also increase the size of your reads (since you have extra bandwidth available), so if you read, say, 4 kB sectors, then 100 x 4 kB is 400 kB/second, an 8-fold increase just by changing the read size. Of course the reverse is also true: if you were reading 10 MB per read, at 100 operations per second that would be 1000 MB per second, which is 400 MB more than your available bandwidth on the channel!

So when your request rate is faster than the COR, and/or the requested data sizes add up to more than the available bandwidth, you are "channel limited" and you won't get any more out of the disk no matter how much the source of requests improves its "performance" in terms of requests/second.
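
As a back-of-the-envelope sketch of that rule (just an illustration, not anything from the NetApp analysis; the numbers are the ones from the example above): effective throughput is capped by whichever limit you hit first, the channel's bandwidth or its operation rate times the request size.

  /* Back-of-the-envelope: a channel delivers at most
     min(bandwidth, operation_rate * request_size) bytes per second. */
  #include <stdio.h>

  static double effective_throughput(double bandwidth_Bps, double ops_per_s,
                                     double request_bytes)
  {
      double demand = ops_per_s * request_bytes;
      return demand < bandwidth_Bps ? demand : bandwidth_Bps;
  }

  int main(void)
  {
      /* 600 MB/s SATA channel, ~100 random operations per second */
      printf("512 B reads: %.0f kB/s\n", effective_throughput(600e6, 100, 512) / 1e3);
      printf("4 kB reads:  %.0f kB/s\n", effective_throughput(600e6, 100, 4096) / 1e3);
      printf("10 MB reads: %.0f MB/s\n", effective_throughput(600e6, 100, 10e6) / 1e6);
      return 0;
  }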

So back to our story.

Cache lines are read in whenever you attempt to access virtual memory that has not been mapped into the CPU's cache. Some entry in the cache is "retired" (which means overwritten, or written back first if it has been modified and then overwritten) and the new data is read in.

The memory architecture of the P4 had a 64-bit memory bus (72 data bits if you have ECC memory). That means that every time it fetched a new cache line, the CPU's memory controller would do two memory requests.

Guess what? The memory bus on a modern CPU is a channel (they are even called "memory channels" in most documentation) and is bound by channel semantics. And while Intel often publishes its "memory bandwidth" number, it rarely publishes its channel operation limits.

The memory controller on the P4 was an improvement over the P3's, but it didn't have double the operation rate. (It was something like 20% faster as I recall, but don't quote me on that.) But the micro-architecture of the cache doubled the number of memory transactions for the same workload. This was especially painful on code that was pointer chasing, because the next pointer in the chain shows up in the first 64 bytes, and that means the second 64 bytes the cache fetched for you are worthless; you'll never look at them.

As a result, on the same workload, the P3 system was faster than the P4 even though on a spec basis the P4's performance was higher than that of a P3.

After doing the analysis, some very careful code rewriting and some non-portable C code, which packed more of the structures' data into the 128-byte "chunks" so that both 64-byte halves held useful data, improved the performance enough for that release. It was also that analysis that gave me confidence that recommending Opteron (aka Sledgehammer) from AMD, with its four memory controllers and thus 4x the memory operations per second, was going to vastly outperform anything Intel could offer. (Spoiler alert: it did :-))
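
To make that concrete, here's a hedged illustration only, not the actual NetApp code: the field names are made up, the sizes assume 64-bit pointers and longs, and aligned() is GCC/Clang syntax. The idea is simply to lay structures out so that both halves of the 128-byte line the P4 fetches contain data the code will actually touch.

  struct record {                 /* one logical entry, exactly 64 bytes */
      struct record *next;        /* the pointer being chased */
      unsigned long  key;
      unsigned long  flags;
      char           hot_data[40];
  };

  struct record_pair {            /* what actually gets laid out in memory */
      struct record a;
      struct record b;            /* the "free" second half of the line */
  } __attribute__((aligned(128)));

  _Static_assert(sizeof(struct record) == 64, "record fills half a line");
  _Static_assert(sizeof(struct record_pair) == 128, "pair fills one 128-byte P4 line");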

Bottom line: there are "performance" numbers and there is system performance, which are related, but not as linearly as Intel certainly would like.


If someone wants a fast version of x ↦ tan(πx/2), let me recommend the approximation:

  tanpi_2 = function tanpi_2(x) {
    var y = (1 - x*x);
    return x * (((-0.000221184 * y + 0.0024971104) * y - 0.02301937096) * y
      + 0.3182994604 + 1.2732402998 / y);
  }
(valid for -1 <= x <= 1)

https://observablehq.com/@jrus/fasttan with error: https://www.desmos.com/calculator/hmncdd6fuj

But even better is to avoid trigonometry and angle measures as much as possible. Almost everything can be done better (faster, with fewer numerical problems) with vector methods; if you want a 1-float representation of an angle, use the stereographic projection:

  stereo = (x, y) => y/(x + Math.hypot(x, y));
  stereo_to_xy = (s) => {
    var q = 1/(1 + s*s);  // q === 0 only when s is infinite, i.e. the point (-1, 0)
    return !q ? [-1, 0] : [(1 - s*s)*q, 2*s*q]; }

Basically, PTP assumes that network delays are deterministic. If that's true, it is very precise. If not, PTP is the wrong tool. NTP assumes that network delays are stochastic, and uses sophisticated algorithms to account for this. PTP has much simpler algorithms, which can be implemented in electronics, where internal timings can be characterized. NTP is more complex, and tends to run on a general purpose computer. This adds internal OS timings to the uncertainty.

I enjoy doing pedestrian stuff, really, really well.

Most of my work is open-source, as I don't really do anything particularly innovative or patent-worthy.

Most of the value in my work is how I do it.

I work carefully, document the living bejeezus out of my work, test like crazy, and spend a lot of time "polishing the fenders."

This is something that anyone can do. It just takes patience, discipline, and care.

I'm weird. I enjoy the end results enough to take the time to do the job well.

It's been my experience that the way I work is deeply unpopular. Some people actually seem to get offended when I discuss how I work.

Go figure.


With trusted people, I find it's best to default to a non-violent communication style[1]. Express your feelings with "I" statements, etc.: "I'm feeling X". Believe the other side has positive intent (i.e. Hanlon's razor). Acknowledge the two-way nature of it, and understand that feelings can arise for many reasons that may not be anyone's fault.

Recently I had a conversation with a colleague where I expressed that their earlier perceived 'pressure' made me feel my relationship with them was "transactional", and that I felt like my value was about the work I did, not me as a person. I reiterated throughout that I didn't think it was their intention. I expressed that this has as much to do with my personality and baggage with how I perceive comments that might not bother others.

I didn't do it perfectly (this is a hard skill to cultivate). But... we left with a better way to communicate. "OK Doug reacts to X statements a bit roughly". On my end, I take accountability for maybe overreacting to X types of statements, and taking a deep breath and being as forgiving as I can. Most importantly our relationship and trust deepened, and we'll work more effectively together...

1-https://en.wikipedia.org/wiki/Nonviolent_Communication


I've written two books over the past decade, as well as learning some other skills and hobbies, and this is absolutely the most vital lesson I've learned. There is an incredible power in simply pouring a little time into something every day over a long period of time. It feels like a superpower when you see it start compounding.

The Grand Canyon was created by little drops of water bouncing off rocks for millennia. Consistent effort over time is one of the greatest forces in the world. Persistence beats focus, inspiration, and genius 90% of the time.


There are a lot of different drills, several of which I'll touch on below.

1. Copy something you like. Take a sentence/paragraph/page/scene and just retype it. This sounds crazy, but pushing the words not only into your brain but back out through your fingers gives your brain a different avenue into them.

2. Object writing. I learned this one from Pat Pattison's Writing Better Lyrics, but most of the techniques are generally applicable. You take an object/idea, and for 5-10 minutes write about it using all six senses (the standard five plus motion). The more you do this, the more your writing will shift (at least in my experience).

3. Journaling. Morning pages (3 pages at the very start of your day) is a common one for writers, learning to take the filter off and just write. A lot of crap might come out, and you'll just write about the day before or your concerns about the day ahead a lot, but the act of putting the words down will help you shift your writing.

4. This one isn't a drill, but I wanted to include it: explore other types of writing. If you are interested in academic writing, try making short stories or poems. Exploring entirely different uses of words will help you build new pathways, because intent shapes the way the brain uses words, and learning to unlock different pathways can have surprising results.


Start using your operating system directly instead of relying on libraries to do things for you. Learn Linux system calls and use them for everything. A great way to do this is to compile your C code in freestanding mode and with no libc. You'll have to do everything yourself.

It is easy to get started. Here's a system call function for x86_64 Linux:

https://github.com/matheusmoreira/liblinux/blob/master/sourc...

With this single function, it is possible to do anything. You can rewrite the entire Linux user space with this.
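
To give a feel for what that looks like, here's a minimal freestanding sketch (not the liblinux code itself); it assumes x86_64 Linux, GCC/Clang inline assembly, and a build like `cc -nostdlib -static -o hello hello.c`:

  /* Raw x86_64 Linux system call with up to three arguments.
     ABI: number in rax, args in rdi/rsi/rdx; the kernel clobbers rcx and r11. */
  typedef long word;

  static word syscall3(word number, word arg1, word arg2, word arg3)
  {
      word result;
      __asm__ volatile ("syscall"
                        : "=a" (result)
                        : "a" (number), "D" (arg1), "S" (arg2), "d" (arg3)
                        : "rcx", "r11", "memory");
      return result;
  }

  enum { SYS_write = 1, SYS_exit = 60 };   /* x86_64 syscall numbers */

  void _start(void)                        /* entry point; no libc, no main */
  {
      static const char msg[] = "hello from raw syscalls\n";
      syscall3(SYS_write, 1, (word) msg, sizeof msg - 1);  /* write(1, msg, len) */
      syscall3(SYS_exit, 0, 0, 0);                         /* exit(0), never returns */
  }

Everything else is just more syscall numbers (plus wrappers for calls that take more arguments).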

The Linux kernel itself has a nolibc.h file that they use for their own freestanding tools:

https://github.com/torvalds/linux/blob/master/tools/include/...

It's even better than what I came up with. Permissively licensed. Supports lots of architectures. There's process startup code so you don't even need GCC's startfiles in order to have a normal main function. The only missing feature is the auxiliary vector. You can just include it and start using Linux immediately.

You can absolutely do this with Rust as well. Rust supports programming with no standard library. If I remember correctly, the inline assembly macros are still an unstable feature. It's likely to change in the future though.


I disagree; tight feedback loops are always better than loose ones, in every scenario. What you do with the new information matters, though.

E.g., you would always choose to read an accelerometer in a tight loop in a flight control system; however, you wouldn't just blindly apply the latest reading as an input to your system (accelerometers are extremely noisy; the raw data looks like complete trash before you apply some kind of averaging filter).

Further down the chain, a flight control computer is also a good counterpoint to this idea that you should prefer a lossier sample rate.

Control theory is an established field of study. Better to adopt filtering and PID control strategies than to undersample an input.
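
For the filtering half of that, here's a toy sketch (not flight code; the coefficient is an arbitrary illustration, not a tuned value): sample fast, then smooth with a first-order low-pass (exponential moving average) before feeding the controller.

  #include <stdio.h>

  /* First-order low-pass: the estimate moves a fraction alpha toward each new sample. */
  static double ema(double previous, double sample, double alpha)
  {
      return previous + alpha * (sample - previous);
  }

  int main(void)
  {
      double noisy[] = { 0.9, 1.2, 0.7, 1.4, 1.0, 0.6, 1.3 };  /* fake accelerometer samples */
      double filtered = noisy[0];

      for (unsigned i = 1; i < sizeof noisy / sizeof noisy[0]; i++) {
          filtered = ema(filtered, noisy[i], 0.05);
          printf("raw=%.2f  filtered=%.2f\n", noisy[i], filtered);
      }
      return 0;
  }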


A few years back, I had the opportunity to lead the Product Team of a 200+ person company. It took me over a month to understand who does what, and I tend to try to know things around me and at least have an idea of "what to do in an emergency."

I stumbled on one DevOps guy who wrote down every bloody thing happening in the system. He likes doing it, but from the way people were asking questions in the common chatrooms and group emails, I knew nobody read any of it. He loves plain text, nicely formatted in the old-school Unix style et al. I had talks with him outside of work and found him fascinating; I asked him for help with my hobbies, home-lab tinkering, etc.

By that time, I had already started internal blog groups and was encouraging people to write. And indeed, a lot of people came out to write about engineering, marketing, product, and it became a routine for people to show off what they can do, what they love, and what they are good at.

In one of my regular company-wide emailers, I dedicated a big piece to the DevOps guy, pointing to his work and the beauty of his documentation. People loved it, and they began reading it. He remained a friend. That mailer was the only way I could highlight people I encountered doing good work to the whole team, up to the CxOs.

By the time I left the company, I was responsible for many vital things, but I could transition everything smoothly with the documents and credential details (thanks to 1Password).

In the last year or so, I have been thinking more about "always be dying", in a similar vein to "always be quitting": I try to document and write out details, just in case I die and my family has to figure out the intricacies.


I am from Brazil and I've been working remotely for US companies for 4+ years. Here's some advice: open an EIRELI type of company (not a MEI), and find a job that gives you equity as a person and a salary as a company contractor. Ask for no benefits. It should not be that hard to find a job, since the US pays so much. You will have to set up health insurance and other things yourself, so don't go for 60k, as there will be taxes involved too. Ask for 80k minimum because of all that.

Chess. Almost everything useful I've found in chess I can generalize to decision making in general. It's made me better at totally unrelated things like jiu jitsu.

For example, generally, you want to make decisions that increase your options. In competitive situations you want to restrict your opponent's options.

Find the fundamental patterns of whatever you're learning and get really good at those. Often times if you learn the 15-20% of concepts that show up everywhere, you'll learn the rest of the concepts faster since they're mostly just rehashed versions of them. In chess you'd learn tactical patterns for example. Just learn the 10 most common ones and it'll help you see like 70% of the tactics/checkmates you encounter.

Look for factors that increase the probability of wins, and then increase those factors. Not everything requires an extremely precise plan. For example getting a good position in chess (active/well placed pieces, control of the center, etc) increases the probability that tactics will come out of nowhere.

Getting advantages increases your ability to get more advantages. In economics this is called the Matthew effect (I think).

Since acquiring advantages can increase one's ability to acquire more advantages, advantages "right now" are worth more than advantages later on. Essentially, it seems that advantages have a time value.

One weird thing I've noticed is that space is a super important thing to know how to use. Chess, jiu jitsu, war. Whatever that means for the specific field/context you're trying to get good at - how can you use your ability to increase/decrease space/territory (or whatever is analogous to it in this context) to your advantage? Is control of the "center" or other specific areas important in your situation?

Synergy - finding ways to combine your advantages can be very powerful. Same with finding ways to exploit multiple of your opponent's weaknesses at once.


I have a lot of respect for Leslie Lamport, and I've read quite a few of his papers, but I'm a little reluctant to take his advice to heart here. I think his Paxos paper is a good read, but at the end of it you understand his thinking yet are no closer to writing a valid Paxos system. In contrast, take Diego Ongaro's Raft paper and you leave with a deep understanding of the thinking and a way to start your journey into making the thoughts concrete.
