
That still irks me. The real problem is not tinygram prevention. It's ACK delays, and that stupid fixed timer. They both went into TCP around the same time, but independently. I did tinygram prevention (the Nagle algorithm) and Berkeley did delayed ACKs, both in the early 1980s. The combination of the two is awful. Unfortunately, by the time I found out about delayed ACKs, I had changed jobs, was out of networking, and doing a product for Autodesk on non-networked PCs.

Delayed ACKs are a win only in certain circumstances - mostly character echo for Telnet. (When Berkeley installed delayed ACKs, they were doing a lot of Telnet from terminal concentrators in student terminal rooms to host VAX machines doing the work. For that particular situation, it made sense.) The delayed ACK timer is scaled to expected human response time. A delayed ACK is a bet that the other end will reply to what you just sent almost immediately. Except for some RPC protocols, this is unlikely. So the ACK delay mechanism loses the bet, over and over, delaying the ACK, waiting for a packet on which the ACK can be piggybacked, not getting it, and then sending the ACK, delayed. There's nothing in TCP to automatically turn this off. However, Linux (and I think Windows) now have a TCP_QUICKACK socket option. Turn that on unless you have a very unusual application.

Turning on TCP_NODELAY has similar effects, but can make throughput worse for small writes. If you write a loop which sends just a few bytes (worst case, one byte) to a socket with "write()", and the Nagle algorithm is disabled with TCP_NODELAY, each write becomes one IP packet. This increases traffic by a factor of 40, with IP and TCP headers for each payload. Tinygram prevention won't let you send a second packet if you have one in flight, unless you have enough data to fill the maximum sized packet. It accumulates bytes for one round trip time, then sends everything in the queue. That's almost always what you want. If you have TCP_NODELAY set, you need to be much more aware of buffering and flushing issues.
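To make the buffering point concrete, here is a minimal Python sketch (an editor's illustration, not from the comment): with TCP_NODELAY set, the application has to do its own coalescing, for example by accumulating small pieces and flushing them with a single send. The `send_batched` helper is hypothetical.

    import socket

    def send_batched(sock, pieces):
        # Sketch only: with Nagle disabled, each write() can become its own
        # packet, so batch the small pieces and flush them in one call.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        buf = bytearray()
        for piece in pieces:
            buf += piece          # accumulate instead of writing each piece
        sock.sendall(buf)         # one flush instead of many tiny packets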

None of this matters for bulk one-way transfers, which is most HTTP today. (I've never looked at the impact of this on the SSL handshake, where it might matter.)

Short version: set TCP_QUICKACK. If you find a case where that makes things worse, let me know.

John Nagle


> Seriously, what's the matter with this guy?

According to https://corrupt.tech/1708590130-ocr-compressed.pdf:

> During this time, however, Plaintiffs became concerned with Lee's mental wellbeing and his ability to lead LTM and PIA. Plaintiffs noticed that Lee was a habitual user of marijuana and cocaine, and would frequently abuse drugs in the office in front of his employees. Lee often combined his drug use with alcohol and would act erratically.

> For example, he told LTM employees that he wanted to hire a female candidate simply because he wanted to have sex with her. Similarly, Lee expressed his desire to set up a "modeling agency" in the Hollywood Hills that would actually be a front for illegal prostitution (i.e., the "models" would be paid escorts). To that end, Lee circulated a memo to male LTM executives advising them to make sure and wear condoms when having sex with the "models." Lee went so far as to state that he planned to move LTM's offices to the Hollywood Hills mansion once the "modeling agency" was set up.


Hot take: A great tragedy of the GNU/Linux ecosystem is the fact that ABI+API is still not figured out, and the most common interface between programs is the C ABI, which is embarrassingly primitive even by '80s/'90s standards. Some people in the FOSS community just want to leave things as-is to hinder proprietary software, and it's the same story with device drivers. You can debate the merits, rightfully so, but then there are still companies pushing out binary blobs which break every few kernel updates. As a FOSS developer it's an eternal fight between good and evil with no winner in sight, as a proprietary developer it's a pain in the ass to maintain old and new software across all the permutations of Linux distros, and as a user I get to cry because of the lack of popular software and backwards compatibility. Snaps and flatpaks are an ugly hack, literal duct tape around this very fundamental problem, and clearly not the solution. GNU/Linux should have adopted COM and/or an adequate proglang with a stable ABI a long time ago, and should have tried to control the wasted effort put into duplicate software ecosystems (KDE and GNOME).

An axiom of cryptographic tool design: there are no advanced users. Designs that ignore the axiom will always create dangerous failure modes.

What makes you think ARM is the clear future? It's easier to do a wide decoder thanks to ARMv8's fixed-length instructions, but that's more or less the only difference that results from the ISA. And only Apple is even taking advantage of that currently - all the ARM CPU designs are 4-wide decode. Same as pretty much every modern x86 CPU.

Do you think Amazon will be better at design & fabrication than Intel (when they get their shit together) or AMD? Do you expect Amazon to actually throw Apple's level of R&D money & acquisitions at the problem? And when/if they do, why would you expect any meaningful difference at the end of the day on anything? People love to talk about supposed ARM power efficiency, but the 64-core Epyc Rome at 200w is around 3w per CPU core. Which is half the power draw of an M1 firestorm core. The M1's firestorm is also faster in such a comparison, but the point is power isn't a magical ARM advantage, and x86 isn't inherently some crazy power hog.

Phoronix put Graviton2 to the test vs. Epyc Rome and the results ain't pretty: https://www.phoronix.com/scan.php?page=article&item=epyc-vs-...

"When taking the geometric mean of all these benchmarks, the EPYC 7742 without SMT enabled was about 46% faster than the Graviton2 bare metal performance. The EPYC 7742 with SMT (128 threads) increased the lead to about 51%, due to not all of the benchmarks being multi-thread focused."

Graviton2 being a bit cheaper doesn't mean much when you need 50% more instances.

But right now there's only a single company that makes ARM look good and that's Apple. And Apple hasn't been in the server game for a long, long time now. Everyone else's ARM CPU cores pretty much suck. Maybe Nvidia's recent acquisition will change things, who knows. But at the end of the day if AMD keeps things up, or Intel gets back on track, there really doesn't look to be a bright future for ARM outside of Apple's ecosystem and the existing ARM markets.


You’re welcome to use App Store apps... I don’t see the problem.

Safari PWAs protect your privacy better than native apps, because Safari trusts no one. The App Store review process is trying to find a malicious needle in a haystack because iOS by default trusts native apps more than it should, due to the existence of that nebulous review process. Apple is slowly locking down native apps with each successive iOS version... but PWAs have always been extremely private, and they're still the gold standard as far as I've seen.

PWAs are actually completely isolated from each other and from the rest of Safari, so there is no cross-contamination for tracking purposes.

If you don’t want more choice and more privacy... that’s up to you. I really don’t know what to tell you.

PWAs aren't the "wild west" that sideloading apps would be, yet you're trying to use the classic anti-sideloading argument against PWAs, and that argument simply doesn't work here. Apple has supported PWAs since before there was even an Apple App Store for native apps!

PWAs already exist, and PWAs are already extremely private. Apple just needs to give PWAs push notification support. Users would still have complete control over notifications, just like any native app.

Apple heavily pushes native apps because of the profit they get from it, not because of concerns about user privacy. They have already built Safari to protect your privacy on the open web.


I've been a technical writer for ~8 years (3 at a startup, 5 at Google). I don't know Apple's situation but here's my guess:

> Is the documentation team too small? (Likely.)

Documentation is weird because there seems to be widespread agreement among developers about how lacking it is (and conversely I think it's safe to say how important it is for job success), yet technical writing is almost always understaffed, no matter what size company you have. Part of it is that it's really hard to show a causal link between the docs we create and developer success. I know it seems silly but that's why I've been advocating for getting those little "was this page helpful?" links at the bottom of pages, followed by an opportunity to provide freeform feedback. If developers started using those consistently and leaving testimony about how the docs helped them, it would be a lot easier to prove the value of docs. This is especially true when you operate at a big, platform-level scale, as is the case on https://web.dev (what I work on) and these Apple API docs. Pageviews give us a rough idea of the demand for certain ideas but don't tell us anything about whether our docs are actually useful.

(As an aside, in some situations it's easier to justify the technical writing team's existence; e.g. your technical writer specifically creates docs in response to support requests and the support team is able to just link customers to the docs rather than re-answering the same question over and over; or you have access to "customer's" code and are able to show that they are following the best practices / use cases you mention and avoid anti-patterns you warn about)

I'll leave with a suggestion because I don't have a horse in this race. If this situation is so bad, your best option might be to create a community-managed MDN-style resource for the Apple ecosystem. I would suggest focusing it on 1) API reference documentation and 2) examples (MDN puts the examples within the API reference pages, and that would probably work here).

Another route could be more of what these people are already doing: make a lot of noise until Apple realizes how bad the situation is. I imagine if you can prove that you're leaving their platform because the situation is so bad, that might wake them up.

(No disrespect to Apple people; I know how tough this situation is; just trying to provide constructive feedback for the broader community)


Sigh - people still don't understand this ..... many (many .... many!) years ago I did my first Unix port (very early 80s) it was of V6 to the Vax, my code ran as a virtual machine under VMS - Unix kernel running in supervisor mode in place of the VMS's shell. Ported the kernel and the C compiler at the same time (NEVER do this, it is hell).

Anyway I came upon this comment in the swap-in code a bunch of times, never understood it, until I came to it from the right place and it was obvious - a real aha! moment.

So here's the correct explanation - V6 was a swapping OS, only needed a rudimentary MMU, no paging. When you did a fork the current process was duplicated to somewhere else in memory .... if you didn't have enough spare memory the system wrote a copy of the current process to swap and created a new process entry pointing at the copy on disk as if it was swapped out, and set that SSWAP flag. In the general swap-in code a new process would have that flag set; it would fudge the stack with that aretu, clear the flag, and the "return(1)" would return into a different place in the code from where the swap-in was called - that '1' that "has many subtle implications" is essentially the return from newproc() that says that this is the new process returning from a fork. Interestingly, no place that calls the swap-in routine that's returning (1) expects a return value (C had rather more lax syntax back then, and there was no void yet); it's returned to some other place that had called some other routine in the fork path (probably newproc() from memory).

A lot of what was going on is tied up in the retu()/aretu() syntax; as mentioned in the attached article, it was rather baroque and depended heavily on hidden details of what the compiler did (did I mention I was porting the compiler at the same time ....) - save()/restore() (used in V7) hadn't been invented yet, and that's what was used there.


> That tweet is a big deal if true.

One thing you can independently verify is that the backdoor string is still in their client. The installer starts transmitting data during the installation process, so I would not recommend installing it outside of a VM—just look at the files directly. The macOS installer has it in `Backblaze Installer.app/Contents/Resources/instfiles.zip/bztransmit`. The Windows installer is a self-extracting ZIP file, so just use unzip and look in `bztransmit.exe` and `bztransmit64.exe`.

  $ strings bztransmit |grep BACKDOOR
  DoHttpPostSyncHostInfo - BACKDOOR_prefer.xml file exists: 
  ERROR DoHttpPostSyncHostInfo - BACKDOOR_prefer.xml file could not be read: 
  ERROR DoHttpPostSyncHostInfo - BACKDOOR_prefer.xml file existed but less than 10 chars or could not be read: 
  ERROR DoHttpPostSyncHostInfo - BACKDOOR_prefer.xml file did not contain bz_cvt: 
  ERROR DoHttpPostSyncHostInfo - BACKDOOR_prefer.xml file contained bz_cvt but wrong num digits: 
  ERROR DoHttpPostSyncHostInfo - BACKDOOR_prefer.xml file did not contain bz_upload_url: 
  ERROR DoHttpPostSyncHostInfo - BACKDOOR_prefer.xml file contained bz_upload_url but did not start with http: 
  DoHttpPostSyncHostInfo - BACKDOOR_prefer.xml file exists and is valid and bz_cvt=
  DoHttpPostSyncHostInfo - BACKDOOR_prefer.xml SUCCESSFULLY_swapped_in new_bz_cvt=
What does the corresponding code do? I genuinely don’t know. My goal was to find backup software, not to do a security analysis. An easy-to-exploit root code execution vulnerability was enough for me to uninstall the software, submit a report as a professional courtesy, and go do something else.

Clearly it’s dumb to put the word BACKDOOR in your code if your goal is to plant a secret backdoor, but it’s also pretty dumb to use world-writable directories, disable host certificate verification, use magic hard-coded strings to “sign” updates, and implement data encryption in a way which requires the ‘private’ password to be sent to the server, so who knows. As I said in the tweet, even if it turns out to be innocuous, the optics are terrible and show a serious lack of good judgement on their part, especially given how much they claim to be security experts who care about their reputation[0].

> Were there any follow-ups from Backblaze?

No. The only “follow-up”, as it were, was to cancel their HackerOne public bug bounty programme. (Though this was a month ago, their web site still tells people to “visit our public bug bounty program managed through Hacker One”[2].) They have not communicated with me at all except for one tweet about that change[1]. I have seen no public statement from them acknowledging that this happened, or that they made mistakes, or that they have steps they plan to take to improve their internal software development practices.

[0] “We stand by our reputation as trustworthy, careful programmers who have worked in the security field for over a decade. […] we have LOTS of interest in keeping our reputations rock solid and utterly clean.” https://help.backblaze.com/hc/en-us/articles/217664798-Secur...

[1] https://twitter.com/backblaze/status/1308157606368882688

[2] https://www.backblaze.com/security.html


I hear your security concerns.

In the case of Discord, there are three key reasons for preferring the desktop version - and I don't see any easy workaround for them without adding explicit support to web standards:

1) custom audio codecs & filters; Discord is primarily a gaming VC app, and the codecs available in the desktop version are superior for the purpose of cancelling echo, noise, etc. at low latency,

2) ability to capture any keyboard & mouse shortcuts for VC activation

3) ability to use voice & keyboard shortcut while the browser window is not focused (in particular while in game)

Incidentally, adding general support for those to browsers would lead to subtle security bugs & gotchas.


The software sector needs a bit of aviation safety culture: 50 years ago the conclusion "pilot error" as the main cause was virtually banned from accident investigation. The new mindset is that any system or procedure where a single human error can cause an incident is a broken system. So the blame isn't on the human pressing the button, the problem is the button or procedure design being unsuitable. The result was a huge improvement in safety across the whole industry.

In software there is still a certain arrogance of quickly calling the user (or other software professional) stupid, thinking it can't happen to you. But in reality given enough time, everyone makes at least one stupid mistake, it's how humans work.


All this effort for a highly specialized solution that only fixes one combination in a huge M:N matrix. Wouldn't it be better to start working on an "LLVM for language interoperability"? That way every language only needs to provide a connection to an intermediate specification, and it can connect to any language supporting that same specification.

Currently this is done through C APIs (and it works well enough TBH), but plain C APIs lack annotations for higher-level concepts (like lifetime, bounds checking, etc). Why not work on a modern language interoperability standard instead? Being able to mix many different smaller and specialized languages in a single project instead of trying to create the "one universal language to rule them all" also would speed up language evolution, because less effort is wasted copying features from one "universal language" into other "universal languages", and as a bonus, operating system APIs following that standard would become language-agnostic.
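As a small illustration of the "plain C APIs lack annotations" point, here is a hedged Python/ctypes sketch (an editor's illustration, not something the comment proposes; it assumes a platform where ctypes can locate the C library). The ABI-level declaration of a real libc function carries argument and return types, but nothing about buffer bounds, lifetime or ownership, so that knowledge has to be supplied out of band by whoever writes the binding.

    import ctypes
    import ctypes.util

    # Sketch only: bind a real libc function to show what the C ABI does and
    # does not describe.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # void *memchr(const void *s, int c, size_t n);
    libc.memchr.argtypes = [ctypes.c_char_p, ctypes.c_int, ctypes.c_size_t]
    libc.memchr.restype = ctypes.c_void_p

    data = b"hello world"
    # Nothing in the declared types says that n must not exceed len(data);
    # bounds and lifetime are conventions the caller has to know separately.
    hit = libc.memchr(data, ord("w"), len(data))
    print(hit is not None)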


That's really impressive and insightful.

I've been working all my career (12 years) as a remote worker (freelancer for a couple of years, then employee), and poor async communication is by far the worst and most omnipresent problem I've seen - mainly because, as nobody sees who is doing what, allowing interruptions quickly leads to everybody constantly interrupting everybody. Having Hangouts or Slack constantly ringing because three people want my attention at the same time is a sure way toward madness.

The lack of async culture also often manifests that way: a manager/lead/whatever sends a grossly imprecise request as a one-line email; then developers ask questions about what the hell that person is talking about; then the first person gives a few more details (the minimum amount) and concludes with "can we have a call? It would be simpler".

I think the main problem is that many companies, while they want to try remote work as it "attracts top talent", don't have the proper culture for it, which is, ultimately, a culture of writing. Not only being able to type out the words you would have spoken, but to actually think in writing - which means drafting a few ideas, then reading them, modifying them, changing their order, removing some, adding others, and only when you're sure you've nailed the issue, hitting send to share it with everyone.

The typical non-remote employee won't want to do that and prefers to throw vague ideas into the room so everybody contributes to making them clearer at the same time. I'm biased of course, but I think this is highly inefficient.


I generally say yes. The fixed timer is for the delayed ACK. That was a terrible idea. Both Linux and Windows now have a way to turn delayed ACKs off, but they're still on by default.

TCP_QUICKACK, which turns off delayed ACKs, is in Linux, but the manual page is very confused about what it actually does. Apparently it turns itself off after a while. I wish someone would get that right. I'd disable delayed ACKs by default. It's hard to think of a case today where they're a significant win. As I've written in the past, delayed ACKs were a hack to make remote Telnet character echo work better.

A key point is asymmetry. If you're the one who's doing lots of little writes, you can either set TCP_NODELAY at your end, or turn off delayed ACKs at the other end. If you can. Things doing lots of little writes but not filling up the pipe, typically game clients, can't change the settings at the other end. So it became standard practice to do what you could do at your end.
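For reference, a minimal Python sketch of doing this at your own end on Linux (an editor's example, not part of the comment). TCP_QUICKACK is Linux-specific, so the constant is looked up defensively; 12 is its value in the Linux headers. And as noted above, the kernel may quietly re-enable delayed ACKs, so long-lived connections may need to set the option again.

    import socket

    def tune_for_latency(sock):
        # Disable the Nagle algorithm at our end (portable).
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        # Disable delayed ACKs at our end (Linux only; TCP_QUICKACK == 12).
        quickack = getattr(socket, "TCP_QUICKACK", 12)
        try:
            sock.setsockopt(socket.IPPROTO_TCP, quickack, 1)
        except OSError:
            pass  # not supported on this platform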


If you want to be able to compile a certain language fast, translation units need to be modular. I.e. if Source1 is referenced by Source2 and Source3, you should be able to compile Source1 once, save its public part into a binary structure with logarithmic access time, and then just load it when compiling Source2 and Source3.

This works splendidly with Pascal because each .pas file is split into interface and implementation sections. The interface section is a contextless description of what the module exports, that can be easily translated to a bunch of hash tables for functions, types, etc, that will be blazingly fast.

It's a whole different story with C/C++. In order to support preprocessor macros and C++ templates, referencing a module means recursively "including" a bunch of header files (i.e. reparsing them from scratch), meaning O(n^2) complexity where n is the number of modules. You can speed it up with precompiled headers, but you will still need to separately parse each sequence of references (e.g. #include <Module1.h> \n #include <Module2.h> and #include <Module2.h> \n #include <Module1.h> will need to be precompiled separately). C++20 modules do address this, but since C/C++ is all about backwards compatibility, it's still a hot mess in real-world projects.

That said, C# solves this problem near perfectly. Each .cs file can be easily translated to a mergeable and order-independent "public" part (i.e. a hierarchical hash table of public types), and that part can be reused when building anything that references this file. It is also interesting how C# designers achieved most of the usability of C++ templates by using constrained generic types that are actually expanded at runtime (JIT IL-to-native translation, to be more specific).


Support and consulting services generally aren't a good startup business model. Your revenue scales linearly with headcount which generally prevents hockey stick growth and massive revenue per employee numbers. It's why so many OSS based startups are doing SaaS models right now.

This post makes Mozilla's problems better understood. One thing is slightly overpaid bosses, but there is more...

"After leaving MDN, I took on a dual role: WADI (Web Advocacy & Developer Initiative) and Firefox OS for TV Partner Engineering. With WADI I had the pleasure of contributing to the Service Worker Cookbook. With the TV role I got a beautiful 60" ultra HD TV where I helped partners bring their video sites and games to life."

It seems there are some jobs at Mozilla that might not necessarily contribute significantly to its success.

And it looks like overall management in Mozilla is not in the greatest shape:

"When things got a bit tough at Mozilla, I shifted to Mozilla's "Productivity Tools" team. My first two weeks were nothing short of hilarious; my new manager didn't know I was a front-end engineer, so I stumbled through completing a big python migration."


One thing to keep in mind when looking at GNU programs is that they're often intentionally written in an odd style to remove all questions of Unix copyright infringement at the time that they were written.

The long-standing advice when writing GNU utilities used to be that if the program you were replacing was optimized for minimizing CPU use, write yours to minimize memory use, or vice-versa. Or in this case, if the program was optimized for simplicity, optimize for throughput.

It would have been very easy for the nascent GNU project to unintentionally produce a line-by-line equivalent of BSD yes.c, which would have potentially landed them in the '80s/'90s equivalent of the Google v. Oracle case.


My problem with Apple and Google charging 30% is when they are also competitors. So in essence you are asking your competitors to finance you. If Apple wants Apple Music or a video subscription service, it isn't charged the 30% that Spotify or Netflix are; same for Google, etc. And I think this should apply to almost all market services/stores, not just mobile app stores. If you want to be a market infrastructure provider, you should not be allowed to compete in it, or should only be allowed to charge a nominal fee rather than 30%.

I was only commenting on this on HN just the other day. When most people say IO bound, what they really mean is "there's a hot CPU but it's across a network", i.e.: "I wrote really inefficient SQL queries, therefore I'm IO bound, therefore I don't have to care about CPU" - and the process pushes further and further downstream, as every service talking to one of these "IO bound" services also becomes "IO bound".

A follow-up recommendation I give (which I suspect might be unpopular with many around here) is to use Python for all but the most trivial one-liner scripts, instead of shell.

Since 3.5 added `subprocess.run` (https://docs.python.org/3/library/subprocess.html#subprocess...) it's really easy to write CLI-style scripts in Python.

In my experience most engineers don't have deep fluency with Unix tools, so as soon as you start doing things like `if` branches in shell, it gets hard for many to follow.

The equivalent Python for a script is seldom harder to understand, and as soon as you start doing any nontrivial logic it is (in my experience) always easier to understand.

For example:

    >>> import subprocess
    >>> subprocess.run("exit 1", shell=True, check=True)
    Traceback (most recent call last):
      ...
    subprocess.CalledProcessError: Command 'exit 1' returned non-zero exit status 1.
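And once you need to branch on a command's output (the `if` point above), the Python stays readable. A small sketch, using `git` purely as an arbitrary example command (needs Python 3.7+ for `capture_output`):

    import subprocess

    # Sketch: capture a command's output and branch on it -- the kind of
    # logic that quickly gets hard to follow in shell.
    result = subprocess.run(
        ["git", "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    if result.stdout.strip():
        print("working tree is dirty")
    else:
        print("working tree is clean")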
Combine this with `docopt` and you can very quickly and easily write helptext/arg parsing wrappers for your scripts; e.g.

    """Create a backup from a Google Cloud SQL instance, storing the backup in a 
    Storage bucket.
    
        Usage: db_backup.py INSTANCE
    """
    if __name__ == '__main__':
        args = docopt.docopt(__doc__)
        make_backup(instance=args['INSTANCE'])
 
Which to my eyes is much easier to grok than the equivalent bash for providing help text and requiring args.

There's an argument to be made that "shell is more universal", but my claim here is that this is actually false, and simple Python is going to be more widely-understood and less error-prone these days.


It is a bit hard to explain in an HN comment something that requires live experience.

You can start by having a look at:

http://toastytech.com/guis/cedar.html

"Eric Bier Demonstrates Cedar"

https://www.youtube.com/watch?v=z_dt7NG38V4

"Alto System Project: Dan Ingalls demonstrates Smalltalk"

https://www.youtube.com/watch?v=uknEhXyZgsg

"SYMBOLICS CONCORDIA ONLINE DOCUMENTATION HYPER TEXT MARKUP 1985"

https://www.youtube.com/watch?v=ud0HhzAK30w

You can see how NeXT builds on many of these concepts on the famous "NeXT vs Sun" marketing piece,

https://www.youtube.com/watch?v=UGhfB-NICzg

Sun also had some ideas along these lines with NeWS,

"NeWS: A Networked and Extensible Window System,"

https://www.youtube.com/watch?v=4zG0uecYSMA

Naturally OS/2, BeOS, Windows, macOS, iOS, and even Android share some of the ideas.

Now, before I proceed, note that modern Linux distributions actually have all the tooling to make these concepts happen, but the ecosystem falls short of getting everyone to agree on a proper stack.

So basically, the main idea is to have a proper full stack in place for developing a Workstation computer as one single experience, from bottom all the way to the top.

In Xerox's case, they used bytecode with in-CPU execution via programmable microcode loaded on boot, and later on just a thin native glue on top of a host OS.

The environments had frameworks / modules for the whole OS stack, supported distributed computing, embedding of data structures across applications (OLE can trace its roots back to these ideas), and REPLs that could not only interact with the whole OS (commands, modules, running applications), but also break into the debugger, change the code and redo the failed instructions.

Linux distributions get kind of close to these ideas via GNOME and KDE, but the whole concept breaks down, because they aren't part of a full OS; rather they are a bunch of frameworks that have to deal with classical UNIX applications and with communities that would rather use fvwm (like I was doing in 1995) and a bunch of xterms than have frameworks talking over D-Bus, embedding documents, all integrated with a REPL capable of handling structured data and calling any kind of executable code (including .so, because the type information isn't available).

And then every couple of years the sound daemon, graphics stack or whatever userspace layer gets redone, without any kind of compatibility, because it is open source so anyone that cares should just port whatever applications are relevant.

It is quite telling that most Linux conferences end up being about kernel, new filesystems, network protocols, and seldom about how to have something like a full BeOS stack on Linux. Even freedesktop can only do so much.


This:

"there were no humans on board"

Made me think of this:

"The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment."


Stripping Audible DRM is surprisingly easy. Download via the desktop, and run a single docker command:

    docker run -v $(pwd):/data ryanfb/inaudible@sha256:b66738d235be1007797e3a0a0ead115fa227e81e2ab5b7befb97d43f7712fac5
The resulting m4b file has proper chapters, so it works everywhere, but I tend to split it further [0][1].

[0]: https://github.com/captn3m0/Scripts/blob/master/split-by-aud...

[1]: https://github.com/captn3m0/Scripts/blob/master/split-audio-...


If you're using Quartz Debug, you probably want to set these as well:

  defaults write com.apple.QuartzDebug QuartzDebugPrivateInterface -bool YES
  defaults write com.apple.QuartzDebug QDDockShowFramemeterHistory -bool YES
  defaults write com.apple.QuartzDebug QDDockShowNumericalFps -bool YES
  defaults write com.apple.QuartzDebug QDShowWindowInfoOnMouseOver -bool YES
The first one lets the window list work, which Apple in its infinite wisdom has decided you as a non-Apple engineer don't need. The middle two are things you can set from inside the app but show useful things in the dock icon. And the last lets you identify which app a window belongs to (press ⌃⌥ while hovering over it), which is very useful when you have a random thing pop up and you don't know how to get rid of it.

Spotify flashes non-stop (the left sidebar and the bottom control strip, but interestingly not the main app window).

> Quartz Debug: There are some apps that reduce your battery life in an insidious way where it doesn’t show as CPU usage for their process but as increased WindowServer CPU usage. If your WindowServer process CPU usage is above maybe 6-10% when you’re not doing anything, some app in the background is probably spamming 60fps animation updates. As far as I know you can only figure out which app is at fault by getting the Quartz Debug app from Apple’s additional developer tools, enabling flash screen updates (and no delay after flash), then going to the overview mode (four finger swipe up) and looking for flashing. This same problem can also occur on Linux and Windows but I don’t know how much power it saps there.



