If you run across a great HN comment (or comment tree), please tell us at hn@ycombinator.com so we can add it here.

Full disclosure: Principal Software Engineer here on the Scratch backend...

Scratch is not built to be a "teach your kid programming languages" system; it is based on the work and ideas of the Lifelong Kindergarten group at the MIT Media Lab (the director of this group is Professor Mitch Resnick, the LEGO Papert Professor of Learning Research). The Papert part is where the term Mindstorms comes from (https://www.amazon.com/Mindstorms-Children-Computers-Powerfu...) and was used by the LEGO Group when branding those products, and our philosophy is heavily influenced by that.

I can say that the https://scratch.mit.edu/statistics/ are real and we have a substantial footprint of backend services and custom software to support it. We handle on the order of 15-20 million comments/month.

The primary design philosophy is:

- Passion: You have a strong interest in a subject/problem to solve/explore.
- Projects: Build something based on your passions and gain direct, interactive experience with it.
- Peers: Share your work with folks who are interested and who provide feedback to you.
- Play: It should be fun!

Note that there is nothing in there about STEM/STEAM or application development. We build and support Scratch to provide creative tools for anyone to explore computation in a form that is relatable and has a low floor for understanding/entry. Having said that, the complexity of what Scratch can do rises sharply the more you work with it, and the concepts behind "forking" and open source are built in via the remix ability on individual projects.

A lot of design thinking goes into the frontend of Scratch to build on a creativity feedback loop that is not focused on learning Python or any other specific language (or their syntax, i.e. avoiding "why isn't my program working... oh, one too many tabs... or maybe this semicolon, or maybe this stray period").

Another part I think is worth raising: the Scratch frontend is a sophisticated virtual machine interpreter that has its own machine code and execution model running in a JavaScript environment in the browser, and it is still open source. Google's Blockly project was based on the ideas of Scratch 1.4, and when we ported Scratch 2 away from being Flash-based, we partnered with the Blockly group to fork their code base and create Scratch Blocks.

Based on the TIOBE index, we're usually somewhere in the top 20 most popular "programming languages". _eat it Fortran!_


I once managed tech for an insurance actuarial department. We ran IBM DB2 for underwriting and claims apps. One day I had lunch with the actuaries to make friends and make sure we were supporting them well. At one point in the conversation I foolishly asked whether they would also like access to DB2 to minimize data transfers. They laughed and said: "SQL is like drinking data through a straw. We use APL so we can drink it all at once." I felt like a rookie at spring training.

Manager here. I ask everyone for estimates. Then I track how long shit actually takes. I find that:

- Some folks are scary precise in their estimates. I’d say this is like 10% of engineers. You ask them for an estimate, they tell you, and then it takes exactly that long every single time. In other words, I have confidence that if I ask one of those folks for an estimate, I can take that shit to the bank.

- Some folks always overestimate. This is rare. Maybe less than 10% of folks. When I find that someone overestimates, I know that I can take that shit to the bank as an upper bound, which is still useful.

- Some folks say they don’t know. That’s fine. Those folks are even rarer, so I don’t have to do anything smart for them.

- Most folks underestimate, sometimes hilariously so - they are always one day, or one week, or some other too-short amount of time, away from finishing their multi-month effort. That’s fine. Once I know you underestimate, I know that I can take your estimates to the bank as a lower bound. For more than half of the underestimators, I find that there’s some formula that works: like if Steve says he needs just one more day, he always means he needs five more days. So if suddenly Steve says he needs another week, then I know he probably needs over a month. That’s still useful to me and I’m totally fine if Steve then tells his friends (or HN) how dumb it is that I ask him for estimates. They may be dumb to him but I’ve got my napkin math (see the sketch below) that turns his BS estimate into something that I can take to the bank. (I don’t actually manage anyone named Steve but I was a Steve as an IC.)

So yeah. When I ask you for an estimate, I expect it to be wrong and then I look for patterns in your wrongness. For most people, there’s a pattern that allows me to turn something that looks like a nonsense estimate at first into something I can then plan around. That’s why developers I work with are “expected” to estimate.
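A toy version of that napkin math, in Python - the names, history, and numbers below are made up purely for illustration:

    # Toy sketch: estimate a per-person correction factor from past
    # (estimated, actual) durations, then scale new estimates by it.
    history = {
        # hypothetical data: (estimated days, actual days) for past tasks
        "steve": [(1, 5), (2, 11), (1, 4)],
        "alice": [(3, 3), (5, 5)],
    }

    def correction_factor(samples):
        """Average ratio of actual time to estimated time."""
        return sum(actual / estimate for estimate, actual in samples) / len(samples)

    def banked_estimate(person, estimate_days):
        """Turn a raw estimate into something you can plan around."""
        return estimate_days * correction_factor(history[person])

    print(banked_estimate("steve", 7))   # "one more week" -> roughly a month
    print(banked_estimate("alice", 4))   # the scary-precise folks pass through unchanged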


Bob has been an active member of the Austin startup community for 10+ years and I've talked with him many times. As an EE, it was cool meeting him the first time, and once I'd chatted with him a few times, I finally asked the question I'd been dying to ask: How'd you come up with "Metcalfe's Law"?

Metcalfe's Law states that the value of a network is proportional to the square of the number of devices in the system.
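In symbols, for a network of n devices, the claim is roughly the following (the pairwise-link count is the usual back-of-the-envelope justification):

    % Metcalfe's Law: value grows with the square of the number of devices n,
    % since the number of possible pairwise links is n(n-1)/2, on the order of n^2.
    V(n) \propto n^2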

When I finally asked him, he looked at me and said "I made it up."

Me: .. what?

Him: I was selling network cards and I wanted people to buy more.

Me: .. what?

Him: If I could convince someone to buy 4 instead of 2, that was great. So I told them buying more made each of them more valuable.

It was mind blowing because so many other things were built on that "law" that began as a sales pitch. Lots of people have proven out "more nodes are more valuable" but that's where it started.

He also tells a story about declining a job with Steve Jobs to start 3Com and Steve later coming to his wedding. He also shared a scan of his original pitch deck for 3Com, which was a set of transparencies because PowerPoint hadn't been invented yet. I think I kept a copy of it...


My mom was a PLATO developer. She wrote computer based learning courses for it.

What I remember about PLATO was the games. I think there was one where you could drop a flower pot on Mickey Mouse's head. Does that sound familiar to anyone?


Ryan and David and team, I am so happy to see this posted!

I'm quoted a few places in it. Yes, the story about the origin of the phrase "fire an event" is true and correct. I could even tell you exactly where I was sitting and which way I was facing when that event fired.

Some things stick in your mind, doobies or not.

Also mentioned is the VBX. In some ways, this may have been the worst API I ever designed. It was so bad that Microsoft eventually replaced it with COM! But it was the most successful.

If anyone has any questions, fire a comment!


I watched all 157 of these: https://www.youtube.com/playlist?list=PLVV0r6CmEsFzDA6mtmKQE... Really interesting.

I spoke to him once at a book signing and asked him about Orion. In summary he said: would it have worked? Probably. Should it have been done? Probably not. Although he did make the point that pretty much every big engineering project kills people.


This is awesome! This kind of extreme low-power application was the original reason I wanted Mosh -- I always thought it would be awesome to have a wireless (hand-cranked?) laptop that could run SSH and would only consume energy when (a) a key was pressed [to update the locally generated UI], or (b) it wanted to fire up the radio (on its chosen schedule, e.g. <1 Hz) to retrieve an update from the server of the "ground truth" UI and reconcile that with the local prediction.

I worked on the UIUC PLATO system in the 1970s: CDC 6600 and 7600 CPUs with 60-bit words. Back then everything used magnetic core memory, and that memory was unbelievably expensive! Sewn together by women in Southeast Asia, maybe $1 per word!

Having 6-bit bytes on a CDC was a terrific PITA! The byte size was a tradeoff between saving MONEY (RAM) and the hassle of shift codes (070) used to get uppercase letters and rare symbols! Once semiconductor memory began to be available (2M words of 'ECS' - "extended core storage," actually semiconductor memory - was added to our 1M byte memory in ~1978), computer architects could afford to burn the extra 2 bits in every word to make programming easier...

At about the same time, microprocessors like the 8008 were starting to take off (1975). If the basic instruction could not support a 0-100 value, it would be virtually useless! There was only one microprocessor that DID NOT use the 8-bit byte, and that was the 12-bit Intersil 6100, which copied the PDP-8 instruction set!

Also, the invention of double-precision floating point made 32-bit floating point okay. From the 40s till the 70s, the most critical decision in computer architecture was the size of the floating-point word: 36, 48, 52, 60 bits... 32 bits is clearly inadequate on its own, but the idea that you could have a second, larger floating-point precision - an FPU that handled both 32- AND 64-bit words - made 32-bit floating point acceptable.

Also, in the early 1970s text processing took off, partly from the invention of ASCII (1963), partly from 8-bit microprocessors, and partly from a little-known OS whose fundamental idea was that characters should be the only unit of I/O (Unix, the father of Linux).

So why do we have 8-bit bytes? Thank you, Gordon Moore!


Olga was a famous doorkeeper at Chalmers University of Technology, Sweden.

Students used to send her postcards from their journeys.

It became so popular that it was enough to write "Olga, Sweden" for the letters to reach her [0] (source in Swedish).

[0]: https://sv.wikipedia.org/wiki/Olga_Boberg


At age 15 I wrote a pacman clone for the Atari ST and was both impressed by and jealous of Minter's Llamatron for the same platform. My game only rarely ran at 30Hz, usually degrading to 15Hz, and you could really feel it in the gameplay. Llamatron was always fast (always 60Hz?) -- because that's just how Jeff rolls. Respect.

On the plus side, my crappy pacman clone was good enough to convince Andy Gavin to (years later) bring me on as the first developer hire at Naughty Dog. The system works! (I guess?)


John and I were in graduate school together (computer science, U Wisconsin - Madison). He was indeed a remarkable person. He was blind and deaf. He carried around a little mechanical Braille typewriter. To talk with John, you would type, and he would extend his hand into the device and feel the Braille impressions of what you were typing. He was not qualified for a normal seeing eye dog program because of the extent of his disabilities. So, he got a dog on his own and trained her. Her name was Sugar, and I can still hear John talking with and giving instructions to her. He was a living demonstration of the stunning heights that people achieve from time to time. I believe his PhD advisor was Marvin Solomon who was (is) also a remarkable and admirable person.

I used to work for Jimmy Cauty and Bill Drummond's record label. This book looks interesting ... but I really wish the focus was on the art and the music, and not the "the guys who burnt 1 million quid" incident.

One thing that the media have a tough time recognizing is the fact that Bill and Jimmy are legit experimental artists, and still love making art of all kinds. And they also happened to have some amazing musical talent and experience (Jimmy: The Orb; Bill: Big In Japan and early manager of Echo and the Bunnymen).

So, they decided to take their talents into areas where few experimental artists had ever gone before, taking over the pop charts ... and then proceeded to do what experimental artists are wont to do in such a situation.

They gave a huge middle finger to the industry, by barnstorming the big UK music industry award ceremony (the '92 Brit Awards), playing a death metal version of one of their dance hits while Bill fired blanks from a machine gun over the heads of the crowd. Later in the evening they dumped a dead sheep outside one of the after-parties, and shortly afterwards deleted their entire back catalogue.

They proceeded to do lots of other experimental stuff, ranging from writing some excellent books (I recommend Bill's "45") to activities such as Jimmy's model English village a few years back.

And the music, 30+ years later, is still fantastic! Not just the pop hits. Listen to The White Room. Dig up the club singles and experiments like the Abba and Whitney projects and It's Grim Up North.

Yet after all that, what does the media remember them for? More often than not, it's the one-off act of Burning a Million Quid in 1994. Their ground-breaking music, the books, the anti-establishment statements and art ... it's almost an afterthought.

It's like releasing an otherwise interesting book about Ozzy Osbourne - a seminal figure in the history of heavy metal with an unusual groundbreaking role in reality TV - and positioning it around a single sensational incident from 1982, "Crazy Train: Ozzy Osbourne, the Man Who Bit The Head Off A Bat"


If you’re curious about how this sort of brain stimulation works, I just published a fun little explainer in PLOS Biology.

https://journals.plos.org/plosbiology/article?id=10.1371/jou...


(Update: Commenter "homero" mentioned that Twilio's CNAM API response includes the carrier: https://support.twilio.com/hc/en-us/articles/360050891214-Ge... . Twilio's docs make it sound like this API does incorporate mobile number portability, which is what you need, but I haven't personally verified. Can anyone from Twilio confirm that the LNP info is at least near-realtime?)

You'll need either access to an SS7 routing system or, more likely, an HTTP API that exposes 10-digit number routing info. Google for '10 digit OCN lookup' or 'realtime CNAM lookup API' and you'll be on the right track. You need one that handles mobile number portability. Most APIs charge a small per-query fee because it's not static data. Any one number can be ported at any time and the only way to know is to see where (in SS7) it's actually routed.
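As a rough illustration of the lookup step, here is a minimal sketch in Python against a hypothetical LNP-aware HTTP API - the endpoint, auth, and response fields are placeholders, not any particular provider's API:

    # Sketch of a per-number carrier/OCN lookup. Real providers differ in
    # details, but the shape is usually similar: one query per number,
    # because ported numbers can move to another carrier at any time.
    import json
    import urllib.parse
    import urllib.request

    API_URL = "https://lookup-provider.invalid/v1/number"  # hypothetical endpoint
    API_KEY = "your-api-key"

    def lookup_routing(number_e164):
        query = urllib.parse.urlencode({"number": number_e164, "api_key": API_KEY})
        with urllib.request.urlopen(API_URL + "?" + query, timeout=10) as resp:
            data = json.load(resp)
        # Fields you typically care about: current carrier, OCN, line type.
        return {k: data.get(k) for k in ("carrier", "ocn", "line_type")}

    print(lookup_routing("+15555550123"))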

And be aware that there are a fair number of gotchas. I have a lot of experience in the telco world[1]. The two big gotchas are:

1. Inbound and outbound carriers can be and often are different, and outbound caller ID can be spoofed. The source number on an SMS from a 10-digit number (a "10DLC" SMS) is much, much less likely to be spoofed than the caller ID on a robocall. You can fairly reliably report source numbers on SMSes.

To keep it simple, consider starting by reporting SMSes.

For robocalls, expect that many robocall CIDs are spoofed, and the most interesting robocalls are the ones that ask the recipient to contact them at the same number. Or, where both the CID and the callback/contact number are DIDs from the same carrier.

2. Number portability means that all the old static databases (LERG and NPA-NXX-Y Number Pooling[2] databases) aren't enough. One phone number might be routed to one carrier and the sequentially-next number might be routed to a completely different carrier, and either of them might change the next day.

This is just the start - there are other gotchas and a pretty significant learning curve. Stay polite and professional, assume good intentions, and assume you're wrong about something.

[1]: Back in 2010, I made the first free, public REST API for looking up phone data: https://www.slideshare.net/troyd/cloudvox-digits-phone-api-l..., https://www.prnewswire.com/news-releases/cloudvox-launches-f...

[2]: https://nationalnanpa.com/ and for thousands-block reports, https://www.nationalpooling.com/ -> Reports -> Block Report by Region


Hah! Author of that 20-year-old web page here.

At the time I was attempting to use standard open source image processing software like ImageMagick to manipulate scientific data. I was disappointed to find that it was not suitable, both due to approximations like this one, and because all the libraries I looked at only allowed 8-bit grayscale. I really wanted floating point data.

Here is what I was working on back in those days: https://www.ocf.berkeley.edu/~fricke/projects/israel/project...

I was a summer student at the Weizmann Institute of Science in Rehovot, Israel, processing electron micrographs of a particular protein structure made by a particular bacterium. It's very interesting: this bacterium attacks plants and injects some genetic material into the plant, causing the plant to start manufacturing food for the bacterium. By replacing the "payload" of this protein structure, the mechanism can be used to insert other genetic structures into the plant, instead of the sequence that causes it to produce food for the bacterium. Or something like that.

Here's a random chunk of my research journal from those days: https://www.ocf.berkeley.edu/~fricke/projects/israel/journal...

The work contributed to this paper: https://www.jbc.org/article/S0021-9258(20)66439-0/fulltext

Here's the Wikipedia article about the author of that algorithm: https://en.wikipedia.org/wiki/Alan_W._Paeth

And his original web page that I linked to, now via archive.org: https://web.archive.org/web/20050228223159/http://people.ouc...

If you liked this trick, check out Alan Paeth's "Graphics Gems" series of books.
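For anyone curious - and assuming the trick in question is Paeth's rotation-by-three-shears, which is what I take the linked page to be about - the decomposition is:

    % Paeth's raster rotation (assuming that is the trick referenced here):
    % a rotation by theta decomposes into three shears, each of which only
    % slides whole rows or columns of pixels.
    \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
    =
    \begin{pmatrix} 1 & -\tan\frac{\theta}{2} \\ 0 & 1 \end{pmatrix}
    \begin{pmatrix} 1 & 0 \\ \sin\theta & 1 \end{pmatrix}
    \begin{pmatrix} 1 & -\tan\frac{\theta}{2} \\ 0 & 1 \end{pmatrix}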

Kudos and thanks to the OCF at UC Berkeley which has hosted my web page there for more than a quarter century with just about zero maintenance on my part.

And thanks for the trip down memory lane!


Yay, my research field on Hackernews!

Great achievement. The title is of course misleading: The Orbitrap itself (the part that "fits in your hand") was hardly miniaturized; it's about the same size as a regular one [0]. The achievement is to miniaturize the "mass, volume and" (IMO especially!) "power requirements" of the box around it (which, even miniaturized, does not fit in your hand). This runs at 41W and weighs 8kg. A commercial instrument runs at a total of ~2kW and weighs >100kg.

(Though in space they have the convenient advantage that no extra vacuum system is needed, which accounts for a lot of the size and energy consumption of these instruments here on Earth. The atmosphere on Europa is conveniently just about the "natural" operating condition of an Orbitrap, which it requires for its high accuracy.)

[0] https://commons.wikimedia.org/wiki/File:Orbitrap_Mass_Analyz...


Hi there! I work on the TypeScript team and I respect your feedback. Of course I do think TypeScript is worth it, and I'll try to address some of the points you've raised with my thoughts.

i. Dependency management is indeed frustrating. TypeScript doesn't create a new major version for every more-advanced check. In cases where inference might improve or new analyses are added, we run the risk of affecting existing builds. My best advice on this front is to lock to a specific minor version of TS.

ii. My anecdotal experience is that library documentation could indeed be better; however, that's been the case with JavaScript libraries regardless of types.

iii. Our error messages need to get better - I'm in full agreement with you. Often a concrete repro is a good way to get us thinking. Our error reporting system can often take shortcuts to provide a good error message when we recognize a pattern.

iv. Compilation can be a burden from tooling overhead. For the front-end, it is usually less of a pain since tools like esbuild and swc are making these so much faster and seamless (assuming you're bundling anyway - which is likely if you use npm). For a platform like Node.js, it is admittedly still a bit annoying. You can still use those tools, or you can even use TypeScript for type-checking `.js` files with JSDoc. Long-term, we've been investigating ways to bring type annotations to JavaScript itself and checked by TypeScript - but that might be years away.

I know that these points might not give you back the time you spent working on these issues - but maybe they'll help avoid the same frustrations in the future.

If you have any other thoughts or want to dig into specifics, feel free to reach out at Daniel <dot> MyLastName at Microsoft <dot-com>.


Any of her descendants here on HN? She had 11 and it's been more than half a century now, so there should be at least one or two HN readers.

The basic problem, as I've written before[1][2], is that, after I put in Nagle's algorithm, Berkeley put in delayed ACKs. Delayed ACKs delay sending an empty ACK packet for a short, fixed period based on human typing speed, maybe 100ms. This was a hack Berkeley put in to handle large numbers of dumb terminals going into time-sharing computers using terminal-to-Ethernet concentrators. Without delayed ACKs, each keystroke sent a datagram with one payload byte, and got a datagram back with no payload, just an ACK, followed shortly thereafter by a datagram with one echoed character. So they got a 30% load reduction for their TELNET application.

Those two algorithms should never both be on at the same time. But they usually are.

Linux has a socket option, TCP_QUICKACK, to turn off delayed ACKs. But it's very strange. The documentation is kind of vague, but apparently you have to re-enable it regularly.[3]
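Something like this, as a minimal Linux-only sketch in Python (TCP_NODELAY turns off Nagle; TCP_QUICKACK, value 12 on Linux, suppresses delayed ACKs but has to be re-applied around reads - illustrative only, not production code):

    # Disable Nagle once; re-assert TCP_QUICKACK around receives, since the
    # option is not sticky and the kernel can fall back to delayed ACKs.
    import socket

    TCP_QUICKACK = getattr(socket, "TCP_QUICKACK", 12)  # 12 on Linux

    sock = socket.create_connection(("example.com", 80))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # Nagle off

    def recv_quickack(s, nbytes=4096):
        s.setsockopt(socket.IPPROTO_TCP, TCP_QUICKACK, 1)  # ACK immediately
        data = s.recv(nbytes)
        s.setsockopt(socket.IPPROTO_TCP, TCP_QUICKACK, 1)  # re-enable for next read
        return data

    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(recv_quickack(sock)[:80])
    sock.close()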

Sigh.

[1] https://news.ycombinator.com/item?id=10608356

[2] https://developers.slashdot.org/comments.pl?cid=14515105&sid...

[3] https://stackoverflow.com/questions/46587168/when-during-the...


I used to work for Sherwin-Williams. The in-store computers run some custom *nix OS. The software the company runs on is a text-based UI that hasn't changed since it was introduced in the 90s.

They released a major update in 2020 that allowed you to move windows around the screen. It was groundbreaking.

But let me tell you, this system was absolutely terrible. All the machines were full x86 desktops with no hard drive; they netbooted from the manager's computer. Why not a thin client? A mystery.

The system stored a local cache of the database, which is only superficially useful. The cache is always several days, weeks, or months out of date, depending on what data you need. Most functions require querying the database hosted at corporate HQ in Cleveland. That link is up about 90% of the time, and when it's down, every store in the country is crippled.

It crashes frequently and is fundamentally incapable of concurrent access: if an order is open on the mixing station, you cannot access that order to bill the customer, and you can't access their account at all. Frequently, the system loses track of which records are open, requiring the manager to manually override the DB lock just to bill an order.

If a store has been operating for more than a couple of years, the DB gets bloated or fragmented or something, and the entire system slows to a crawl. It takes minutes to open an order.

Which is all to say it's a bad system that cannot support their current scale of business.


As someone who went through western European modernist composition school (and who, while entering it, fully believed in the supremacy of modernist music), it was a painful process to notice that most of the post-WWII ("classical") music composed in Europe is so disconnected not just from the wider audience, but from what I'd call the physical aspect of music: pulse and resonance. Most of it just makes the audience feel anxious and confused, if they are even able to pay attention.

It was then like waking up from a nightmare when I discovered the American school of minimalism. It restored my faith in art music and in the idea that writing for classical instruments still makes sense in the 21st century. So it makes me wonder why the article doesn't mention what I think is the greatest masterpiece of aleatoric music, and the starting point of minimalist music - Terry Riley's In C [1]. It comes from a different continent than the original aleatoric music and its aesthetic is a "bit" different, but it has an aleatoric structure - and listening to it is a joyful experience, which I can't say about most of the examples linked in the article.

1: https://youtu.be/DpYBhX0UH04


Amazingly brilliant work, especially given the CPU capabilities at the time. Carmack's use of BSP trees inspired my own work on the Crash Bandicoot renderer. I was also really intrigued by Seth Teller's Ph.D. thesis on Precomputed Visibility Sets though I knew that would never run on home console hardware.

None of these techniques is relevant anymore given that all the hardware has Z buffers, obviating the need to explicitly order the polygons during the rendering process. But at the time (mid 90s) it was arguably the key problem 3D game developers needed to solve. (The other was camera control; for Crash Andy Gavin did that.)

A key insight is that sorting polygons correctly is inherently O(N^2), not O(N lg N) as most would initially assume. This is because polygon overlap is not a transitive property (A in front of B and B in front of C does NOT imply A in front of C, due to cyclic overlap.) This means you can't use O(N lg N) sorting, which in turn means sorting 1000 polygons requires a million comparisons -- infeasible for hardware at the time.
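A toy illustration of the non-transitivity (the three-element "in front of" cycle is contrived, but it is exactly the situation cyclic overlap creates):

    # With a cyclic "in front of" relation, no back-to-front order satisfies
    # every pairwise constraint, so comparison sorting (which assumes
    # transitivity) cannot produce a correct painter's-algorithm order.
    from itertools import permutations

    # A is in front of B, B in front of C, C in front of A (cyclic overlap).
    in_front_of = {("A", "B"), ("B", "C"), ("C", "A")}

    for order in permutations("ABC"):      # candidate back-to-front draw orders
        violations = [(near, far) for near, far in in_front_of
                      if order.index(near) < order.index(far)]  # near drawn too early
        print(order, "violates", violations)  # every order violates something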

This is why many games from that era (3DO, PS1, etc) suffer from polygons that flicker back and forth, in front of and behind each other: most games used bucket sorting, which is O(N) but only approximate, and not stable frame to frame.

The handful of games that did something more clever to enable correct polygon sorting (Doom, Crash and I'm sure a few others) looked much better as a result.

Finally, just to screw with other developers, I generated a giant file of random data to fill up the Crash 1 CD and labeled it "bsptree.dat". I feel a bit guilty about that given that everyone has to download it when installing the game from the internet, even though it is completely useless to the game.


In 2015 I was working at a "fintech" company and a leap second was announced. It was scheduled for a Wednesday, unlike all others before which had happened on the weekend, when markets were closed.

When the previous leap second was applied, a bunch of our Linux servers had kernel panics for some reason, so needless to say everyone was really concerned about a leap second happening during trading hours.

So I was assigned to make sure nothing bad would happen. I spent a month in the lab, simulating the leap second by fast forwarding clocks for all our different applications, testing different NTP implementations (I like chrony, for what it's worth). I had heaps of meetings with our partners trying to figure out what their plans were (they had none), and test what would happen if their clocks went backwards. I had to learn about how to install the leap seconds file into a bunch of software I never even knew existed, write various recovery scripts, and at one point was knee-deep in ntpd and Solaris kernel code.

After all that, the day before it was scheduled, the whole trading world agreed to halt the markets for 15 minutes before/after the leap second, so all my work was for nothing. I'm not sure what the moral is here, if there is one.


The article describes how Apple included support for the x86 parity flag which comes from the 8080. Parity is relatively expensive to compute, requiring XOR of all the bits, so it's not an obvious thing to include in a processor. So why did early Intel processors have it? The reason is older than the 8080.

The Datapoint 2200 was a programmable computer terminal announced in 1970 with an 8-bit serial processor implemented in TTL chips. Because it was used as a terminal, they included parity for ASCII communication. Because it was a serial processor, it was little-endian, starting with the lowest bit. The makers talked to Intel and Texas Instruments to see if the board of TTL chips could be replaced with a single-chip processor. Both manufacturers cloned the existing Datapoint architecture. Texas Instruments produced the TMX 1795 microprocessor chip and slightly later, Intel produced the 8008 chip. Datapoint rejected both chips and stayed with TTL, which was considerably faster. (A good decision in the short term but very bad in the long term.) Texas Instruments couldn't find another buyer for the TMX 1795 so it vanished into obscurity. Intel, however, decided to sell the 8008 as a general-purpose processor, changing computing forever.

Intel improved the 8008 to create the 8080. Intel planned to change the world with the 32-bit iAPX 432 processor which implemented object-oriented programming and garbage collection in hardware. However, the 432 was delayed, so they introduced the 8086 as a temporary stop-gap, a 16-bit chip that supported translated 8080 assembly code. Necessarily, the 8086 included the parity flag and little endian order for compatibility. Of course, the 8086 was hugely popular and the iAPX 432 was a failure. The 8086 led to the x86 architecture that is so popular today.

So that's the history of why x86 has a parity bit and little-endian order, features that don't make a lot of sense now but completely made sense for the Datapoint 2200. Essentially, Apple is putting features into their processor for compatibility with a terminal from 1971.
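A concrete aside on what that flag computes: x86's PF looks only at the low 8 bits of a result and is set when they contain an even number of 1 bits, i.e. the XOR of all eight bits. A small sketch:

    def parity_flag(value):
        """x86-style PF: True when the low 8 bits hold an even number of 1s,
        computed by XOR-folding the byte down to a single bit."""
        b = value & 0xFF
        b ^= b >> 4
        b ^= b >> 2
        b ^= b >> 1
        return (b & 1) == 0  # the final bit is the XOR of all 8 bits

    print(parity_flag(0b10110100))  # four 1 bits  -> True  (PF set)
    print(parity_flag(0b10110101))  # five 1 bits  -> False (PF clear)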


I once spent a month blind taste-testing apples with my wife to compare the apples in our supermarket. We learned 4 things:

1) Most of the Honeycrisp varietals that make it to market are good (Wild Twist and Cosmic Crisp being our favorites), but…

2) The time of year makes a HUGE difference. It seems obvious when I say it, but different varietals from different farms are best in different weeks. Honeycrisp has an advantage here because it has so many growers that someone is keeping a batch in good condition for practically every week of the year.

3) You have to go by the apples in your local market. Lists like these are hard to use because there are many more apples on them than you have available to choose from - most grocery stores only stock 3-10 varieties depending on the time of year.

4) Your use case is critical. Obviously baking a pie requires different apples than eating, but even if you are just eating the apples raw there are differences. Some apples beat others in texture when cut up, but have the wrong density to eat by biting down on the apple.

After all our testing, we mostly went back to Honeycrisp because it’s so reliable.


I've been chasing infrasonic ranges in home audio for over 2 decades. You can't "detect" these frequencies in the normal way. You experience them by way of your physical environment being excited by them. Feeling pressure waves move through whatever you are standing/sitting on can add an entire new dimension to the experience.

I used to run experiments with friends and family using a 800L ported subwoofer tuned to ~13Hz with a 40Hz cutoff. Not one person would mistake it for being on vs off. Certain content makes these frequencies substantially more obvious. Classical music performed in large concert halls is one surprising candidate outside of Mission Impossible scenes. Being able to "feel" the original auditorium in your listening room is a very cool effect to me.


I can't resist saying one last thing about Siegel zeros: number theorists REALLY would like for this result to be correct because the possibility of Siegel zeros is unbelievably annoying. I mean mathematicians are supposed to enjoy challenges / difficulties, but Siegel zeros are just so recurrently irritating. The possibility of Siegel zeros means that in so many theorems you want to write down, you have to write caveats like "unless a Siegel zero exists," or split into two cases based on if Siegel zeros exist or don't exist, etc.

But here is the worst (or "most mysterious," depending on your mood..) thing about Siegel zeros. Our best result about Siegel zeros (excluding for present discussion Zhang's work), namely Siegel's theorem, is ineffective. That is, it says "there exists some constant C > 0 such that..." but it can tell you nothing about that constant beyond that it is positive and finite (we say that the constant is "not effectively computable from the proof").*
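For reference, one standard formulation of Siegel's theorem - the C(ε) here is exactly the constant the proof cannot pin down:

    % Siegel's theorem: for every eps > 0 there exists an (ineffective)
    % constant C(eps) > 0 such that, for every real primitive Dirichlet
    % character chi modulo q,
    L(1,\chi) > \frac{C(\varepsilon)}{q^{\varepsilon}}
    % Equivalently, L(s,\chi) has no real zero \beta with
    % \beta > 1 - C'(\varepsilon) / q^{\varepsilon}.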

So then, if you try to use Siegel's theorem to prove things about primes, this ineffectivity trickles down (think "fruit of the poisoned tree"). For example, standard texts on analytic number theory include a proof of the following theorem: any sufficiently large odd integer is the sum of three primes. However, the proof in most standard texts fundamentally cannot tell you what the threshold for "sufficiently large" is, because the proof uses Siegel's theorem! In this particular case, it turns out that one can avoid Siegel's theorem, and in fact the statement "Any odd integer larger than five is the sum of three primes" is now known https://en.wikipedia.org/wiki/Goldbach%27s_weak_conjecture. But it is certainly not always possible to avoid Siegel's theorem, and Zhang's result would make so many theorems which right now involve ineffectively computable constants effective.

*Why is the constant not effectively computable? Because the proof proceeds basically like this. First: assume the Generalized Riemann Hypothesis. Then the result is trivial, Siegel zeros are exceptions to GRH and don't occur if GRH is true. Next, assume GRH is false. Take a "minimal" counterexample to GRH, and use it to "repel" or "exclude" other possible counterexamples.


I’ve read there’s a push to legalize cocaine. I’m willing to make Universal Paperclips illegal in exchange for cocaine legalization. Overall harm reduction.

I wanted a DAW to record my own fledgling attempts at electronic music. I took a vow in the 80s to never use Windows, Apple systems seemed overpriced, and I had years of *nix experience, including a bit of Linux. In the late 90s there was a Linux app called "Multitrack" which seemed capable on the surface, but it turned out not to be. I called Digidesign to ask them if I could port ProTools to Linux for them (for free), and they laughed. So I thought "how hard could it be to just write my own?" ... 22+ years later, here I am.

A longer version is here: https://discourse.ardour.org/t/ardour-20th-birthday/102333

