Hacker News highlights
If you run across a great HN comment (or comment tree), please tell us at hn@ycombinator.com so we can add it here.

Wow, that's the original release I did ages ago, with my notes and everything! I spent a few weekends extracting proprietary code/libraries from the code base so it could be released. Doing that work changed the direction of my life in a way, leading me from hobby game development work to working on Descent 3.

breakable with a Python script

The traditional, elegant method of a more civilized age:

Last on the program were Len Adleman and his computer, which had accepted a challenge on the first night of the conference. The hour passed; various techniques for attacking knapsack systems with different characteristics were heard; and the Apple II sat on the table waiting to reveal the results of its labors. At last Adleman rose to speak mumbling something self-deprecatingly about "the theory first, the public humiliation later" and beginning to explain his work. All the while the figure of Carl Nicolai moved silently in the background setting up the computer and copying a sequence of numbers from its screen onto a transparency. At last another transparency was drawn from a sealed envelope and the results placed side by side on the projector. They were identical. The public humiliation was not Adleman's, it was knapsack's.

W. Diffie, The first ten years of public-key cryptography, Proceedings of the IEEE, vol. 76, no. 5, pp. 560-577, May 1988


Very beautiful research and thorough documentation. I initially wanted to comment that this looks a lot like time-domain reflectometry on a conceptual level - but as Cindy Harnett seems to be your advisor, you probably know that already :)

Very cool! We're actually doing something quite similar (although on a much smaller scale) at the TU Berlin: https://www.tu.berlin/en/about/profile/press-releases-news/n...

Just recently, we completed a series of correction burns to match the semi-major axis of the orbit of both satellites to stop them drifting away from each other. In a few weeks/months we'll bring them back together and send a software update that will allow them to autonomously maintain formation flight.

Also, we will be doing live satellite operations at the Lange Nacht der Wissenschaften (22. June 2024) :)


I'm at Sourcegraph (mentioned in the blog post). We obviously have to deal with massive scale, but for anyone starting out adding code search to their product, I'd recommend not starting with an index and just doing on-the-fly searching until that does not scale. It actually will scale well for longer than you think if you just need to find the first N matches (because that result buffer can be filled without needing to search everything exhaustively). Happy to chat with anyone who's building this kind of thing, including with folks at Val Town, which is awesome.

Hmm, well, imagine this turning up on HN! I attend one, two, maybe three funerals every year because of this exact project, and while every one of these is very different, the experience is always humbling.

Most of the time, literally nobody will show up. So, you'll pay your respects to the deceased using a short speech that is based on the information provided (which is usually pretty scarce, like: "found abandoned, identified, but no next-of-kin responded") complemented with Google and your imagination, and that's about it.

Then, there are the occasions where one or two people will attend. These are usually the hardest: you have to make clear that you don't actually know the first thing about the deceased, other than what Google told you, and that might be ENTIRELY wrong, but still have to deliver a coherent eulogy. Poems work best for these situations, and sometimes talking to the visitors is quite revealing as well.

And then, there are surprises, like a room full of people turning up, and you being able to elicit stories from family and friends, and basically having a regular funeral. But I admit that happened once in like the past decade or so.

Anyway, I think it's important that nobody is left to their final resting place without witnesses, and I also find avoiding that is a good way to engage with your community...


Back in my aerospace days I worked on an obscure secure operating system, which, unfortunately, was built for the PDP-11 just as the PDP-11 neared end of life. This was when NSA was getting interested in computer security. NSA tried applying the same criteria to computer security they applied to safes and filing cabinets for classified documents. A red team tried to break in. If they succeeded, the vendor got a list of the problems found, and one more chance for an evaluation. On the second time around, if a break in succeeded, the product was rejected.

Vendors screamed. Loudly. Loudly enough that the evaluation process was moved out of NSA and weakened. It was outsourced to approved commercial labs, and the vendor could keep trying over and over until they passed the test, or wore down the red team. Standards were weakened. There was vendor demand that the highest security levels (including verification down to the hardware level) not even be listed, because they made vendors look bad.

A few systems did pass the NSA tests, but they were obscure and mostly from minor vendors. Honeywell and Prime managed to get systems approved. (It was, for a long time, a joke that the Pentagon's MULTICS system had the budgets of all three services, isolated well enough that they couldn't see each other's budget, but the office of the Secretary of Defense could see all of them.)

What really killed this was that in 1980, DoD was the dominant buyer of computers, and by 1990, the industry was way beyond that.


Whenever this topic comes up there are always comments saying that SGI was taken by surprise by cheap hardware and if only they had seen it coming they could have prepared for it and managed it.

I was there around 97 (?) and remember everyone in the company being asked to read the book "The Innovator's Dilemma", which described exactly this situation - a high end company being overtaken by worse but cheaper competitors that improved year by year until they take the entire market. The point being that the company was extremely aware of what was happening. It was not taken by surprise. But in spite of that, it was still unable to respond.


Earthquake waves have several propagation speeds, because there are different types of waves. The fastest is called the P-wave, which is a compressional (longitudinal) wave, similar to a sound wave, with a velocity of ~5-8 km/s for typical continental bedrock. The second fastest is the S-wave, or shear wave, which travels at about 65% of the P-wave speed. These body waves produce relatively little displacement at the surface (except close to the epicenter of large earthquakes) but are important seismologically. Then there are the surface waves, which are caused by the interaction of the S-waves with the surface (in a way that I don't 100% understand). These travel at about 90% of the S-wave speed, but they have the biggest displacements at the surface and are therefore the main ones that you feel and that cause damage.

The surface wave displacements also get amplified in wet or loose soil, so ground shaking and seismic damage are much greater in areas on top of sediment rather than bedrock. Areas on a river, lake, or coast where the land has been extended into the water by dumping fill dirt are the worst--ground shaking is really bad and they are very prone to liquefaction.

The difference between the arrival times (at any given point on earth) of the different phases of seismic waves is a function of the distance from the earthquake itself (the hypocenter) and the observation site. It is close to linear in Euclidean distance relatively near the earthquake hypocenter, but becomes more nonlinear farther from the earthquake, because the wave speeds are faster at depth (denser rock) so the travel paths of the wave fronts (the ray paths) are nonlinear. These differences in arrival times are one of the main ways of locating the hypocenter of an earthquake given observations from seismometers at multiple sites. It's essentially triangulation, except with time differences instead of angles--this is done through solving a system of equations.
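As a toy illustration of that triangulation-with-time-differences idea, here is a Python sketch. The wave speeds, station layout, and quake location are all made up for the example, and the real problem is 3-D (with depth and origin time as unknowns, and curved ray paths), but the core logic is the same: each station's S-minus-P lag implies a distance, and the hypocenter is the point consistent with all of them.

```python
import numpy as np

VP = 6.0         # assumed P-wave speed, km/s (typical continental crust)
VS = 0.65 * VP   # S-wave speed, ~65% of P, as above

def distance_from_sp_lag(lag_s):
    """Distance (km) implied by the S-minus-P arrival-time gap at one station."""
    return lag_s / (1.0 / VS - 1.0 / VP)

def locate(stations, lags, grid=np.linspace(-100.0, 100.0, 201)):
    """Brute-force 2-D grid search: find the point whose distances to the
    stations best match the distances implied by each station's S-P lag."""
    targets = np.array([distance_from_sp_lag(t) for t in lags])
    best, best_err = None, np.inf
    for x in grid:
        for y in grid:
            d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
            err = float(np.sum((d - targets) ** 2))
            if err < best_err:
                best, best_err = (float(x), float(y)), err
    return best

# Synthetic check: a quake at (10, -20) km observed by three stations.
quake = np.array([10.0, -20.0])
stations = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])
dists = np.hypot(*(stations - quake).T)
lags = dists * (1.0 / VS - 1.0 / VP)   # forward model: lag grows with distance
print(locate(stations, lags))          # recovers a point close to (10, -20)
```

Real location codes additionally solve for depth and origin time, and use travel-time tables rather than straight-line paths, which is where the nonlinearity mentioned above comes in.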

Additionally, S-waves can't pass through liquids, so there is the 'S-wave shadow zone' that occupies a large fraction of the side of the earth opposite an earthquake where there are no primary S-wave arrivals--S-waves are blocked by the liquid outer core. This is how we found out that the outer core is liquid!


My dad worked at SRI for over thirty years and my mom also worked there. Money has always been an issue at SRI. You always had to be on the lookout for the next contract. If some company or part of the government wasn't paying for your work, there was always the chance that you would get laid off. On the other hand, my dad got to work on a lot of different projects over the years, from growing silicon crystals, to working on holograms, laser range finders, and a laser chemical weapons detector (deployed during the Iraq war), something called the Spindt cathode, which I honestly don't understand, LED printing, and many other projects. I think it was a very fun place to work, but also quite stressful. You always needed to be ready to switch to something new if the money started running out. It doesn't sound all that different from the way it is today.

The employee open house was really neat, with different labs showing off whatever they were working on, from early noise-canceling tech, to computers with color screens, cell counters, you name it. I know we visited "Doug's Lab" but I have no idea what we saw there. Like any aspiring nerd, I was quite impressed that my dad and he were on a first-name basis.


Oh boy, this gives me a chance to talk about one of the gems of astronomy software which deserves to be better known: HEALPixel tessellation!

HEALPixels stand for 'Hierarchical Equal-Area Iso-latitudinal Pixels'. It is a scheme that was developed to analyze signals that cover the entire sky, but with variable density.

Like HTM or Hilbert curves, this can be used to organize spatial data.

The tessellation looks kind of funny but has many good features - it doesn't have discontinuities at poles, and is always equal area. And with the "nested" healpixel formulation, pixels are identified by integers. Pixel IDs are hierarchical based on leading bits - so, for example, pixel 106 (=0110 1010) contains pixel 1709 (=0110 1010 1101). This lets you do some marvelous optimizations in queries if you structure your data appropriately. Nearest neighbor searches can be extremely quick if things are HEALPix-indexed - and so can radius searches, and arbitrary polygon searches.
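The leading-bits trick can be sketched in a few lines of Python. (The pixel orders here are chosen only so the example numbers above line up; real code would use a library like healpy rather than raw bit fiddling.)

```python
def contains(parent, parent_order, child, child_order):
    """Nested-scheme containment: each extra order appends 2 bits
    (4 subpixels), so descendants share the parent's leading bits."""
    if child_order < parent_order:
        return False
    return child >> (2 * (child_order - parent_order)) == parent

def descendant_range(parent, parent_order, target_order):
    """All descendants of `parent` at `target_order` form one contiguous
    half-open integer range [lo, hi) -- which is why a spatial query can
    become a handful of range scans over a pixel-sorted table."""
    shift = 2 * (target_order - parent_order)
    return parent << shift, (parent + 1) << shift

# The example from above: 1709 >> 4 == 106, two orders apart.
print(contains(106, 2, 1709, 4))      # True
print(descendant_range(106, 2, 4))    # (1696, 1712), and 1696 <= 1709 < 1712
```

The contiguous-range property is the key to the query optimizations: a region on the sky becomes a short list of integer intervals, which any B-tree-indexed database can scan quickly.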

HEALPixels are used today for more than just their original intent. LSST will use them for storing all-sky data and point source catalogs, for example.

More here:

- Original NASA/JPL site: https://healpix.jpl.nasa.gov/

- Popular Python implementation: https://healpy.readthedocs.io/en/latest/

- Good PDF primer: https://healpix.jpl.nasa.gov/pdf/intro.pdf

And an experimental database being built on healpix for extremely large data volumes (certainly many TB, maybe single-digit PB): https://github.com/astronomy-commons/hipscat


The minimum puzzle length for spelling bee is 20 words iirc. The dictionary is also a highly curated list of “common” words. What constitutes a valid word is up to Sam, the NYT editor. It’s designed to make the puzzles doable by the average solver. You’ll notice that a lot of the words in the OP are very esoteric.

Source: helped build SB at NYT.


He thinks it's really amazingly cool! :D

I'm so happy to hear that I had some part to play in inspiring such a marvellous project.


Author here. Yes, Jeremy Howard and fast.ai was one of the inspirations for this! I'd actually be curious what he thinks of the project if he ever sees it.

> As others have noted, exponential smoothing has a different problem, that it asymptotically approaches but never quite reaches its destination. The obvious fix is to stop animating when the step gets below some threshold, but that's inelegant.

This is off the cuff, but you might be able to fix this as follows: interpret exponential smoothing as an ODE on the distance to the target. Call that distance D. Then exponential smoothing is the Euler update for dD/dt = -C*D (the constant C > 0 being a speed parameter). The issue you bring up is basically the fact that the solutions to the ODE are D(t) = A*exp(-C*t), which is asymptotic to zero as t -> oo but never reaches zero. Now, the fix is to replace the ODE with one that goes to zero in finite time, e.g. dD/dt = -C*sqrt(D). (Solutions are half-quadratic, i.e. they are quadratic for a bit, then stay zero once you hit zero.) The Euler update for this is stateless like you wanted.
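A quick numerical check of this idea in Python (the constants are arbitrary, and `DT` plays the role of the frame time):

```python
import math

C, DT = 4.0, 1.0 / 60.0   # speed constant and frame time (arbitrary choices)

def step_exponential(d):
    """Euler step for dD/dt = -C*D: decays forever, never exactly 0."""
    return d - C * d * DT

def step_sqrt(d):
    """Euler step for dD/dt = -C*sqrt(D): reaches exactly 0, then stays there."""
    return max(0.0, d - C * math.sqrt(d) * DT)

d_exp, d_sqrt, steps_to_zero = 1.0, 1.0, None
for n in range(1000):
    d_exp = step_exponential(d_exp)
    d_sqrt = step_sqrt(d_sqrt)
    if d_sqrt == 0.0 and steps_to_zero is None:
        steps_to_zero = n + 1

print(d_exp > 0.0)     # True -- still crawling after 1000 frames
print(steps_to_zero)   # finite -- the sqrt variant actually arrives
```

Both updates are stateless in the sense the parent comment wanted: each frame only needs the current distance, no stored start time or duration.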


Huh, then you're one of today's lucky ten thousand!

Apollo 14 had a piece of loose solder in the button triggering abort-to-orbit, so it occasionally triggered itself. This wasn't a problem en route to the moon, but the second the descent phase started, it would have been a Poisson-timed bomb that would prevent the landing.

There was a bit of memory that could be set to ignore the state of the abort button (this bit was the reason the abort sequence wasn't triggered en route). The problem was this ignore bit was reset by the landing sequence (to allow aborting once landing started), and they did not believe the astronauts would be quick enough to set the bit again before the button shorted out and triggered the abort.

(Ignoring the abort button was fine because an abort could be triggered in the computer instead. Takes a little longer but was determined a better option than scrapping the mission.)

Don Eyles came up with a clever hack. Setting the program state to 71 ("abort in progress") happened to both allow descent to start and prevented the abort button from being effective. So this program state was keyed in just before descent.

The drawback was that it obviously put the computer in an invalid state, so some things were not scheduled correctly, but Eyles and colleagues had figured out which things, and the astronauts could start those processes manually.

Then once the computer was in a reasonable state again the ignore abort bit could be set and the program mode set correctly and it was as if nothing had happened.


> I ultimately tricked it by inserting a real 27C322 first and reading that before swapping over to the chip I actually wanted to read. Once the reader’s recognized at least one chip, it seems happy to stick in 27C322 mode persistently.

My people. I only aspire to be this damn clever. It's why I surround myself with people smarter than me.


Lookup tables are the only reason I was able to get this effect working at all:

https://twitter.com/zeta0134/status/1756988843851383181

The clever bit is that there are two lookup tables here: the big one stores the lighting details for a circle with a configurable radius around the player (that's one full lookup table per radius), but the second one is a pseudo-random ordering for the background rows. I only have time to actually update 1/20th of the screen each time the torchlight routine is called, but by randomizing the order a little bit, I can give it a sort of "soft edge" and hide the raster scan that you'd otherwise see. I use a table because the random order is a grab bag (to ensure rows aren't starved of updates) and that bit is too slow to calculate in realtime.
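Roughly, the grab-bag idea looks like this Python sketch. The row and slice counts are illustrative rather than the game's actual numbers, and on the real hardware the shuffled permutation is precomputed into a table (ROM) because shuffling at runtime is far too slow:

```python
import random

ROWS, SLICE = 30, 6    # illustrative: background rows, rows updated per call

random.seed(1)         # fixed seed: the table is generated once, at build time
order = list(range(ROWS))
random.shuffle(order)  # a permutation, so no row is ever starved of updates

cursor = 0
def torchlight_tick():
    """Update the next SLICE rows in the pre-shuffled order."""
    global cursor
    touched = []
    for _ in range(SLICE):
        touched.append(order[cursor])
        cursor = (cursor + 1) % ROWS
        # ... redraw the lighting for this row here ...
    return touched

print(torchlight_tick())   # a scattered handful of rows, not a raster sweep
```

Because the order is a permutation rather than independent random picks, every row is updated exactly once per full cycle, which gives the soft edge without any row lagging behind.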


I recall a project back in the day where the customer wanted to upgrade their workstations but also save money, so we designed a solution where they'd have a beefy NT4-based Citrix server and reuse their 486 desktop machines by running the RDP client on Windows 3.11.

To make deployment easy and smooth, it was decided to use network booting and running Windows from a RAM disk.

The machines had 8MB of memory and it was found we needed 4MB for Windows to be happy, so we had a 4MB RAM disk to squeeze everything into. A colleague spent a lot of time slimming Windows down, but got stuck on the printer drivers. They had 4-5 different HP printers which required different drivers, and including all of them took way too much space.

He came to me and asked if I had some solution, and after some back and forth we found that we could reliably detect which printer was connected by scanning for various strings in the BIOS memory area. While not hired as a programmer, I had several years' experience by then, so I whipped up a tiny executable which scanned the BIOS for a given string. He then used that in the autoexec.bat file to selectively copy the correct printer driver.
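In spirit, the selection logic was something like this Python sketch (the original was a tiny DOS executable scanning real memory; the signature strings and driver filenames here are invented for illustration):

```python
# Invented signature strings and driver names, purely for illustration.
PRINTER_SIGNATURES = {
    b"LASERJET 4":  "hp4.drv",
    b"DESKJET 500": "dj500.drv",
}

def pick_driver(bios_image: bytes):
    """Scan a dump of the BIOS memory region for a known printer string
    and return the matching driver file, or None if nothing matches."""
    for signature, driver in PRINTER_SIGNATURES.items():
        if signature in bios_image:
            return driver
    return None

dump = b"\x00" * 512 + b"...LASERJET 4 PLUS..." + b"\x00" * 512
print(pick_driver(dump))   # hp4.drv
```

The batch-file side then just copies the one returned driver onto the 4MB RAM disk instead of all of them.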

Project rolled out to thousands of users across several hundred locations (one server per location) without a hitch, and worked quite well from what I recall.


The projection system, you can see in a photo, was made by Vitarama. Vitarama developed a number of interesting projection systems, but their best known was Cinerama, the three-camera widescreen format that was briefly popular in the '50s but had long-lasting influence by popularizing widescreen films. One wonders if Fred Waller, Cinerama's inventor, worked on this project. He was sort of an inventor of the classic type.

In another photo we see what I think is a Teletype Model 15, behind the clerk handling an impressive rolodex. It even appears to have a Bell Canada Trans-Canada Telephone System badge on it. Transmitting orders was a very popular application of teletype service, and a lot of Bell advertising in both the US and Canada focused on customers like Hudson's Bay and Montgomery Ward.


"Who uses APL in production?" is a common question. These days, it's mostly those who haven't stopped using it. The historical use, while never quite mainstream, might be surprising to those who never realized APL had a commercial presence at all!

- APL conferences were held every year from 1969 to 2004, minus '77 and '78 due to some unfortunate logistical issues (certainly not lack of demand). Conferences in the 80s regularly had >1000 in attendance despite both IPSA and STSC holding their own conferences many of those years. My count (which may not entirely rule out duplicates) is >1000 paper authors, and far more papers were submitted than accepted. https://aplwiki.com/wiki/APL_conference

- At least a dozen hardware manufacturers implemented an APL for their systems in the mainframe and minicomputer era (70s and 80s): https://aplwiki.com/wiki/Vendors

- Usage stats from just one university in 1978: 7,300 active accounts and 5,400 workspaces (which would each store an individual project, or something like that). Used for all sorts of administration and even a student ride sharing service. Today of course, much of this would be done in Excel. https://aplwiki.com/wiki/Syracuse_University

- The first serious e-mail system was implemented in APL in 1972, and used by Jimmy Carter in 1976 for his presidential campaign: https://forums.dyalog.com/viewtopic.php?f=30&t=1629

- I've heard but can't verify that one of the APL books (Gilman and Rose maybe) sold over 100,000 copies.


When I was at FastMail I did a lot of very manual work to not just block spammers and other abusers, but to make their life as difficult as possible. That included figuring out how to notify the people running the servers they used (including sometimes finding the IRC chat for the folks on that server and telling them they had an intruder). One of my favorite things was to redirect bounce messages that were targeted at innocent FastMail customers to the actual spammer's email address -- which I found stopped the spam from them very quickly, once their inbox filled up with thousands of bounce messages!

Personally, I think it's reasonable to care about such things, and to try to do something about it. If no-one cares or tries, then sucky people will just suck even more.


Around 2003 I did the art direction (mostly pixel-pushing...) for a game that shipped on a Nokia model. I have no recollection of what the phone looked like, but it was part of the "lifestyle" category described in this article. It wasn't one of the craziest form factors, just a candybar phone in pretty plastic with one of those early square color screens.

Nokia Design sent a massive moodboard PDF, something like 100 pages, with endless visual ideas for what seemed practically like an Autumn/Winter lineup of plastic gadgets. But it was all about the moods. The actual phone's usability and software were a complete afterthought. Those were to be plugged in eventually by lowly engineers somewhere along the line, using whatever hardware and software combination would happen to fit the bill of materials for this lifestyle object.

The game I designed was a "New York in Autumn" themed pinball. There were pictures of cappuccino, a couple walking in the park, and all the other clichés. It fit the moodboard exactly, the game shipped on the device, everyone was happy. Nobody at Nokia seemed to care about the actual game though.

Of course the implication with these fashion devices was that they were almost disposable, and you'd buy a new one for the next season. This would be great for Nokia's business. Unfortunately their design department seemed consumed by becoming a fashion brand and forgot that they're still a technology company. Everyone knows what happened next.


I once commented that HN is the most wonderfully diverse ecosystem and here's my chance to prove myself right! I'm a cork 'farmer' in Coruche, right where this article is situated. I wasn't expecting to read a puff piece about it today. I just did my novennial harvest last year. For anyone not in the know, cork is the cork trees' bark, and it's stripped from the tree without harming it every nine years. Undressing the tree is properly medieval work and you need to be very skilled with a hatchet to do it. Do a poor job and you'll ruin the cork and scar the tree for decades.

The harvest is tough work but it's the only well-paid trade left in agriculture. I doubt it has much future beyond fodder for high peasant magazine articles. Trees are dying left and right from multiple climate-related problems no one has a handle on. Divestment from the traditional montado like mine into intensive production units with better water management and automated extraction is the likely future. The billion-dollar outfits have started experiments with high-density groves, inspired by the olive oil industry's success. It's a finicky tree though, so conclusive results are taking a few decades more than you'd expect to materialise. They're stuck having to buy cork from thousands of traditionalist family farms for now.

But that's assuming the industry even grows enough to justify the investment into better plantations. Legitimate uses for the stuff apart from wine corks are scarce. We're all hoping that our phenomenal ecological footprint will see us grow as an industry into everything from insulation and roofing to shopping bags and umbrellas (hence said puff piece I imagine). We'll see, it really is a phenomenal material and the carbon math makes sense at the source. You can almost see the tree sucking out stuff from the air and soil to build thicker layers of bark. I joke that we've been doing regenerative farming for generations, we just didn't know it until someone told us.

If anyone on HN is ever in Portugal and wants to visit a montado, happy to take y'all on the most boring tour of your life. But we can have a nice picnic! It's lovely country.


Back when my sole internet experience was playing (losing) every match on Chess.com as a "volunteer librarian", I'd often inject awkwardly escaped characters, closing tags, common quirky control strings, and even OLE objects into the live Chess.com games.

Eric (founder) had politely asked me for a more formal audit (to which I declined, not wanting to out myself as an 11-year-old script kiddie), but I did explain the RegExp needed for the chat room censor and we tackled the ultimate problem: how to detect cheaters in asynchronous environments.

After consideration I informed him the only way to possibly detect cheaters is to compare every (game-significant/high-mu) move made against the known optimal moves from engines, and use statistical inference to discriminate good humans from cheaters.
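That kind of statistical inference can be sketched as a simple one-sided binomial test in Python. The 55% baseline engine-match rate below is an invented placeholder; a real detector would model player strength, move difficulty, and multiple engines rather than a single fixed rate:

```python
from math import comb

def engine_match_pvalue(matches, n_moves, p_baseline=0.55):
    """One-sided binomial tail: the probability that an honest player
    (assumed to match the engine's top move at rate p_baseline) would
    agree with the engine at least `matches` times out of `n_moves`."""
    return sum(comb(n_moves, k) * p_baseline**k * (1 - p_baseline)**(n_moves - k)
               for k in range(matches, n_moves + 1))

# 48 of 50 significant moves matching the engine is vanishingly unlikely.
print(engine_match_pvalue(48, 50) < 1e-6)   # True
# Matching about as often as the baseline predicts raises no flag.
print(engine_match_pvalue(25, 50) > 0.5)    # True
```

A tiny p-value doesn't prove cheating on its own, of course, which is exactly why this was "laughably unfeasible" at web scale back then: you need engine evaluations for every significant move of every game before you can even run the test.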

Of course, at the time, this was laughably unfeasible - which was the answer we had concluded on. But for a barely out of elementary kid to discuss those kinda nuances with a legit webmaster (Hello Eric!), it is one of my more favorable internet memories.


Hi to All, and warm thanks Denis. More, thank you for the attention to this Stu - Andrea paper now online on OSF and arXiv. We will submit it soon.

The origin of life field is many decades old, wonderful, and wonderfully fragmented. The two main approaches are: i. Template replication. ii. Metabolism first. I am a guilty party with respect to "metabolism first". In 1971 I realized that in a sufficiently diverse and complex chemical reaction system, self reproducing collectively autocatalytic sets would arise as a first order phase transition. Such sets have now been engineered using DNA, RNA, and peptides.

A stunning set of recent results led by Joana Xavier has now demonstrated small molecule collectively autocatalytic sets, with NO DNA, RNA, or peptide polymers, in all 6700 prokaryotes. I am a co-author. Joana did all the work, entirely.

It is not yet certain that these sets actually reproduce in vitro - a critical set of tests still to be done. If yes, I think this almost rules out a template-first view. Such a template system would have to evolve RNA enzymes to catalyze some "connected metabolism" to create the building blocks for the template replicating systems. But there is no reason at all why such a connected metabolism on its own, without RNA polymer enzymes, would be collectively autocatalytic.

Joana's sets create not only amino acids and ATP, but the central rudiments of linked energy metabolisms.

I truly think the online paper is basically correct. Living cells really are Kantian Wholes that achieve Catalytic, Constraint and Spatial Closure. Via these, cells literally construct themselves. Their very boundary condition molecules constrain the release of energy in many non-equilibrium processes into the few degrees of freedom that construct the very same boundary conditions. Entirely new, and due to Mael Montevil and Mateo Missio. I missed it for 15 years. Rather dumb, thrilled that they did it.

The marriage of the TAP process with the theory for the first order phase transition of collective autocatalysis, TAP-RAF, really works. The evolving complexity and diversity of the system increases, then the first order phase transition arises with probability almost 1.0. If YES, the emergence of life in the evolving universe really is expected.

Then two major surprises. Due to Constraint Closure, the way a cell reproduces itself is not at all the way von Neumann envisioned in his self reproducing automaton. The familiar distinction between hardware and software vanishes. This must be deeply important, but its meanings are still very unclear to me.

The second major surprise is that Andrea and I are confident we have demonstrated, and published as "A Third Transition in Science?", J. Roy. Soc. Interface, April 14, 2023, that we can use no mathematics based on set theory to deduce the ever-creative emergence of novelties in the evolving biosphere. If correct, as we believe, this takes the evolving biosphere entirely beyond the famous Newtonian Paradigm that is the basis of all Classical and Quantum Physics.

The evolving biosphere is a non-deducible propagating construction, not an entailed deduction. The evolving biosphere is not a Computation at all, it is a non-deducible construction. If so, why do we believe, with Turing and AI, that the becoming of the world, mind, everything is algorithmic? It is not. Andrea and I published "The world is not a theorem". If correct, physicists will have to consider what this means. So do we all.

Warm wishes,

Stu


Author of the book under review here. AMA!

One thing I thought would be helpful is to link to the YouTube video described in the opening paragraph: https://www.youtube.com/watch?v=UMF-cyHAaSs

It's from a 1957 CBS documentary series called "Focus on Sanity" that featured interviews with Aldous Huxley and Gerald Heard, among others. I found it fascinating and my questions about it were actually one of the motive forces for why I wrote the book.

I believe the recording was first brought to public attention by Don Lattin, whose books The Harvard Psychedelic Club (2010) and Distilled Spirits (2012) are both great.


Oooh! I put that string there! It was a request by management, and I still don't know why. This site doesn't store any passwords, it's basically just a nice interface to external account management.

I heard a rumour that some legacy apps have weird validation on their login fields, so students wouldn't be able to log in with passwords containing certain strings. But I don't actually know of any examples.


I married into toaster moguls. When they sold out in '97, domestic toasters had been infeasible for years already. And this was for a company where all the knowledge, equipment, and facilities had been paid for many decades before. (They invented the electric pop-up toaster, for certain definitions of electric pop-up toaster.)

Toasters are refined brilliance, if done right. The concept of "done" is computed using an analog computer programmed by human experts. (OK, it's usually a bimetal strip, but it is placed so that the cooling of the moist bread keeps it from going off, and your lighter-darker input biases when it considers the toast done.)

Tear apart some toasters. There won't be anything in a modern cheap toaster that isn't absolutely required. Ask yourself why everything is the way it is.

Research the UL requirements. I have the corporate 2-pound copper ball that had to be dropped on things from prescribed heights and not cause malfunction. Make sure you can hit these targets with what you think you can build. Also check the CE requirements; they might have more modern rules.

Be ready for litigation. Toasters catch fire. The toaster moguls were horrified whenever they saw someone's toaster under a cabinet. Decades after selling the business they were still being named in mesothelioma suits, for things like a repairman who got lung cancer and had repaired home appliances, so he might have worked on one of their 1920s models with asbestos insulation. Don't let it stop you, but put the backup insurance into the expenses.


I had a personal brush with Feynman with regards to this experiment.

Circa 1986, I read Surely You're Joking, Mr. Feynman and became mildly obsessed with the fact that Feynman doesn't present the result of the sprinkler experiment (see the first few seconds of the video for context). A number of colleagues (at the software startup where I was working) and I tied ourselves up in knots debating what the answer must be.

I wanted to perform the experiment, but lacking materials, skill, and (frankly) ambition, I settled for a laughably primitive apparatus involving a couple of bendy straws, the bathroom sink, and my mouth. This was enough to reproduce the well-known fact that if you push water out through the sprinkler, it will spin. When I tried sucking water in, it would give a momentary rotational jerk and then stop moving... but perhaps that was due to my mouth tightening up on the straw?!?

After I shared this inconclusive result, one of my co-workers decided to get to the bottom of the matter. He called the Pasadena operator, asked for the home phone of a Mr. Richard Feynman, and – to my utter horror – dialed the number.

It seemed impossible that this was a viable procedure for making contact with a Nobel Prize-winning physicist, but through the speakerphone, we could all hear a gruff voice that certainly sounded like it might be him. All doubt was removed when my friend explained that we were looking for the answer that isn't presented in the book, and the gruff voice said, "Why should I tell you?"

Feynman then asked whether we had tried the experiment, and to my redoubled horror, the phone was handed to me. I stumbled through some explanation of what I had tried and what result I had observed. Feynman then relented, and explained the entire situation in ten short words:

"The sprinkler cannot rotate, because no angular momentum is transferred."

(Not an exact quote, but something very much like that)

