Hacker News | mhandley's comments

> And if you do get one somehow, it's really hard to spend. Many places won't accept them.


I use them regularly and have never had anyone not accept them.

I have had many places reject $100 bills though.

I’m that weirdo who tries to pay cash for almost everything, so the sample size is large and spans a diverse set of businesses.

Due to what I tend to pay cash for these days (lunches, drinks with a friend, etc.) and prices being what they are, they are rapidly becoming my “go-to” denomination.


It's definitely possible to break the rules. In fact, to give a truly outstanding talk that everyone remembers, you probably have to break the rules (speaking as someone who coded an entire Sigcomm presentation in a 3D game engine). But most early-career researchers, for whom this advice is presumably intended, are not yet good enough at giving talks for that to be a good idea. In fact, most tenured professors aren't either. If you do break the rules, you need a very clear idea in your head of how you're going to pull it off, and a good idea of who your audience is and how they'll perceive it, and both are hard to achieve without a lot of experience.


If you have two pipes of the same height, one filled with fresh water and one with salt water, the pressure will be greater at the bottom of the salt-water pipe because salt water is denser. Connect them at the bottom with a pipe and water will flow from salt to fresh until the pressures equalize. But connect them with a membrane instead, and this is countered by the osmotic pressure of fresh water trying to get to salt water, so you don't get any magic flow for free. You do, however, get a pressure difference for free - just not enough to desalinate.

If you put this in the ocean, you can remove the salt pipe and get the same effect. But if you want continuous fresh water, you need to further increase the pressure difference across the membrane by continuously lowering the height of the fresh-water column by pumping water up and out of the top. That takes energy, but not as much as it would take if we had to raise the pressure on the salt-water side.
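
To put rough numbers on that (my own back-of-the-envelope sketch, using approximate constants rather than anything from the thread): seawater is about 1025 kg/m^3 versus 1000 kg/m^3 for fresh water, while seawater's osmotic pressure is roughly 2.7 MPa (~27 atm). The density difference alone would need an absurdly tall column to overcome osmosis:

    package main

    import "fmt"

    // Rough constants - assumptions for illustration, not measured values.
    const (
        rhoFresh = 1000.0 // kg/m^3, fresh water
        rhoSalt  = 1025.0 // kg/m^3, typical seawater
        g        = 9.81   // m/s^2
        osmotic  = 2.7e6  // Pa, approximate osmotic pressure of seawater
    )

    func main() {
        // Pressure difference at the bottom of two equal columns of height h,
        // due only to the density difference: dP = (rhoSalt - rhoFresh) * g * h.
        h := 100.0 // metres, an arbitrary example height
        dP := (rhoSalt - rhoFresh) * g * h
        fmt.Printf("density-difference pressure over %.0f m: %.0f Pa (%.2f atm)\n",
            h, dP, dP/101325)

        // Column height at which that difference alone would match osmosis.
        hNeeded := osmotic / ((rhoSalt - rhoFresh) * g)
        fmt.Printf("height needed to overcome osmotic pressure: ~%.0f m\n", hNeeded)
    }

That works out to roughly 11 km of water column, which is why the free pressure difference helps but doesn't desalinate on its own.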


The listing mentions a 30,000 gallon reservoir, so I imagine rainwater collection from the "parade ground".


That'd be my guess also: https://britishlistedbuildings.co.uk/300017169-thorne-island...

The British Listed Buildings site has photos of the fort and boat approach with stairs, picking crane, and sloped ladder lift for getting loads from water to gate.


If we had 9-bit bytes and 36-bit words, then for the same hardware budget we'd have about 11% fewer bytes/words of memory (each byte costs 12.5% more bits, so you get 8/9 as many). It seems likely that, despite the examples in the article, in most cases we'd not make use of the extra range, as 8/32 bits is enough for most common cases. And so in all those cases where 8/32 is enough, the tradeoff isn't actually an advantage but a disadvantage - 9/36 gives less addressable memory, with the upper bits generally unused.
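
A quick sketch of that arithmetic (the bit budget below is an arbitrary example, chosen as 1 GiB worth of 8-bit bytes):

    package main

    import "fmt"

    func main() {
        // For a fixed budget of memory bits, compare how many bytes you get
        // with 8-bit versus 9-bit bytes.
        const bits int64 = 1 << 33 // arbitrary example: 1 GiB worth of 8-bit bytes

        bytes8 := bits / 8
        bytes9 := bits / 9
        loss := 100 * float64(bytes8-bytes9) / float64(bytes8)

        // Each byte costs 12.5% more bits, so you end up with 8/9 of the
        // bytes: about 11.1% fewer.
        fmt.Printf("8-bit bytes: %d\n9-bit bytes: %d\nloss: %.1f%%\n",
            bytes8, bytes9, loss)
    }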


I also started using Jove back when 30 of us shared one PDP 11/44 running BSD Unix, and it was antisocial to use something as heavyweight as Emacs. 40 years later, I'm still using UNIX and Emacs.


It was the same for new CS undergrads at UC Berkeley back in the early 90s. There were still labs full of VT220 or similar serial terminals all hooked up to a shared computer.

On reflection, it probably explains why I've used Emacs for my whole career but never really got into elisp customization or other advanced features. I still base my work in the shell (and filesystem) and launch ephemeral Emacs processes rather than living in the editor as some folks do. I never got interested in IDE functions like controlling compilers or debuggers from within Emacs.

I never even wanted Emacs to split a terminal window into smaller "screens"; I learned the key combo to abort that, much like I learned only enough vi to kill off an unintended launch. But I do get a lot of mileage out of the XEmacs "frames", i.e. independent X windows all fronting the same set of editing buffers. I also keep terminal windows alongside to do all the other things from the shell that some people prefer to do from inside the editor...


I've spent the last few weeks writing a non-trivial distributed system using Codex (OpenAI's agentic coding system). I started by writing a design brief, and iterated with o3 to refine it so it was more complete and less ambiguous. Then I asked it to write a spec of all the messages - I didn't like its first attempt, but iterated on it until I did. Then I got it to write a project plan, and iterated on that. Only then did I start on the code. The purpose of all this is to give it some context.

It generated around 13K lines of Go for me in just over two weeks. I didn't previously speak Go, but it's not hard to skim-read to get the gist of its approach. I probably wrote about 100 lines myself, though I added and removed a lot of logging at various times to understand what was actually happening. I got it to write a lot of unit tests, so coverage is very good. But I didn't pay much attention to most of those tests on the first pass, because it generally got all the fine detail exactly right. So why all the tests? First, if something seems off, I have a place to start a deep dive. Second, the tests pin down the architecture, so functionality can't creep without me noticing that the unit tests need to change.

Some observations.

- Coding this way is very effective - the new models almost never make fine detail mistakes. But I want to step it through chunks of new functionality at a size that I can at least skim and understand. So that 13K LoC is about 300 PRs. Otherwise I lose track of the big picture, and in this world, the big picture is my task.

- Normally the big design decisions are separated by days of fine-detail coding. Using Codex means I get to make all those decisions nearly back-to-back. This is both good and bad. The experience is quite intense - mostly I found the fine-detail coding to be "therapeutic", but I don't get that anymore. But not needing to pay attention to the fine detail (at least most of the time) means I think I have a better picture in my head of the overall code structure. We only have so much attention at any time, and if I don't have to hold the details, I can pay attention to the more important things.

- It's very good at writing integration tests quickly, so I write a lot more of them. These I do pay a lot of attention to. It's these tests that tell me if I got the design right, and if not, they are the place I start digging to understand what I need to change.

- Because it takes 10-30m to come back with a response, I try to keep it working on around three tasks at a time. That takes some effort, as it does require some context switching, and effort to give it tasks that won't result in large merge conflicts. If it were faster, I would not bother to set multiple tasks in parallel.

- Codex allows you to ask for multiple solutions. For simpler stuff, I've found asking for one is fine. For slightly more open questions, it's good to ask for multiple solutions, review them and decide which you prefer.

- Just prompting it with "find a bug and suggest a fix" every now and then often shows up real bugs. Mostly they tend to be some form of internal inconsistency, where I'd changed my mind about part of the code, and something elsewhere needed to be changed to be consistent.

- I learned a lot about Go from it. If I'd been writing myself, my Go would have looked more like C++ which I'm very familiar with. But it wrote more idiomatic Go from the start, and I've learned along the way.

- Any stock algorithm stuff it will one-shot. "Load this set of network links, build a graph from them, run Dijkstra over the graph from this node, and tell me the histogram of how many equal-cost shortest paths there are to every other node." (A sketch of what I mean by that is below, after this list.)

- It's much better than me at reasoning about concurrency. Though of course this is also one of Go's strengths.
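
To illustrate the kind of one-shot task mentioned above, here's roughly the shape of that Dijkstra/equal-cost-paths exercise (my own illustrative code written for this comment, not what Codex actually generated; the Link type and example topology are made up):

    package main

    import (
        "container/heap"
        "fmt"
    )

    // Link is a directed network link with a positive integer cost.
    type Link struct {
        From, To string
        Cost     int
    }

    // item is a priority-queue entry for Dijkstra's algorithm.
    type item struct {
        node string
        dist int
    }

    type pq []item

    func (q pq) Len() int           { return len(q) }
    func (q pq) Less(i, j int) bool { return q[i].dist < q[j].dist }
    func (q pq) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }
    func (q *pq) Push(x any)        { *q = append(*q, x.(item)) }
    func (q *pq) Pop() any {
        old := *q
        x := old[len(old)-1]
        *q = old[:len(old)-1]
        return x
    }

    // shortestPathCounts runs Dijkstra from src and returns, for each reachable
    // node, how many distinct equal-cost shortest paths lead to it.
    func shortestPathCounts(links []Link, src string) map[string]int {
        adj := map[string][]Link{}
        for _, l := range links {
            adj[l.From] = append(adj[l.From], l)
        }
        dist := map[string]int{src: 0}
        count := map[string]int{src: 1}
        done := map[string]bool{}
        q := &pq{{src, 0}}
        heap.Init(q)
        for q.Len() > 0 {
            it := heap.Pop(q).(item)
            if done[it.node] {
                continue // stale queue entry
            }
            done[it.node] = true
            for _, l := range adj[it.node] {
                nd := it.dist + l.Cost
                if d, ok := dist[l.To]; !ok || nd < d {
                    dist[l.To] = nd
                    count[l.To] = count[it.node] // shortest paths now go via it.node
                    heap.Push(q, item{l.To, nd})
                } else if nd == d {
                    count[l.To] += count[it.node] // another equal-cost path
                }
            }
        }
        return count
    }

    func main() {
        // A small diamond topology: two equal-cost shortest paths from a to d.
        links := []Link{
            {"a", "b", 1}, {"a", "c", 1}, {"b", "d", 1}, {"c", "d", 1},
        }
        histogram := map[int]int{} // number of equal-cost paths -> number of destinations
        for node, n := range shortestPathCounts(links, "a") {
            if node != "a" {
                histogram[n]++
            }
        }
        fmt.Println(histogram) // map[1:2 2:1]: b and c have one path, d has two
    }

Run from "a" on that diamond, it prints map[1:2 2:1]: b and c each have one shortest path, while d has two equal-cost ones.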

Now I don't have any experience of how well it would do maintaining a much larger codebase, but for a utility at this sort of scale, I'm very impressed with how effective it has been.

Disclaimer: I work at OpenAI, but on networks, not AI.


That sounds reasonable for access to actual content, but it creates a huge new incentive to churn out vast amounts of AI-generated slop served via Cloudflare. Is there a way to disincentivize this?


That's a more general problem: as content gets cheaper to produce with AI, how do consumers discriminate between good content and slop? We already have this problem with YouTube, Twitter and Reddit.

It's interesting that the AI companies will now be on the other end of this issue.


I presume the onus will now be on the AI scrapers to decide whether that AI-slop site is worth paying for. How they will figure this out will be interesting to see.


The ones in towns will mostly disappear. There will be enough chargers at supermarkets, malls, restaurants - anywhere people actually want to go - and most people will charge at home or work. The remaining demand won't be enough to keep in-town gas stations in business. Range anxiety will become more of an issue for gas cars.

On highways, it will be a different situation. There will be plenty of gas and diesel still available, as the remaining business from towns becomes more concentrated there. You won't find a gas station without a restaurant attached though. Fast chargers will be common, but ultra-fast ones won't be as common as we'd like, as operators will want to keep you there just long enough to buy a meal, etc.


I came of age in the 8-bit era of the early 80s, rode the Internet wave of the 90s and early 2000s, kind of missed the mobile wave but spent that time developing ideas that would eventually turn out to be useful for AI, and now I'm having great fun on the AI wave. I'm happy to have grown up and lived when I did, but I feel that each era of my life has had its own unique opportunities, excitement and really interesting technical problems to work on. And perhaps most importantly, great people to work with.

