If you run across a great HN comment (or comment tree), please tell us at hn@ycombinator.com so we can add it here.

I launched RustNet in 1992 as "the first ISDN ISP in Michigan." We offered a blindingly fast internet speed of 125k! At the time, Silicon Graphics had added an ISDN port to its latest workstation, which I had for the engineering design work we did. The launch of our ISP occurred at an industrial expo at Cobo Hall in Detroit. We were not set up yet, so I talked to the product manager of the ISDN feature at SGI, and he was kind enough to let me dial into his desktop workstation in his office. All I needed was to get Ameritech to drop an ISDN line into Cobo Hall. One trick we learned: if a regular dial-up customer complained about speeds, we would call Ameritech to schedule that customer for ISDN. The phone company would remove something from the line (a filter? impedance?) to prepare it for ISDN. Then we would cancel the order and the customer would get the full 56k from their modem.

I created a similar product almost 10 years ago. I had a provisional patent filed and showed it off to some very nice and smart people at Thermo Fisher. I wanted to license the patent to people who actually know how to build laboratory equipment (I don’t hold a higher degree in anything). I should’ve kept trying to license it, but they convinced me that was the wrong move.

I formed a company and got $50k in grant money from the state of Wisconsin, along with incubator space and advisors. I did all the initial software and hardware development; the first one I built used components from a battery-powered toothbrush for inductive charging, so I didn’t need a charging port exposed to solutions.

I loved working on this project. I had the idea after frustrating experiences working with 30-year-old lab equipment that took up more bench space in the lab than my backpack did. At the end of the day, that’s all it was. I had no idea wtf I was doing, and the company eventually folded as real life became too much to juggle with my project. I figured I could use the same form factor to build tons of different sensors based on the many ion-selective electrodes already on the market. Eventually I needed to try to manufacture these sensors myself, in a form factor that would fit inside flasks in the lab.

Letting this project fall by the wayside is probably the single greatest regret of my life. I often think back about how different things could’ve turned out if only I had made the correct decisions and executed properly and actually found someone to license my patent… so it goes.

https://m.youtube.com/watch?v=RrBhliK1ryY&pp=ygUbUGhpbmRpbmc...


Tim Daly here...

Axiom is alive, well, and under active development. The current effort is merging the LEAN [0] proof technology with the Axiom algebra. This involves some deep restructuring. The target result is proven algorithms, something missing in current CAS work.

(Note that this is project goal F on http://axiom-developer.org)

The effort involves building a parallel architecture to the current Axiom category / domain layout to enable functions to use LEAN's axioms, definitions, and tactics to prove existing algorithms.

In addition, Axiom now uses Common Lisp CLOS to enable work using dependent types, something not currently available in the legacy system. The algebra hierarchy is now a CLOS hierarchy which enables a lot of flexible extensions.

There is no point in publishing the work as open source since it is still in the active research phase. When released it will be announced on the http://axiom-developer.org website and uploaded to github.

One of the Axiom algorithms (Groebner basis[1]) has been proven in Coq.

Wikipedia contains the literate form of Axiom[2], including the original book restored from the NAG files.

There is an active fork maintaining the legacy code as mentioned above.

[0] The LEAN Theorem Prover https://www.andrew.cmu.edu/user/avigad/Papers/lean_system.pd...

[1] Bruno Buchberger, "Bruno Buchberger's PhD thesis 1965: An algorithm for finding the basis elements of the residue class ring of a zero dimensional polynomial ideal," Journal of Symbolic Computation, Volume 41, Issues 3-4 (Logic, Mathematics and Computer Science: Interactions in honor of Bruno Buchberger's 60th birthday), 2006.

[2] Wikipedia Axiom https://en.wikipedia.org/wiki/Axiom_(computer_algebra_system...


When I originally wrote the LGPL (v1, back around 1991) we could not imagine anything like an App Store or signed binaries. Dynamic linking provided an easy way for users to upgrade the library code.

Since the user doesn’t have the freedom to update the libs on iOS etc., I don’t see how you could deploy LGPL code on those platforms. And since one of the points of using Unity is its cross-platform support, that suggests you’d have to find another library unless you were only deploying on real OSes.

But is that Unity’s problem?


The Apple II is rather hard to emulate correctly. While most Apple II software is relatively easily supported, some corner cases are absolutely brutal to get right.

For example, you can mostly treat the video system as a dumb framebuffer. But there are cases where this fails. It is possible to detect the vertical blanking period on an Apple II. [1] This relies on capacitance on the Apple II bus and very strict timing. On a real Apple II, the value read from a non-existent memory address is not always 0; sometimes it's the last value read, which is sometimes the last value fetched by the video system, whose memory accesses are interleaved with the CPU's. To emulate this right, you need to run the whole video system in lock-step with the CPU, cycle-accurately.
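The lock-step requirement can be sketched in a few lines. This is a hedged, heavily simplified illustration (the class, the address layout, and the scan timing are all invented for this sketch, not taken from any real emulator): the point is that every CPU access also advances the video scanner, and a read of an unmapped address returns whatever the scanner last fetched.

```python
class FloatingBusMachine:
    """Toy model: video scanner and CPU share one bus, one tick per access."""

    def __init__(self, memory):
        self.memory = memory          # 64 KB address space (bytearray)
        self.cycle = 0                # global cycle counter
        self.last_video_byte = 0x00   # most recent video-scanner fetch

    def video_address(self, cycle):
        # Invented, simplified scan layout (65 cycles/line, 262 lines);
        # the real Apple II address interleave is far more involved.
        line = (cycle // 65) % 262
        col = cycle % 65
        return 0x0400 + (line % 24) * 40 + (col % 40)

    def tick(self):
        # The video scanner fetches one byte every cycle, in lock-step.
        self.last_video_byte = self.memory[self.video_address(self.cycle)]
        self.cycle += 1

    def cpu_read(self, addr):
        self.tick()                   # a CPU access advances video too
        if addr < 0xC000:
            return self.memory[addr]  # ordinary RAM
        # "Unmapped" space: nothing drives the bus, so the read sees
        # whatever the video scanner left floating on it.
        return self.last_video_byte
```

An emulator that batches video rendering per frame cannot reproduce this; the scanner has to advance on every single CPU access, which is exactly why accurate Apple II emulation is expensive.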

The disk system is another major pain point. All of the timing and track movement is handled in software, and arbitrary disk formats can be created in software; a single spiral track was used by some games as copy protection. Supporting the Disk II completely requires emulating at a much lower level than 256-byte sectors. Some emulators, going all the way for accuracy, use a structure that is basically a map of the flux transitions on the disk.
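As a rough illustration of what a flux-level representation buys you, here is a hedged sketch (the constant and function name are mine, not from any actual emulator): a track is stored as the timing between magnetic flux transitions, and the bitstream falls out of how many nominal bit cells each interval spans, so any format the drive can physically record is representable.

```python
BIT_CELL_US = 4.0  # assumed nominal Disk II bit cell, in microseconds

def decode_flux(intervals_us):
    """Turn flux-transition intervals into a bitstream.

    Each transition is a '1'; every elapsed bit cell without a
    transition contributes a '0' before it.
    """
    bits = []
    for dt in intervals_us:
        cells = max(1, round(dt / BIT_CELL_US))  # whole cells spanned
        bits.extend([0] * (cells - 1))           # empty cells first...
        bits.append(1)                           # ...then the transition
    return bits

# 4 us -> "1", 8 us -> "01", 12 us -> "001"
assert decode_flux([4.0, 8.0, 12.0]) == [1, 0, 1, 0, 0, 1]
```

Because the representation is below the level of nibbles and sectors, copy protections that play timing games (or lay a spiral across tracks) survive the round trip, which sector-level images simply cannot capture.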

[1] http://deater.net/weave/vmwprod/megademo/vapor_lock.html


original creator of generals.io here (late to the party)

really cool to see people still having fun with this game I made in college! I sold the game a few years back because I didn't have time to properly maintain it, and I'm glad the new owners have kept it running.

Linking to some past HN threads on this:

- https://news.ycombinator.com/item?id=13145781 original generals.io post

- https://news.ycombinator.com/item?id=13562866 launch of the Bot API


As a person who did a PhD in CFD, I must admit I never encountered the vorticity confinement method and curl-noise turbulence. I guess you learn something new every day!

Also, in industrial CFD, where the Reynolds numbers are higher, you'd never want to counteract the artificial dissipation of the numerical method by applying noise. In fact, quite often people want artificial dissipation to stabilize high-Re simulations! I guess the requirements in computer graphics lean more toward making something that looks right rather than getting the physics right.


We've gone full circle! I originally launched Vagrant here on HN in 2010, which was at the top of HN very briefly for the day. Now here I am 14 years later witnessing my departure post in that very same spot. A strange experience! Thanks for the support over the years. A lot of the initial community for the projects I helped start came from here.

Scott shouldn’t be hard on himself at all for opinions stated 15 years ago. Even over short time spans of a few years, I change my perspective quite a bit.

In the 1980s I was lucky enough to stumble into the opportunity of serving on a DARPA neural networks tools panel for a year and getting lucky applying a simple backprop model to a show-stopper problem for a bomb detector my company designed and built for the FAA. Bonus time!

Decades went by, and then deep learning really changed my view of AI and technology (I managed a deep learning team at Capital One, and it was a thrill to see DL applications there).

Now I am witnessing the same sort of transformations driven by attention-based LLMs. It is interesting how differently people now view the pros/cons/dangers/this-shit-will-save-the-world possibilities of AI. Short anecdote: last week my wife and I had friends over for dinner, a husband-and-wife team with decades of success writing and producing content for movies and now streaming. I gave them a fairly deep demo of what ChatGPT Pro could do for fleshing out story ideas, generating images for movie posters and other content, etc. The husband was thrilled at the possibility of now being able to produce some of his previously tabled projects inexpensively. His wife had the opposite view: that this would cause many Hollywood jobs to be lost, among other valid worries. Normally our friends seem aligned in their views, but not in this instance.


Syncs with their phone app to keep track of bills and expenses. Based on the Espressif ESP32. Quite handy for small shop owners who can't have a full-blown computer-plus-screen cash register.

Disclaimer: I am working as a freelancer for their next version of hardware that will be based on Linux.


I campaigned for this for years when working on Fuchsia, both to folks in Fuchsia and to Android folks.

We lack a unixy API for “the radio is now on, do your reconnects or send your queues”

The platforms always hoist this stuff to a higher level, and it rarely works that well.

Platform leads insist the platform will do it better, but it’s never true. They also insist that persistent connections are battery killers - which for sure they _can be_ but done properly (and with the aforementioned api) it can work just fine.

Establishing such an API in the Linux and BSD ecosystem would make a good step toward encouraging its exposure in the places we need it.


Hi - This is Chris Barton (founder of Shazam). Sony's TrackID was built by licensing (and later buying) a technology invented by Philips. That tech was invented after Shazam. Shazam was the first to create an algorithm that identifies recorded music with background noise in a highly scaled fashion. My co-founder, Avery Wang, invented the algorithm in 2000. Chris (www.chrisjbarton.com)

For all who wonder why Turbo Pascal was so fast, here are some insights:

50% is certainly due to the language Pascal itself. Niklaus Wirth designed the language so it can be compiled in a single pass. In general, the design of Pascal is, in my opinion, truly elegant and, compared to other programming languages, completely underrated. Wirth published a tiny version of his compiler, written in Pascal itself, in his 1976 book "Algorithms + Data Structures = Programs".
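To make the single-pass idea concrete, here is a hedged toy sketch (not Wirth's or Hejlsberg's actual code, and in Python rather than Pascal or assembly): a recursive-descent parser that emits stack-machine code as it parses, with no intermediate representation and no separate passes.

```python
def compile_expr(src):
    """Single-pass toy compiler: parse an arithmetic expression and
    emit stack-machine instructions as parsing proceeds."""
    tokens = src.replace('(', ' ( ').replace(')', ' ) ').split()
    pos = 0
    code = []

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def next_tok():
        nonlocal pos
        t = tokens[pos]
        pos += 1
        return t

    def factor():
        t = next_tok()
        if t == '(':
            expr()
            next_tok()                    # consume ')'
        else:
            code.append(('PUSH', int(t))) # emit immediately: no AST

    def term():
        factor()
        while peek() in ('*', '/'):
            op = next_tok()
            factor()
            code.append(('MUL' if op == '*' else 'DIV',))

    def expr():
        term()
        while peek() in ('+', '-'):
            op = next_tok()
            term()
            code.append(('ADD' if op == '+' else 'SUB',))

    expr()
    return code
```

Because code is emitted the moment a construct is recognized, the source is read exactly once and nothing but the current parse state needs to be kept in memory, which is what made the single-pass design so fast on tiny machines.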

In the late 70s Anders Hejlsberg took that version and translated it into assembly. He certainly must have changed the code generator, since Wirth's version emitted bytecode for a tiny VM whereas Anders' version produced machine code; however, if you take a closer look, especially at the scanner and parser of Turbo Pascal versus Wirth's version, you can see that they are very similar. Back then Anders was, in my opinion, not so much a language guy but much more an assembly genius. And that resulted in the other 50% of why Turbo Pascal was so fast:

-) The entire compiler (scanner/parser/type checker/code generator and, later, the linker) was written in assembly.

-) The state of the compiler was held as much as possible in CPU registers. If, e.g., the parser needed a new token from the token stream, all registers were pushed to the stack and the scanner took over. After the scanner fetched the next token, the registers were restored.

-) The choice of which register held what was also very well thought through. Of course the CPU dictates that to a certain extent, but there was still lots of elegant usage of the "si"/"di" registers in combination with non-repeating lodsb/stosb instructions.

-) The entire "expression logic" (expression parsing / expression state / code generation for expressions) was kinda object-oriented (yes, in assembly), with the "di" register hardwired as the "this" pointer. If the compiler needed to handle two expressions (a left expression and a right expression), one was held in the "di" register and the other in the "si" register. Since the "di" register was hardcoded, you will find lots of "xchg di,si" in the codebase before a "method" (a procedure with the "di" register as a "this" pointer) is called.

-) Clearly the CPU registers were not enough to hold the entire state of the compiler, so heavy use of global variables was made. Global variables have the advantage of not needing a register to access them (e.g. "inc word ptr [$1234]").

-) Parameter passing was done through registers, and where possible stack frames were avoided (too expensive), meaning no local variables (still heavy usage of push/pop within a procedure; does this count as a local?).

-) Parameter passing through registers allowed procedure chaining: instead of "call someOtherProc; retn" at the end of a procedure, just "jmp someOtherProc" was used to a great extent.

-) Jump tables everywhere. In general the compiler was quite table driven.

-) Strings were avoided as much as possible, and where needed (parsing identifiers / using filenames) copying them around was avoided as much as possible, meaning all strings were held in global variables. The big exception here was of course copying identifiers from the global variable into the symbol table.

-) Starting with Turbo Pascal 4.0, hash tables were used as symbol tables. Arena allocator for memory management.
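The "table driven" style mentioned above can be illustrated with a hedged, invented example (none of these names are from the Turbo Pascal codebase): instead of a chain of comparisons, a statement keyword indexes directly into a table of handlers, the high-level analogue of an assembly jump table.

```python
# Invented handlers standing in for per-statement code paths.
def do_begin(arg):
    return f"enter block {arg}"

def do_if(arg):
    return f"conditional on {arg}"

def do_while(arg):
    return f"loop on {arg}"

# The "jump table": one lookup replaces a whole if/elif chain.
STATEMENT_TABLE = {
    'begin': do_begin,
    'if': do_if,
    'while': do_while,
}

def dispatch(keyword, arg):
    handler = STATEMENT_TABLE.get(keyword)
    if handler is None:
        raise SyntaxError(f"unknown statement: {keyword}")
    return handler(arg)
```

In assembly the same idea is an indexed indirect jump, which costs a handful of cycles regardless of how many cases exist; that constant-time dispatch is a big part of why table-driven compilers stay fast as the language grows.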

I am sure I forgot a lot; I reverse engineered Turbo Pascal back in the late 90s. Most of the above applies to Turbo Pascal 7.0, but much of it had not changed since earlier versions.

It is a shame that such a wonderful codebase is buried under the "closed source, proprietary software" label. It is clear that today nobody would write a compiler the way Turbo Pascal was written, not even in a high-level language, but the codebase has so many tricks, so many elegant solutions, that it is a real pity it is not open source. Of course the codebase is on the web, just not the official one.

Thank you Anders Hejlsberg for such a wonderful piece of software.


Interesting to see this on HN. I currently work for the company that redesigned the HMI/UI following this incident. Or rather, it's how my company was founded. In the aftermath, the US Navy Command in San Diego contacted several UCSD professors in the Cognitive Science and Psychology departments who specialized in high-impact decision making under stress and cognitive load. The Navy was apparently impressed with the detailed analysis and recs provided by these faculty and continued to collaborate with them on this and other projects. Eventually they were getting so much work from the Navy that they founded a company focused on human factors engineering and interface design for complex systems.

The two original founders recently retired and our new CEO is a former Captain of the USS Zumwalt.


Not my submission, but I am a cryo-electron microscopist, if anyone has any questions about what's in the article or more generally (and I have worked with some of the people in the article).

I will comment that the major expense most facilities face is the cost of the service contracts, which is partially parts, but also partially the need to pay multiple talented service engineers to be available to fly in on a moment's notice to troubleshoot and fix the microscopes. Electron microscopes break constantly, and most users are not skilled enough to even troubleshoot them, let alone fix them.

I will also point out that this part of the article:

>Levels of 100 kiloelectronvolts (KeV)—one-third as high—suffice to reveal molecular structure, and they reduce costs by eliminating the need for a regulated gas, sulfur hexafluoride, to snuff out sparks

Is wildly inaccurate. Relative to the cost of a microscope, SF6 and a high-tension tank are absolutely pennies. Frankly, the cost savings are primarily in two areas:

1) The fact that Thermo Fisher isn't involved (the Tundra is a joke and a move for market monopolization)

2) Going from 300 kV (or even 200 kV) down to 100 kV drastically reduces the needed tolerances for parts. 100 kV microscopes have been around forever, though, and almost none reach the resolutions of 200 and 300 kV microscopes, although, like Russo and Henderson, I agree that's a solvable problem. It's worth noting that the resolutions they are describing, while encouraging, are not great. 2.6 Å on apoferritin, which is a best-case scenario never seen in the "real world," is quite a ways behind even the cheaper 200 kV scopes, which have gotten down to 1.6 Å. This is still firmly in "screening and learning" territory for most flexible samples, which is not without value, but not the answer to the 5 million dollar Krios that we all so desperately want.

Re: the national centers in the article, it depends which one you go to. NCCAT is fantastic, in my experience, but S2C2 is in the costly Bay Area and they just can't afford to pay their staff scientists enough. So what happens is you get tossed in with a fresh PhD who is underpaid and uninterested in your project. I've seen, in general, a lack of caring by the staff there, and no desire to understand the specific problems each user is trying to solve. That results in lots of wasted iterations, especially if you are starting from scratch with no experience.


WhatsApp is no longer running FreeBSD. Prior to the acquisition, everything was bare-metal managed hosting at SoftLayer, and we had all FreeBSD except one Linux host for reasons I can't remember (maybe an experiment for calling?). After the acquisition, there was a migration to Facebook hosting that included moving to Facebook's flavor of containerized Linux.

Not because Linux is better, but to fit better within Facebook's operations[1], and Erlang runs on many platforms, so it was a much smaller effort to get our code running on Linux than to get FB's server management software to work for FreeBSD. Server hardware was quite a bit different, so we had no apples to apples comparisons of which OS was more efficient or whatever else. During initial migration, BEAM support of kqueue was much better than epoll, but that got worked out, and I feel like Linux's memory usage reporting is worse than FreeBSD's, but it's a weakness of both. I was never comfortable in the FB server environment, so I left in late 2019, when the FreeBSD server count was reduced to a small enough number that I ran out of things to do.

[1] Much of the server team had experience with acquisitions at Yahoo! and the difficulties of making an operations team focused on one OS support acquired teams on another OS. With the many other technical and policy differences between WA and FB, eliminating the OS difference was an easy choice to reduce friction. Our host count, which was large at SoftLayer, was small at Facebook, even after factoring in increased numbers because the servers were smaller and the operations less stable.


As a former chemist and all-around maker, I love this and this bugs me all at once. I love the concept, but as others have pointed out, this is less a "periodic table" and more "grab bag of related things". You see these all over the place: foods, drinks, cars, etc. All table, no periodicity. Why are wrenches and drills strewn across three groups? Put the wrenches in one group, drills in the other. Impact drivers somewhere in between.

There's some vague grouping, but it's pretty hodgepodge.

The way I would do it is to use electronegativity (the tendency to give or take electrons) as a proxy for additive/subtractive, and atomic weight as a proxy for actual weight/scale. Group I would be the likes of clay forming (the OG additive process), concrete, FDM, SLS, injection molding, casting. Group II is a bit less additive, more bonding: hot glue, soldering, brazing, welding. Halogens hog out material: thermic lance, plasma cutter, laser, waterjet. Chalcogens: hand router, (power) router, lathe, mill.

Metrology doesn't add or subtract, so obviously that's your Noble group.

Transition metals are all the fasteners. Lanthanides/Actinides are all the weirdos. I'd also add a group for just the simple machines. I think it's more important to have groups, periodicity, and trends, than sticking to the exact shape/size of the periodic table of elements.

This is Theodore Gray too! Author of a bunch of books and posters on chemistry.

Well, you know what they say, if you want something done right...


I'm pretty sure Ken Haase has a copy on a tape someplace so I never would have imagined the code might be missing.

More significant* is that his work at Xerox is likely unavailable due to how Interlisp-D worked. When I started to work for Doug on Cyc, he had a three-year-old band (checkpointed image) he'd been working on continuously on the Dolphin in his PARC office. As you can imagine, it was full of lost fossils in memory with completely unpredictable effects on the code. I'm certain that memory image has been gone from any backup tape for almost 40 years.

Even though we'd overlapped at PARC, when I got to MCC (Cyc), for technical reasons I refused to even use the D-machine version and started out with a blank Zmacs buffer on a Symbolics machine. One major technical reason is that you actually had source code which could be loaded into a fresh memory image. I made sure from the start that that always worked.

* What makes it significant, to me, is the technical/cultural implication of the PARC "band" model of Smalltalk and Interlisp-D. I don't mean the early Cyc code base is particularly interesting in and of itself. Interlisp on the -10 usually used files of source code, as you referenced and as you would be used to from Common Lisp.


Author here, thank you for having this project on the front page! I want to explain why this bot is in C, while, for instance, another project of mine that implements a LoRa driver/protocol for the ESP32 (a much lower level project, in theory) is in Python!

This code started as the Telegram bot Stonky; it's on GitHub, and you'll see the bot does a lot of financial analysis things, including Monte Carlo simulations. I really needed the speed. Later my daughter was diagnosed with scoliosis and needs to wear a corset. Keeping track of the time she wears it was a pita, so I wrote another bot (not released so far) and put it in our family channel.

Then... I needed another bot. And I finally extracted the common parts from the two. So:

1. For Stonky, speed was essential, but not for my next bot.

2. Once I needed to write the new bot for the corset, I tried the Python lib. And... I started to have the first code compatibility issues (across versions) of the modern software stack (context here: https://twitter.com/antirez/status/1726305869032570992).

3. Meanwhile Stonky was no longer able to work because I made the mistake of basing it on the Yahoo Finance API, now retired :( So I told myself, I don't want to rely on a software stack that will make this bot unable to work, or obsolete, in 2/3 years because all the libraries change fast (and, if you check, you will see a lot of work went into Stonky back in the couple of weeks when I worked on it constantly).

On top of that, with Stonky I decided to serve each request with a thread, not with multiplexing. In the specific case of a Telegram bot this simplifies many things: many bot requests are long-lived, because you want to call an LLM API, or Whisper to transcribe, or run a Monte Carlo simulation on a stock history (btw, the Whisper thing will be one of the official examples of this library), and so forth. In Stonky this approach worked so well because it's written in C, so each thread consumes only the minimal resources associated with OS thread creation.
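The thread-per-request structure described here can be sketched as follows - in Python rather than C, and with all names invented (this is not the library's actual API): each incoming update gets its own thread, so a slow LLM call or transcription never blocks the other requests.

```python
import queue
import threading

def handle_update(update):
    # Long-lived work (LLM call, transcription, a Monte Carlo run)
    # can block here without stalling any other request.
    return f"handled: {update}"

def serve(updates, results):
    """Spawn one thread per update; no event loop, no multiplexing."""
    threads = []
    for u in updates:
        t = threading.Thread(target=lambda u=u: results.put(handle_update(u)))
        t.start()
        threads.append(t)   # one thread per request
    for t in threads:
        t.join()
```

The handler code stays straight-line and blocking, which is the simplification being described; the trade-off is per-thread overhead, which is why it pays off most in C, where that overhead is close to the bare OS cost.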

Finally: with the SDS lib and my JSON and SQLite3 wrappers (the wrappers are the interesting part of the project IMHO), it is not so different from writing Python code. So, I said, why not?

P.S. this library is just a work in progress, hacked while I'm doing many other things. I hope it will improve in the next weeks. I have used Telegram for more than 10 years now, with all my friends and family, so it is very likely I'll need to write more bots in the future :D


I'm the organizer of the YAY UK competition, and so glad Euan's work has got such wide recognition!

The competition is judged by professionals from UK Animation & VFX Studios (including ILM), and we were all blown away by the quality of the entrants - Blender and Ian Hubert are doing amazing things for the next generation of talent!

I thought people would like to hear the description Euan entered as part of his competition submission:

"I used Blender for the animation and Davinci Resolve for the colour grading (I also used the Film Convert plugin), all animations were rigged and keyframed by me with exception of the people walking in the first shot (those were from mixamo). The TV and advertisment footage were from previous projects.

The humans in the first and second shots are free photoscans I downloaded online and then rigged, there are a few small mechanical parts that were included in a library that I used, but the majority of them are mine.

I used Quixel megascans for some of the rubbish seen at the bottom of the second shot.

Most textures are photos sourced from textures.com or taken by me in real life, but have been modified by me to include procedural grime and dirt buildup in crevasses.

Some sound effects were from purchased sound libraries or found online copyright free. The rest I recorded myself. "


I was involved in poker for most of my adult life - first as a pro player and then as a poker software developer. I was very lucky to be able to live off poker winnings as a player and then become financially independent thanks to the success of my software (I am the PioSOLVER founder and one of the two original programmers, along with my close friend).

I agree with just about everything the author of the post said. It's a lonely and destructive game. Success is meaningless and comes at the expense of others. It's bad for your health, both physical and mental. Gambling is fine as entertainment in moderate amounts for people who don't get addicted to it. I don't see any value in professional gambling, though. I think almost everyone is worse off in that world. There is a lot of potential lost creating winners and losers in a negative-sum game, while game organizers make out like bandits luring addicts to their games (both online and live).

Professional poker is a very effective trap for smart analytical people. You can find success there faster and easier than in other areas, and then you face the dilemma the author talks about: the money is great, but you feel like you are not building anything, and opportunities to become involved in productive and rewarding endeavours slowly drift away. Switching becomes more difficult (and costly) with every passing year.

I am one of the big winners of the poker world - not only did I make enough money to never worry about it again, but I've met a lot of interesting people and learnt a lot of interesting things while working on my poker-related projects. I have one piece of advice for smart people, especially those similar to me (ADHD, maybe slightly on the autistic spectrum, who can't imagine working a 9-5 job) - don't get involved; it will lure you in and chew you up. If you feel you have trouble holding a job or completing your degree, and it's not caused by your intellectual potential - seek help and maybe medication. After more than 15 years in the industry, my biggest professional dream is to one day have enough discipline and energy to start a project in an unrelated area and create something more useful than a tool that helps winners beat losers at a gambling game.


I played back during the glory days of 2000-2004 and through the Neteller shutdown for America. Back then the money was absolutely absurd, because the game was very new online and very few people understood the basics. Combined with the massive amount of marketing going on for television, tons of people were addicted to the thrill of the game.

Every single year the game became more difficult as more people studied, and I continued to move up into higher and higher stakes. At one point I looked at how much money I had made compared to how much I hated what I was doing and decided that I was done. People around me never understood why I quit, because they knew me and the amount of income I was making, but the thing is no one can understand what it is to live on a grinder's schedule like that.

I would get up at 10-11AM, study the hands from the previous day to make sure I was playing correctly mentally, read over new material, and review a partner's hands, as we each did the other's to confirm our playstyle and decisions were correct. Around 3PM I would start scouting tables across various levels, looking for soft players I had datamined information about. Around 4-5PM I would start playing and continue to add tables with soft players, immediately leaving any tables that didn't contain soft players or where they had busted. This table hopping and monitoring is absolutely exhausting, but critical to being as profitable as possible.

Repeat this until about 11PM. Then go to sleep until 4AM, get up and play against the players on the other side of the world as they were starting to play loose. Play against them until about 7AM, then go to sleep and get up around 11. Repeat. Do this for 4 years and almost anyone will decide enough is enough even with the amount of income I was making back then. The only rest I took was on weekends, just to remove myself mentally from the exhaustion.

Of course poker is much tougher now than it was then. I wouldn't even dream of trying to play now. I was maybe an upper 85-90% player then; now I would be in the lower 50%.


Seems like the content is still there? Here's me getting smacked down as a kid for asking for warez: https://groups.google.com/g/alt.games.doom/c/RrzQBjHIa6k/m/Q...

Hi Peter! You did my E3 visa a while back. When I was standing in line at the US consulate in Sydney, the person in front of me was really nervous, visibly shaking. I saw that their papers had your letterhead, so I was able to calm them down a bit by showing them my papers, which also had your letterhead.

Thanks for all your hard work!


Partially responsible for this. (Sold Lockitron to Chamberlain in 2017 which became the basis for Amazon Key integrations.)

Contrary to the popular sentiment in a lot of the comments here, there's not much value in the analytics. As we all painfully found out in the 2010s, there are only two viable recurring revenue streams in the IoT space - charging for video storage and charging for commercial access. Chamberlain does both, with the MyQ cameras and with the garage access program for partners like Amazon and Walmart. Both retailers have a fraud problem (discussed here https://news.ycombinator.com/item?id=38176891). "In-garage delivery" promises dropping delivery fraud to zero - i.e., users falsely claiming package theft. That solution is worth millions to retailers; naturally Chamberlain would like a cut, but only if they can successfully defend that chokepoint.

For historical reasons, having to do with the security of three or four generations of wireless protocols used in garage doors, they can't (and products like ratgdo and OpenSesame exploit this). Other industries such as automotive have a more secure chain of control over their encryption keys, so one has to (for instance) go to the dealer to buy a replacement key fob for your Tesla for $300 and not eBay for $5.

Given the turnover in leadership there I’m not surprised the new guy needs to put their hand on the plate to see it’s hot, but there’s a reason this wasn’t implemented before, and it wasn’t for lack of discussion. I can see the temptation of going for monetization given their market share, but this approach was ill conceived; the better move would have been fixing the foundational issues, which would let home users integrate with 3rd party services while still charging industry partners for reducing incidences of fraud.


Great subject, poor video. Too much 'wow' and travel photos, not enough explanation of what's going on in those things.

I've seen the Jaquet-Droz automata[1] in Neuchatel on the one day a month they run them. They're demoed by a watchmaker who understands and maintains them.

The three automata are the Musician, the Artist, and the Writer. These were made between 1764 and 1778. The Musician and the Artist are just playing back pre-recorded motions from a set of cams. To increase the length of the recording, there's a stack of cams, and after one turn, the stack moves vertically to play the next cams. So there are two clockwork trains taking turns - playout, and cam selection. It's a beautiful piece of work, especially when you realize someone made all those cams by hand, with a file.

The Writer, which writes text with a quill, is programmable. There's the stack of cams that move vertically to switch cams, as with the others. But with the Writer, the cam selection is programmable. There's a programming wheel made of little screw-on sections of different heights, and a supply of cam sections which indicate what letter to print next. It's an encoding with at least 26 different levels, probably more. I'm not sure if letter case is encoded on the main cam.

It's all very compact, fitting inside the bodies of the dolls. There's no huge mechanical box hidden away somewhere. Even today it would be tough to make that mechanism work, although there are still watchmaking companies that could do it.
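
The two-level structure (a selection train driving a playout train) can be caricatured in a few lines of Python. This is purely my own toy model, not the actual mechanism: treat the programming wheel as a sequence of peg heights, each height selecting which letter-cam the playout train runs next.

```python
# Toy model only: the real Writer encodes letters as screw-on sections of
# different heights on a programming wheel; here a "height" just indexes
# into a bank of letter cams.
CAM_BANK = {i: chr(ord('a') + i) for i in range(26)}  # cam index -> glyph

def run_writer(program_wheel):
    """Play out one letter-cam per peg on the programming wheel."""
    return "".join(CAM_BANK[height] for height in program_wheel)

print(run_writer([7, 4, 11, 11, 14]))  # hello
```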

Better video of the Writer.[2] You can see the cam stack and the programming wheel working.

[1] https://en.wikipedia.org/wiki/Jaquet-Droz_automata

[2] https://youtu.be/ux2KW20nqHU


I'm a huge fan of Logo! I wrote a Logo Adventure for Terrapin Logo that they shipped on the C64, the point of which was to show off Logo's list processing and functional features, because most of the other examples were focused on turtle graphics. So I appreciate what you say about teaching the functional and abstract parts of Logo, as well as the procedural and graphical parts like turtle graphics.

https://donhopkins.medium.com/logo-adventure-for-c64-terrapi...

Blockly is actually a JavaScript library from Google for rolling your own block-based visual programming languages. They use it for App Inventor, but that's not all it's used for. It's not one particular language or execution model; different applications can interpret or compile it into JavaScript or even shaders or WebAssembly.

Here's a wonderful example of a "Falling Sand Game" called "Sandspiel Studio" that uses Blockly for a specialized cellular-automata-oriented visual programming language, letting you define and play with your own sand/air/water/lava/seed/stem/leaf/flower/whatever particles:

https://studio.sandspiel.club/

Making Sandspiel HN Discussion (Max Bittker):

https://news.ycombinator.com/item?id=34555913

https://news.ycombinator.com/item?id=34561910

https://maxbittker.com/making-sandspiel

Making Alien Elements (with Todepond):

https://www.youtube.com/watch?v=48-9jjndb2k

The point that Sandspiel Studio so beautifully illustrates and Blockly so practically addresses is that there are many possible domains, execution models, data models, vocabularies, and user interfaces for visual programming languages: there is no universal VPL that's good for everything, but they're great for many different domains, so you need powerful flexible tools for rolling your own special purpose VPLs.

That's also why Blender supports generic "nodes" you can subclass and customize into all kinds of different domains like shader programming, 2d image composition, 3d content generation, animation, constraints, etc. There are third party and built-in Blender plug-ins that customize nodes into specialized visual languages for generating maps and cities, particle systems, and all kinds of other specialized uses.

Everything Nodes UX #67088:

https://projects.blender.org/blender/blender/issues/67088

Function & Particle Nodes:

https://wiki.blender.org/wiki/Source/Nodes

So Blockly and Blender Nodes are the visual equivalent of "yacc", for defining custom visual programming languages.

I'm also a huge fan of Snap!, which has all the advantages of Logo (Lisp without parentheses) and the Scratch / eToys / Squeak / App Inventor family of block based visual programming languages, but all the power of Scheme.

If you know Scheme, then it's easy to think about Snap!: it's just Scheme with a visual block syntax, but with some functions renamed to make them easier to learn, plus all the stage and turtle graphics stuff from Scratch, running in a web browser!

I didn't realize, until watching in amazement as Jens Mönig used his own creation, that it also has full keyboard support: you can create and edit programs without using the mouse!

It's much easier to teach Scheme to kids by teaching them Snap!, because the user interface is so much better than a text editor.

I attended Snap!Con2023 in Barcelona recently, and we discussed some interesting possible extensions to Snap:

Grammar-defining blocks. Right now you can create your own custom vocabularies of blocks that fit together in particular constrained ways, by writing JavaScript Snap! extensions. The idea is to develop a set of blocks for visually defining new grammars and vocabularies of custom parameterizable blocks.

For example, a grammar for representing plants with seeds, roots, stems, leaves, flowers, petals, etc. You can assemble and edit them manually by dragging and dropping from a palette, or write programs that generate, interpret, and transform them, and pass them around as data, for example as instructions for an embroidery machine to sew, or a Logo turtle to draw.

Turtlestitch - Coded Embroidery:

https://www.turtlestitch.org/page/about

Ken Kahn led a discussion about integrating LLMs like ChatGPT with Snap!. He's the developer of eCraft2Learn for teaching kids AI programming. Ken recently made some cool Snap! extensions for integrating LLMs with the speech synthesis and recognition system, and orchestrating conversations between different characters.

Snap!Con2023: Creative uses of Snap! blocks using large language models like GPT:

https://www.youtube.com/watch?v=d2rNGsbzkXI

Enabling children and beginning programmers to build AI programs:

https://ecraft2learn.github.io/ai/

But when it comes to LLMs, code generation, and code understanding, JavaScript has two huge insurmountable advantages over Snap! or any other block based visual programming languages:

1) First of all it's extremely well known, by both humans and LLMs.

2) And second of all, there's typically no efficient and faithful way to textually represent block based programs in a way that ChatGPT (or humans) can easily understand and generate.

Of course you could just dump out the XML or JSON save file, but that wastes your token budget, and doesn't work well, because the LLM doesn't inherently understand the syntax and semantics of save files the way it deeply groks JavaScript.

You need to define some equivalent text based language to serialize and deserialize your visual programs, or define some equivalency to an existing language, so you can translate back and forth without loss.
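
As a minimal sketch of that serialization idea, assuming a toy block representation of my own invention (a dict with "op" and "args"; no real editor's save format looks like this): render the block tree as compact s-expression text, which an LLM can read and emit far more cheaply than raw save-file XML.

```python
# Hypothetical block format: {"op": name, "args": [child blocks or literals]}.
# Serializing it to s-expressions gives a lossless, token-frugal text form.
def to_sexpr(block):
    if not isinstance(block, dict):
        return str(block)  # a literal leaf
    args = " ".join(to_sexpr(a) for a in block["args"])
    return f"({block['op']} {args})" if args else f"({block['op']})"

program = {"op": "forward", "args": [{"op": "+", "args": [50, 10]}]}
print(to_sexpr(program))  # (forward (+ 50 10))
```

Going the other way (parsing the s-expression back into blocks) is what would make the representation round-trippable.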

Like RELAX NG, which has an XML syntax and also a simple, concise, human-readable compact syntax, both of which can express the same things.

But no matter what equivalent language you come up with to serialize your block programs into, it'll never be as well known as JavaScript (unless it IS JavaScript).

I think Snap! could take advantage of its equivalency with Scheme: you could just parse Scheme into Snap! blocks, and the other way around. And ChatGPT knows Scheme pretty well, though it's not as ubiquitous and standard as JavaScript.

Logo would not be as good as Scheme, since it has ambiguities: because it's essentially Lisp without parens, you need to know the number of parameters a function takes in order to parse it.
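
To make that ambiguity concrete, here's a tiny parser sketch (the arity table is a made-up fragment, not real Logo internals): with no parens, grouping is only recoverable if you already know how many inputs each word consumes.

```python
# Hypothetical arity table for a couple of Logo-ish primitives.
ARITY = {"forward": 1, "sum": 2}

def parse(tokens):
    """Consume one expression from the token list, returning nested lists."""
    head = tokens.pop(0)
    if head not in ARITY:
        return head  # a literal
    return [head] + [parse(tokens) for _ in range(ARITY[head])]

# "forward sum 50 10" groups as (forward (sum 50 10)) only because we
# know sum takes exactly two inputs; without the table it's ambiguous.
print(parse("forward sum 50 10".split()))
```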

Back in 1989 I designed a visual PostScript programming and debugging interface for NeWS, which took the approach of not trying to change or redefine the PostScript language, just presenting a visual, directly manipulable interface to what it actually was.

PSIBER Space Deck Demo:

https://www.youtube.com/watch?v=iuC_DDgQmsM

PSIBER Space Deck and Pseudo Scientific Visualizer Demo:

https://www.youtube.com/watch?v=_fqCeuue5Ac

The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989

https://donhopkins.medium.com/the-shape-of-psiber-space-octo...

>Abstract: The PSIBER Space Deck is an interactive visual user interface to a graphical programming environment, the NeWS window system. It lets you display, manipulate, and navigate the data structures, programs, and processes living in the virtual memory space of NeWS. It is useful as a debugging tool, and as a hands on way to learn about programming in PostScript and NeWS. [...]

>There is a text window onto a NeWS process, a PostScript interpreter with which you can interact (as with an “executive"). PostScript is a stack based language, so the window has a spike sticking up out of it, representing the process's operand stack. Objects on the process's stack are displayed in windows with their tabs pinned on the spike. (See figure 1) You can feed PostScript expressions to the interpreter by typing them with the keyboard, or pointing and clicking at them with the mouse, and the stack display will be dynamically updated to show the results.

>Not only can you examine and manipulate the objects on the stack, but you can also manipulate the stack directly with the mouse. You can drag the objects up and down the spike to change their order on the stack, and drag them on and off the spike to push and pop them; you can take objects off the spike and set them aside to refer to later, or close them into icons so they don’t take up as much screen space.

>NeWS processes running in the same window server can be debugged using the existing NeWS debug commands in harmony with the graphical stack and object display.

>The PSIBER Space Deck can be used as a hands on way to learn about programming in PostScript and NeWS. You can try out examples from cookbooks and manuals, and explore and enrich your understanding of the environment with the help of the interactive data structure display.

I like the way Kodable takes a similar approach of making an easy to use visual user interface to an existing, well defined, universal general purpose language, instead of trying to invent and teach something weird that nobody else uses and students will never see again.

Not to the exclusion of other block languages like Snap! and Blender nodes -- they all have their uses. Snap! is implemented in JavaScript, and lets you call JavaScript functions and integrate libraries with it, like eCraft2Learn.

There's a certain honesty and practical utility about starting simple but working your way towards an actual programming language that kids will encounter in the real world.

Of course I'm disappointed that language isn't PostScript, Lisp, Scheme, Logo, or ScriptX, but JavaScript won the popularity contest, and that's the world we live in.

Despite all its tragic flaws and questionable orientations, I think JavaScript is really a great language to teach kids early on. It's not going away, and it's not purely functional or object oriented, and there's nothing that comes close to it in terms of universality and LLM friendliness.


There are a bunch of considerations. As resolution and color depth go up, it becomes harder to throw a lot of graphical detail on the screen through traditional illustration, so games that went down that route increasingly became flat and cartoonish, while 3D games could be filled with textures, lights and "greeble" architecture. It was also a way to enforce style consistency. LucasArts didn't have "art direction" as a role until DOTT, and their earlier games show a lot of style drift between assets. Early 3D enforced a style through its constraints - often not a good one, but definitely something that could look consistent just through the model/texture/light process separation.

The biggest one is actually animation. Animation gets expensive as you add more detail, and when you add resolution, you discover a need to add more frames of animation to make it still look smooth, so your art costs can explode. The use of 3D here is motivated by having camera-independent animation, and being able to use it for every minor environmental effect: think of every Myst-style game where you pull levers and push buttons and open doors. Character animation in early 3D was bad, but it was also "enough" to look representative, so it ended up beating traditional or live action approaches.


Stringref is an extremely thoughtful proposal for strings in WebAssembly. It’s surprising, in a way, how thoughtful one need be about strings.

Here's an aside; I promise it'll be relevant. I once visited Gerry Sussman in his office. He was very busy preparing for a class, and I was surprised to see that he was preparing his slides on oldschool overhead projector transparencies. "It's because I hate computers," he said, and complained about how he could design a computer from top to bottom, and all its operating system components, but found any program that wasn't emacs or a terminal frustrating, difficult, and unintuitive to use (picking up and dropping his mouse to dramatic effect).

And he said another thing, with a sigh, which has stuck with me: “Strings aren’t strings anymore.”

If you lived through the Python 2 to Python 3 transition, and especially if you went from a Python 2 world where most of the applications you worked with (with an anglophone-centric bias) were probably just using ASCII to suddenly having Unicode errors all the time as you built internationally-viable applications, you'll also recognize the motivation to redesign strings as a very thoughtful and separate thing from "bytestrings", as Python 3 did. Python 2 to Python 3 may have been a painful transition, but dealing with text in Python 3 is mountains better than it was beforehand.
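
The Python 3 lesson, in miniature: str is a sequence of Unicode code points, bytes is raw data, and crossing between them is always explicit.

```python
# Text and bytes are distinct types; encoding/decoding is explicit.
s = "café"
b = s.encode("utf-8")
print(len(s), len(b))  # 4 code points, but 5 bytes ('é' encodes as two)
assert b.decode("utf-8") == s

try:
    s + b  # Python 2 would silently coerce; Python 3 refuses
except TypeError:
    print("str + bytes is a TypeError")
```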

The WebAssembly world has not, as a whole, learned this lesson yet. This will probably start to change soon as more and more higher level languages start to enter the world thanks to WASM GC landing, but for right now the thinking about strings for most of the world is very C-brained, very Python 2. Stringref recognizes that if WASM is going to be the universal VM it hopes to be, strings are one of the things that need to be designed very thoughtfully, both for the future we want and for the present we have to live in (ugh, all that UTF-16 surrogate pair pain!). Perhaps it is too early or too beautiful for this world. I hope it gets a good chance.


Back around the turn of the millennium, there was a company called AllAdvantage. They paid you to install spyware/ad injection software that watched you browse, and sold the ad space and analytics to corporations. They'd pay you for... I think it was 48 hours of ad-injected, spied-upon browsing per month, and then stop paying you (but keep injecting ads and spying on you). There was also a pyramid aspect where you'd get something like 10% of the amount earned by your direct referrals, with no monthly cap. Also, 48 hours of browsing wasn't enough to hit the minimum threshold for AllAdvantage to mail you a cheque.

Edit: maybe there wasn't actually spyware and it just injected extra banner ads in your browsing. I never looked into installing it myself.

A /16 subnet was routed to our fraternity house, licensed to house up to 22 people. 65,536 (minus broadcast, gateway, and network address) IPv4 addresses for 22 people. My roommate bought 1 GB of RAM (about $4k at the time) and a VMWare student license for his Linux desktop. He cut down Win95 to be able to run in 32 MB of RAM (including his COM scripting bot, Internet Explorer, and the AllAdvantage spyware). I seem to remember him configuring the VMs to run 16-bit color to save memory footprint. He scripted the Win95 boot process to read a CSV file off of NFS, remove the top line, and write the file back. The CSV file contained fake name, fake address, etc. The VM would register itself with AllAdvantage, with my roommate as the referrer, and then randomly click on links in Internet Explorer until hitting the payout limit, and then shut down the VM. A Perl script (remember the late 90s?) on the Linux host would re-launch a clean VM every time an old VM shut down, and keep the CSV populated with fake account details.
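
The boot-time CSV step can be sketched in a few lines; the file name and fields here are my own invention (the real thing lived on NFS and fed a Win95 startup script):

```python
# Sketch: each VM pops the top identity off a shared CSV at boot and
# writes the remainder back for the next VM.
import csv

def pop_identity(path):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows[1:])
    return rows[0]

# Demo with a throwaway file of fake identities.
with open("identities.csv", "w", newline="") as f:
    f.write("Fake Name,123 Fake St\nOther Name,456 Elm St\n")
print(pop_identity("identities.csv"))  # ['Fake Name', '123 Fake St']
```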

30 VMs were browsing 24x7 for AllAdvantage. My roommate set up a caching proxy on his Linux box, so he didn't hose the house's T1 connection. 10% of the payout (the referral fees) over something like 4-5 months paid for the whole desktop. AllAdvantage never got returned cheques from the fake addresses because they never paid out. I think he ran his system for over a year before AllAdvantage went out of business, for a total of something like $12k in profit.

He ran his own DNS server that hopped randomly all over the /16 to reduce the probability of detection. He's pretty convinced AllAdvantage's fraud people noticed him as an extreme outlier. He suspects they ignored him because the data he was generating for them cost 1/11th as much as most of the other data they were selling to customers.

Edit: a quick search shows the AllAdvantage rate was maybe $0.40/hr. 10% of this was $0.04 x 30 VMs = $1.20/hr 24x7. 8766 hours/year works out to about $10,000 per year. $12k in profit, $4k in RAM, and $1k for the rest of the machine works out to a bit under 2 years of running the system, if the rest of my memory is roughly accurate.
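
For what it's worth, the arithmetic in that edit holds up:

```python
# Rough check of the payout estimate above.
rate_per_hour = 0.40    # approximate AllAdvantage rate, $/hr
referral_cut = 0.10     # referrer's share
vms = 30
hourly = rate_per_hour * referral_cut * vms
yearly = hourly * 8766  # hours in an average year
print(round(hourly, 2), round(yearly))  # 1.2 10519
```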

A few years later, our school kept the /16 allocated to us, but only routed the first /24 to the house. I'm sure my roommate wasn't the only one to get up to shenanigans with so many IP addresses.

Edit: He also found some online casinos that didn't explicitly forbid bots and he set up some poker bots that would keep track of its winning percentages against all other players. He set up some monitoring/control software for his feature phone (or was it a PDA?) so he could watch his losses from class and shut it down if necessary.

He kept records of every card seen in every game his bots played. I asked on at least 3 occasions for access to that data, to check for (1) naive shuffling (2) using a linear congruential generator instead of cryptographic quality random numbers and (3) seeding with time instead of a true random seed. He told me at least 3 times that he would give me FTP access to card histories, but never did. A couple years later, a paper came out detailing a code review of the most common online poker software finding (1) naive shuffling (2) using a linear congruential generator (3) seeded using only the time the game started and (4) containing an off-by-one error in the naive shuffle. The off-by-one error might have prevented me from figuring it all out from the poker bot histories, but there's some alternate history where we made millions in online poker, fully within the published rules of the sites. (Unfortunately, the millions would have come entirely from other players, the online casinos not bearing any of the costs of the shoddy coding.)
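
The naive-shuffle flaw that paper described is easy to demonstrate: swapping each position with an index drawn from the whole deck gives n^n equally likely swap sequences spread over n! orderings, and since n^n isn't divisible by n!, some orderings must come up more often than others.

```python
# Demonstrating the bias of a naive shuffle on a 3-card deck.
import random
from collections import Counter

def naive_shuffle(deck, rng):
    for i in range(len(deck)):
        j = rng.randrange(len(deck))  # bug: should be rng.randrange(i, len(deck))
        deck[i], deck[j] = deck[j], deck[i]
    return deck

rng = random.Random(0)
counts = Counter(tuple(naive_shuffle([0, 1, 2], rng)) for _ in range(60000))
# A correct Fisher-Yates would put each of the 6 permutations near 10000;
# here they split into two bands (around 4/27 and 5/27 of the trials).
print(sorted(counts.values()))
```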

He mused several times that it would be fun to create a cardboard box with one of those see-through windows for a shipping label... and two subtle slits allowing a continuous roll of various shipping addresses and an advancement mechanism to be hidden within the package. He'd use a battery and/or inertial energy harvesting weight to power a device to change the shipping address every 4 hours. He wanted to send such a package with tracking information and watch it ping-pong around the country until someone realized something was fishy with the package.

He eventually dropped out of school and was living off of his poker bots until (without health insurance) his appendix burst and he was forced to get a day job to pay off his medical debt.

I hope he gets elected to Congress someday (though he's not very political) just to make a great epilogue to a biographical film.

