> Portable Genera, an official port of the VLM to Intel and ARM done under contract for existing customers. While this version isn't publicly available as of this writing, it's still actively developed.
As the person ultimately responsible for the Minecraft Wiki ending up in the hands of Fandom, I'm glad to see what Weird Gloop (and similar groups) are achieving. At the time we sold, the Minecraft Wiki and Minecraft Forum cost tens of thousands of dollars per month to run, so it didn't feel too much like selling out; we needed the money to survive[1]. Fifteen years later, the internet is a different place, and with the availability of Cloudflare, running high-traffic websites is much more cost-effective.
If I could do things over again, on today's internet, I like to believe Weird Gloop is the type of organisation we would have built rather than ending up inside Fandom's machine. I guess that's all to say: thank you Weird Gloop for achieving what we couldn't (and sorry to all who have suffered Fandom when reading about Minecraft over the years).
[1] That's a bit of a cop-out; we did have options, and the decision to sell was mostly driven by me being a dumb kid. In hindsight, we could have achieved independent sustainability; it was just far beyond what my tiny little mind could imagine.
Story time: years ago I worked on a telemedicine web app, before telemedicine was nearly as popular as it is today. Part of the application had patients filling out questionnaires online so their answers could be shown to the doctors. We were onboarding different parts of a large healthcare system throughout all this (cardio, GI, etc.), and each had questionnaires that required different logic for when and how to display the questions, so the application had a fairly powerful system for driving the conditional logic of when questions do and do not show up.
Well, one day I am working on a new set of features to support the new clinic that's coming online, and for whatever reason the question that should by all rights show up does not. As I get deeper into debugging why, I pepper the code with nonsensical and slightly angry debug statements that show up alongside the questions. After solving the problem I happily clean up, commit the fixed code, and move on to the next thing.
Well, it turns out I didn't clean up all the debug statements. The statement I left in said "I SEE YOU!!!" in big red letters if you answered a particular set of questions in a particular way. This was discovered by a patient. Of the psychiatric clinic that had just come online. On the questionnaire meant to evaluate paranoia.
Since then I have started using things like aaa and 111 as my debug markers.
Always cool to see new mutex implementations and shootouts between them, but I don’t like how this one is benchmarked. Looks like a microbenchmark.
Most of us who ship fast locks use very large multithreaded programs as our primary way of testing performance. The things that make a mutex fast or slow seem to be different for complex workloads with varied critical section length, varied numbers of threads contending, and varying levels of contention.
(Source: I wrote the fast locks that WebKit uses, I’m the person who invented the ParkingLot abstraction for lock impls (now also used in Rust and Unreal Engine), and I previously did research on fast locks for Java and have a paper about that.)
> I am certain they never intended to inspire a 12 year-old kid to find a better life.
i can't speak for everyone, but as one of the people writing tutorials and faqs and helping people learn to do things with free software during the period miller is talking about, that is absolutely what i intended to do. and, from the number of people i knew who were excited to work on olpc, conectar igualdad, and huayra linux, i think it was actually a pretty common motivation
as a kid on bbses, fidonet, and the internet, i benefited to an unimaginable degree from other people's generosity in sharing their learning and their inventions (which is what software is). how could i not want to do the same?
underwritten by the nsf, the internet was a gift economy, like burning man: people giving away things of value to all comers because if you don't do that maybe it's because you can't. the good parts of it still are
Dang, hits home. When I was a senior in high school, I was lucky to be able to volunteer under Dr. Eric Brown De Colstoun at NASA Goddard, checking error rates for tree cover estimates using Landsat data^. Many hours that fall were spent trudging around parks and forests, looking at the sky through a PVC pipe. It still kind of blows my mind how much can be gained from images where each pixel covers 15m x 15m of ground area (and, I believe, with an important component of Landsat 7's imaging system broken for most of its lifespan).
I also wasn't aware that Landsat program imagery had been made free to access a few years later. Nice.
^(A massive thank you to him, since I wouldn't have graduated without being able to participate in that project. And a massive apology for going on to get a fine arts degree.)
I think I was the person who originally proposed to implement the crew control UI in a web browser, and I participated in a week-long retreat in beautiful Bend, Oregon where we implemented the first prototype.
At the time, some very good flight software engineers had been working diligently on a new UI framework that was written in the same code style and process as the rest of our flight software. However, I noticed a classic problem - we were working on the UI platform at the same time that we were trying to design and prototype the actual UI.
I made some observations:
1) We can create a prototype right now in Chrome, with all its inherent versatility.
2) The chip running the UI can actually reasonably run Chrome.
3) Web browsers are historically known for crashing, but that's partly because they have to handle every page on the whole Internet. A static system with the same browser running a single website, heavily tested, may be reliable enough for our needs.
4) We can always go back and reimplement the UI on top of the space-grade UI platform, and actually it'll be a lot easier, because we will know exactly what functionality we need out of that platform.
The prototype was a great success; we were able to implement a lot of interesting UI in just a week.
I left SpaceX before Crew Dragon launched, so I'm not sure what ended up launching or what the state of affairs is today. I remember hearing feedback from testing sessions that the astronauts were pleasantly surprised when we were able to live-edit a button after they commented that it was too hard to reliably press with their gloved fingers.
As for reliability, to do a fair analysis you need to understand the requirements of the mission. Only then can you start thinking about faults and how to mitigate them. This isn't like Apollo where the astronauts had to physically reconfigure the spacecraft for each phase of the mission -- to an exceptionally large extent, Dragon flies itself. As a minor example of systemic fault tolerance, each display is individually controlled by its own processor. If a display fails, whether due to Chrome or cosmic radiation, an astronaut can simply use a different display.
Also, as a side note regarding "touchscreens" -- I believe some (very important) buttons did launch with Crew Dragon, but buttons and wiring are heavy, and weight is the enemy. If you're going to have a screen anyways, making it a touchscreen adds relatively trivial weight.
Guy here, who programmed the C++ implementation of Operator: It was a pleasure to build the instrument together with Robert, and I learned a ton from him.
In the 2009 upgrade I replaced the aliasing wavetables with bandlimited ones, generated using an IFFT, one table per octave. With 2x oversampling, it became aliasing-free as long as you didn't use FM. Once the IFFT was in place, adding the feature of drawing harmonics became obvious.
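The per-octave idea, in rough NumPy pseudocode rather than the actual C++ (the table size, tuning, and oversampled rate here are arbitrary illustrative choices): each octave's table keeps only the harmonics that stay below Nyquist for the highest fundamental it has to serve, and the table itself comes out of an inverse FFT.

```python
# Rough NumPy sketch of per-octave bandlimited wavetables via inverse FFT.
# Table size, reference tuning, and the oversampled rate are arbitrary here.
import numpy as np

TABLE_SIZE = 2048             # samples per single-cycle table
OVERSAMPLED_RATE = 2 * 44100  # 2x oversampling

def bandlimited_tables(harmonic_amps, num_octaves=10, f0=27.5):
    """One table per octave; octave k serves fundamentals up to f0 * 2**(k+1)."""
    tables = []
    for k in range(num_octaves):
        top_fundamental = f0 * 2 ** (k + 1)
        # Keep only harmonics that stay below Nyquist for the highest note served.
        max_harm = int((OVERSAMPLED_RATE / 2) // top_fundamental)
        n = min(max_harm, len(harmonic_amps), TABLE_SIZE // 2)
        spectrum = np.zeros(TABLE_SIZE // 2 + 1, dtype=complex)
        spectrum[1:n + 1] = harmonic_amps[:n]   # bin i holds the i-th harmonic
        tables.append(np.fft.irfft(spectrum, TABLE_SIZE))
    return tables

# "Drawing harmonics" then just means editing harmonic_amps, e.g. a saw-like 1/n:
saw_tables = bandlimited_tables(1.0 / np.arange(1, 513))
```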
Fun fact: The four oscillators were calculated in parallel using SSE intrinsics. It’s the only time I’ve ever been able to improve the performance of something using that particular technology.
For me personally, Operator is a pinnacle of my engineering career: it is one of the most-used synthesizers in the world, though of course there are much better ones out there.
Several years ago I came across the first issue of "Television" magazine from 1928, and reading it blew my mind in a couple of ways. First, the overall tone is remarkably similar to a 1970s homebrew computer club newsletter, including defining what "television" even is (and isn't). For example, we learn on page 10 that "television is not tele-photography."
It's clear from this magazine that early television was the domain of home tinkerers and hackers. On page 26 is a detailed tutorial on how to construct your own selenium condenser cell from scratch, including which London chemist had appropriately high-quality selenium, where to buy copper sheets, mica insulator (.008 thick) and brass bars. Well worth a read: https://comicbookplus.com/?dlid=37097
It still amazes me that analog television was not only prototyped nearly a hundred years ago but was already being deployed at vast consumer scale ~75 years ago. It's worth understanding a bit about how it works just to appreciate what a wildly ambitious hack it was. From real-time image acquisition to transmission to display, many of the fundamental technologies didn't even exist and had to be invented and then perfected for it to work.
I met my Zapier co-founder bryanh through HN 15 years ago, when someone made a service similar to OP's called "hacker newsers". We were the only two people listed in Missouri at the time, which led to a meetup. https://news.ycombinator.com/item?id=1520916
That's a good article. The whole history is there.
The commercial side has made huge progress, too. Look up "diamond making machine" on Alibaba. You can buy a high-pressure, high-temperature six-sided press for about US$200,000. A chemical vapor deposition machine is about the same price.
De Beers, the diamond cartel, has an R&D operation, Element Six. They sell synthetic diamonds for lasers and other exotic applications. The technology is good enough to achieve flaw levels in the parts per billion range, and to make diamond windows for lasers 10cm across.[1]
This is way above jewelry grade.
Over on the natural diamond side, there's been a breakthrough. The industry used to break up some large diamonds during rock crushing. Now there's an industrial X-ray system which is used to examine rocks before crushing to find diamonds. It's working quite well; a 2,500 carat diamond was found recently.[1][2] TOMRA, which makes high-volume sorters for everything from recyclables to rice, has a sorter for this job. This is working so well that there's now something of a glut of giant diamonds too big for jewelry.
The finishing processes of cutting and polishing have been automated. The machinery for that comes mostly from China and India.
Diamonds are now something you can buy by the kilo, in plastic bags.
A sad day. My buddy and I were the original developers of AnandTech when it went live, running on ColdFusion with Oracle as the backend. I started a hosting company and hosted AnandTech for a few years. Lots of memories there.
Ah, this brings back memories. Reddit was one of the very first users of EBS back in 2008. I thought I was so clever when I figured out that I could get more IOPS if I built a software RAID out of five EBS volumes.
At the time each volume had very inconsistent performance, so I would launch seven or eight, and then run some write and read loads on each. I'd take the five best performers and put them into a Linux software RAID.
In the good case, I got the desired effect -- I did in fact get more IOPS than 5x a single node. But in the bad case, oh boy was it bad.
What I didn't realize was that with a software RAID, if one volume is slow, the entire array moves at the speed of the slowest volume. So this would manifest as a database going bad. It took a while to figure out it was the RAID that was the problem. And even then, removing the bad volume was hard -- the software RAID really didn't want to let go of it until it could finish writing out to it, which of course was super slow.
And then I would put in a new EBS volume and have to rebuild the array, which of course was also slow because it was bottlenecked on the IOPS of the new volume.
We moved off of those software raids after a while. We almost never used EBS at Netflix, in part because I would tell everyone who would listen about my folly at reddit, and because they had already standardized on using only local disk before I ever got there.
And an amusing side note, when AWS had that massive EBS outage, I still worked at reddit and I was actually watching Netflix while I was waiting for the EBS to come back so I could fix all the databases. When I interviewed at Netflix one of the questions I asked them was "how were you still up during the EBS outage?", and they said, "Oh, we just don't use EBS".
> Forgings have the added advantage of variable grain direction which generally can be tailored to the stress patterns of a specific design.
This is a super underappreciated fact! It's often repeated that forging is just stronger, but just squishing steel does NOT make it stronger. Forging a part is so much more than just smashing it into a shape.
Steel cable is made of pretty ordinary steel which is stretched to hundreds of times its original length. That process alone makes it 2-4x stronger in that direction: you stretch steel and it gets stronger along the direction of the stretch.
Do you see how complicated that optimization process becomes? The process steps are not just trying to take it to the final shape. Your piston rod needs to be strong lengthwise, so you actually want to start with a short fat ingot and stretch it out instead of one that is near-final size.
Think of making an I-beam. You could hammer out the middle, making it thinner. That would give you a bit of strength there but very little on the edges. If you instead pull the edges out, you create a long continuous stretch that will be very strong against bending. Where, how, and in what order you stretch makes all the difference. You may want to leave extra material and cut it off later, so that your grains are all oriented together instead of tapering to a point.
For any moderately complex part, this process is as complicated as modern engineering problems. With poor steel you genuinely need to understand how to foster and bring out those continuous lines or your corkscrew will unwind like playdough. Blacksmiths had a legitimately intellectual job back in the day!
I worked on this experiment as an undergrad ~10 years ago, during my freshman year! We built a Cherenkov radiation detector and focusing magnets, and did tons of simulations.
This is all from memory, but I remember the beamline setup was to get protons from the accelerator there and smash them into a target, which produced various charged particles that could be focused with the magnets and sent down a long pipe, where they would decay into neutrinos et al. Then there's a near detector and a far detector (the far detector deep underground in South Dakota). The aim is to measure the neutrino flavors at both detectors to better understand the flavor oscillations (and look for asymmetries between neutrino/anti-neutrino oscillations, hopefully to help explain the matter/antimatter asymmetry in the universe).
The particular bit I worked most on was studying the effects of adding an additional solid absorber at the end of the beamline, which was needed to absorb all the particles that didn't decay in the pipe. It would produce extra unfocused neutrinos, which would skew the near-far flavor statistics (since these would be detected at the near detector but not the far one). It was a great intro to doing physics research :-)
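To make the near/far comparison concrete, here's a toy sketch using the standard two-flavor oscillation approximation; the baseline, beam energy, and mixing parameters below are illustrative guesses on my part, not the experiment's official numbers.

```python
# Toy two-flavor approximation of muon-neutrino survival probability.
# Baseline, energy, and oscillation parameters are illustrative assumptions.
import math

def numu_survival(L_km, E_GeV, sin2_2theta=0.99, dm2_eV2=2.5e-3):
    """P(nu_mu -> nu_mu) = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

print(numu_survival(L_km=0.5, E_GeV=2.5))    # near detector: ~1.0, no oscillation yet
print(numu_survival(L_km=1300, E_GeV=2.5))   # far detector: heavily depleted
```

The near detector sees an essentially unoscillated beam, so comparing its flavor composition against the far detector's isolates the oscillation effect, which is why extra unfocused neutrinos that mostly reach the near detector are a problem.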
I worked on this movie, I was at DNEG at the time. One of the standout things that I remember is that this particular simulation was toxic to the fileserver that it was being stored on.
From what I recall, I don't think it was running on that many machines at once, mainly because it required the high-memory nodes, which were expensive. I think it was only running on ~10, possibly 50, machines concurrently. But I could be wrong.
What it did have was at least one dedicated fileserver, though. Each of the file servers at the time was a dual-proc Dell 1U box with as much RAM as you could stuff into it at the time (384 gigs, I think). They were attached by SAS to a single 60-drive 4U RAID array (a Dell PowerVault MD3460 or something along those lines; they're rebadged by Dell and were the first practical hotswap enclosure that took normal 3.5" SAS drives and didn't cost the earth).
The array was formatted into 4 RAID6 groups and LVM'd together on the server. It was then shared out by NFS over bonded 10-gig links.
Anyway. That simulation totally fucked the disks in the array. By the time it finished (I think it was a 2 week run time) it had eaten something like 14 hard drives. Every time a new disk was inserted, another would start to fail. It was so close to fucking up the whole time.
I had thought that the simulation was a plugin for Houdini, or one of the other fluid simulation engines we had kicking around, rather than a custom 40k-line C++ program.
This was such important and transformational work and I remember at the time being quite dismissive of it.
I knew Orr's and Suchman's work (they worked in a physically adjacent area but in a completely different group, though we were all under John Seely Brown, and they were nice people). Thankfully I was grown up enough to be polite, but really I was such a techno-determinist that I figured user problems came from ignorance.*
To be fair, I was not the only one: the insights described in this book draft surprised a lot of people, not just how they improved the copiers but how those two even approached the problem (starting with the sociology of the repair workers). It sure surprised Xerox management. But I’ve heard it said many times that this work led to restructuring the paper path in a way that justified (paid for) everything spent on PARC.
I did grow up, of course, and now see my work (machines, chemistry, etc.) as a small part of a large social system. A successful company has to start its product planning from that view.
To choose an example of failure to appreciate the social scope (but not to pick on it): the crypto folks spend their time on technology based on a social model they want to exist rather than the one that currently does. I think that's a big reason why it has barely impacted the world in, what, 15 years? Xerox was the same, and it helped them sell a lot of copiers, but didn't make them as ubiquitous as they could have been. Another example: everybody laughs at Google for launching "products" that go nowhere and are quickly forgotten. We all know it's because of a screwed-up, internally-focused culture. But sometimes a product succeeds without marketing (e.g. Gmail, at the time) because it happened to be matched to the actual, external need. That makes this kind of continuous failure even more damning.
* TBH, 40 years later I have not 100% shed this view — e.g. my attitude towards complaints about git. Maybe this means I’m still a jerk.
I'd forgotten how close to printing machines the old photocopiers were. You would basically either have a crap one you could operate yourself, or take your stuff to the printery to have professionals (a subset of librarians I think, or the logical join over librarians and computer operations staff) do it for you. Printing machines had a fleet of maintainers, craft unions who walked off the job if you touched a dial.
They were amazing at doing things which really mattered: shrinking an A0 architectural drawing down while maintaining aspect ratio. Adjusting offsets of the print for binding signatures, so the 1st and 16th pages were not too far out because of wrapping around the other 8 pairs of pages. Even just working out how to rotate the pages for N-up printing. But the GUI sucked. I think they called ours "the bindery" because its main gig was doing PhD theses from soup to nuts, binding included.
The repair techs had the most amazing flight cases, packed with tools which served one specific purpose. Like a doohickey to adjust the corona wire without dismantling the imaging and toner roller, with a tonne of equipment hovering over your head on a gas lift. Screwdrivers with very, very carefully chosen lengths. Torque wrenches. It was high tech meets motor racing meets... IBM.
I am told they were paid better than many computer techs. The IBM guy was paid IBM scale to fix things on IBM's timescales; the Xerox guy did more random shit, with more devices, more often.
They had a very corporate look. That amazing briefcase, or six. Suit, tie. Very acceptable.
I know a guy who worked for a paper-folding-and-envelope-stuffing company, and it was very similar culturally: can-do, fix anything, but working on giant multi-million dollar machines which were used twice a year to do tax mailouts and election materials, and the rest of the time rented to the original spam merchants for 10c per thousand mailouts. The secondhand value of these machines was, like photocopiers', really significant. He was brought out of retirement to help take one apart into TEU-equivalent chunks to be shipped from Brisbane to Singapore. His retirement gig at one point was repairing espresso machines; he said it made him feel familiar and useful.
The era which was the end of the typing pool was fascinating. All kinds of arcane roles which only make sense in the absence of email and tiny printers everywhere. Some of those jobs had been there from the days of hand-copying, Dickens-era and before.
I was a college swimmer, qualified for Olympic Trials in 2012 and 2016. There are absolutely slow and fast pools. It basically comes down to two things:
1. The depth, which is only 7 ft in Paris, unusually shallow for a competition pool.
2. The sides. Does the water spill over the edges into the gutters, or smash into a wall and bounce back, creating more chop?
A trained eye can see all the swimmers in Paris struggling in their last 10-20 meters (heck, an untrained eye can spot some of these). Bummer that it makes the meet feel slow, but at least it generally affects all the swimmers equally.
I went to an engineering school, and one of the stories the old boys told was that at some point the city had built a new bridge, and tendered the destruction of the old bridge, and we'd put in the winning bid.
The scheduled day came, but only an hour or two after the scheduled time an urgent messenger came from the city: the neighbours were complaining, could they please just destroy the bridge all at once with the next explosion?
It turns out the civil engineers had been enjoying themselves in the interval, checking their modelling by seeing how many parts of the bridge they could blow off of it, while leaving the majority of the structure still standing...
“The Heavy Press Program was a Cold War-era program of the United States Air Force to build the largest forging presses and extrusion presses in the world.” This “program began in 1944 and concluded in 1957 after construction of four forging presses and six extruders, at an overall cost of $279 million. Six of them are still in operation today, manufacturing structural parts for military and commercial aircraft” [1].
$279mm in 1957 dollars is about $3.2bn today [2]. A public cluster of GPUs provided for free to American universities, companies and non-profits might not be a bad idea.
Wow, this hits close to home. Doing a page fault where you can't in the kernel is exactly what I did with the very first patch I submitted after I joined the Microsoft BitLocker team in 2009. I added a check on the driver initialization path and didn't annotate the code as non-paged, because frankly I didn't know at the time that the Windows kernel was paged. All my kernel development experience up to that point was with Linux, which isn't paged.
BitLocker is a storage driver, so that code turned into a circular dependency: the attempt to page in the code resulted in a call into that same not-yet-paged-in code.
The reason I didn't catch it with local testing was that I never tried rebooting with BitLocker enabled on my dev box while I was working on that code. Everyone on the team who did have BitLocker enabled got the BSOD when they rebooted. Even then, the "blast radius" was only the BitLocker team of about 8 devs, since local changes were qualified at the team level before they were merged up the chain.
The controls in place not only protected Windows more generally, they even protected the majority of the Windows development group. It blows my mind that a kernel driver with that level of proliferation in industry could make it out the door apparently without even the most basic level of qualification.
I was not obsolete. At a big company like Apple, there are always things that need taking care of.
I assumed with iOS, Swift, etc., maybe the guys on the Cocoa team were obsolete? Of course not. That code is still there, still needs maintaining, interoperability with the new languages, frameworks, etc.
I'm more surprised they want to stay on.
And that is in fact why I left Apple: the job had changed, the "career" had changed. The engineers were no longer steering the ship. They had been when I started in 1995, though. A "team", let's say the graphics team, would figure out what API to revisit, what new ones to add — perhaps how to refactor the entire underlying workflow. The "tech lead" (who would regularly attend SIGGRAPH, since we're talking about the graphics team) would make the call as to what got priority. Marketing would come around after the fact to "create a narrative" around all the changes to the OS. I hate to say it, but man, those were the good ole' days.
(And let's be clear, in the 90's, Apple's customers were more or less like the engineers, we also loved the machine for the same reasons they did — so we did right by them, made changes they would like because we wanted them too. You can't say that as convincingly for the phone, being a mass consumer device.)
Marketing took the reins long ago though — especially as Apple began to succeed with the iPhone (which, someone can correct me if I am wrong, but I think was an engineer driven project initially — I mean most things were up to that point).
I stuck around nonetheless, because there was money to be made and kids still to raise.
When the last daughter flew the coop though, so did I.
I once bought a vacation home that was a century-old English cottage that went through 7 different owners over time. It once belonged to a US state senator. Another time it belonged to a prominent local businessman who went to jail for white collar crime, and went through a nasty divorce. Anyway, the house had a TL-15 Star Safe embedded in the wall in the master bedroom. The previous owner did not know the combination. Neither did the owner before him. Some unknown person at some point had attempted to open it, as the safe had 3 drill holes on the face plate.
There was a very old sticker on the safe bearing the name of the company who apparently installed it. The phone number was so old it did not have an area code. Fortunately the company still existed after multiple decades. I called them and asked if they could open it in a non-destructive way. One of their technicians came, looked at it and probed it for a couple hours, but determined he could not open it. And the combination had been changed from the manufacturer's default. He gave me the contact info for a reputed safe technician who could help.
Later I called this safe technician, but he was incredibly difficult to get a hold of. I had to leave multiple voicemails and send multiple emails. We chatted briefly one time and he said he would get back to me later to schedule an appointment. But he seemed half-retired and not interested in the job, as I never heard back, despite multiple contact attempts and my offer to pay handsomely. Eventually I became frustrated with his non-responsiveness and stopped caring about the safe.
Fast forward a few years: I was going to sell the vacation home, but I really wanted to open the safe before selling. Curiosity had gotten to me. I searched online for another safe technician and found a supposedly reliable guy. I arranged an appointment and he showed up a few days later. I asked him to open it any way he could, even if he had to destroy the safe. He started drilling, making multiple holes over the course of 2 hours. Eventually he came to me and said he had run out of drill bits, as they had all worn out. He had to leave and promised he would be back.
It took a week for him to come back, early one morning, with more drill bits. He spent another couple hours drilling. Then he put a camera scope in the holes and claimed he could see 3 of the 5 wheels spin while the other 2 were broken. He spent an entire day trying to manipulate the wheels. But after a whole day of work, he came to me with a defeated look and apologized, saying he was sorry but he didn't think he was able to open the safe.
I went back online to find yet another professional who could help. I learned that what I really needed to look for was a professional who is a member of SAVTA (the Safe & Vault Technicians Association). So I found a SAVTA tech who told me on the phone that a TL-15 safe in a residence is unusual, as it is normally made for businesses like jewelry stores. Unfortunately, he said his next availability was about a month out, and I was going to sell the house in the coming weeks.
Eventually I found another SAVTA tech who was available on short notice. He and a colleague arrived one morning, and it took them 3 hours of more drilling and more manipulation to FINALLY open the safe.
Guess what was in it?
Nothing. It was empty! I closed the sale of the house literally 2 weeks later. I was still very relieved to have gone through this hassle to open it; the unsatisfied curiosity, had it never been opened, would have eaten me alive :) Also, I decided that in my next house I wanted a safe rated TL-15, as clearly they can withstand a lot.
I worked in the snowmaking industry at ski resorts for more than a decade before getting into tech. Many ski resorts have a snowmaking reservoir at elevation and a pumping system to fill it (usually off peak) and then use gravity to actually feed the snowmaking guns (at least partially). Almost every snowmaking manager (that I talked to) has had the idea at some point to try some sort of pumped hydro offset, but I'm unaware of anyone who has actually tried it. It would be fairly small scale (reservoirs can be ~20 million gallons, usually less) but it would be interesting to see the economics of it because the infrastructure is already there (pumps, pipes, reservoirs, etc). The systems generally even sit unused for 7-8 months of the year.
I think some of the challenges are that while most resorts have a fairly massive pumping system, it's usually geared towards slowly filling the reservoir, with the rest direct feeding the snow guns. Not many places have the need to fill a 20 million gallon reservoir in a couple of days.
There's also the probability that the head pressures wouldn't work out. Gravity feeding from an upper reservoir near the top of a large mountain can result in thousands of PSI at the bottom if the water isn't passed through a series of pressure relief valves. I'd imagine you would ideally have to build a generating station and a new catch reservoir at the perfect elevation, because if you are pumping a lot higher than needed, the efficiency is going to drop significantly.
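For a rough sense of scale, here's a back-of-the-envelope sketch; the volume, head, and efficiency numbers are made-up illustrative assumptions, not any particular resort's.

```python
# Rough numbers for an elevated snowmaking reservoir used as pumped storage.
# Volume, head, and round-trip efficiency below are illustrative assumptions only.
GAL_TO_M3 = 0.003785   # cubic meters per US gallon
RHO = 1000.0           # kg/m^3, density of water
G = 9.81               # m/s^2

def stored_energy_mwh(volume_gal, head_m, round_trip_eff=0.75):
    """Usable energy from draining the reservoir through a turbine."""
    mass_kg = volume_gal * GAL_TO_M3 * RHO
    return mass_kg * G * head_m * round_trip_eff / 3.6e9   # joules -> MWh

def static_head_psi(head_m):
    """Static pressure at the bottom of the pipe from elevation alone."""
    return RHO * G * head_m / 6894.76                      # pascals -> psi

print(stored_energy_mwh(20_000_000, head_m=500))   # ~77 MWh per full drain
print(static_head_psi(500))                        # ~710 psi of static head
```

At those assumed numbers you'd store on the order of tens of MWh per cycle and see roughly 700 psi of static head, which is why the elevation of the catch reservoir and the pressure management matter so much.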
Based on my understanding, some of the details he gave about the Spyglass/Microsoft situation are not quite right, but I don't think it would be appropriate for me to provide specific corrections.
However, since I was the Project Lead for the Spyglass browser team, there is one correction I can offer: We licensed the Mosaic code, but we never used any of it. Spyglass Mosaic was written from scratch.
In big picture terms, Marc's recollections look essentially correct, and he even shared a couple of credible-looking tidbits that I didn't know.
It was a crazy time. Netscape beat us, but I remember my boss observing that we beat everyone who didn't outspend us by a factor of five. I didn't get mega-rich or mega-famous like Marc (deservedly) did, but I learned a lot, and I remain thankful to have been involved in the story.
I remember being underwhelmed by the WWW before the graphical browser. Gopher, I felt, was superior. I would read about the graphical web browser in magazines, but it required a SLIP connection, which may not have existed at this point.
One day I read about a guy in Brooklyn who had a website at www.soundtube.com and was selling music on the internet. I got in touch and went to his office in Brooklyn to look at his website in a graphical browser.
I then followed his lead in getting set up.
The logo for the site was a half-squeezed tube of toothpaste with the words "Sound Tube" on it.
I don't remember his delivery mechanism. The last time I visited the site, it had the same logo but with the subtext "what could have been".
I occasionally look for more information about Sound Tube.