The problem is that it costs almost nothing ($150-180 over 10 years) to keep it registered versus the anticipated payday from selling it (over $9,000!).
A former friend has been doing a "trick" for a few years with GoDaddy domain auctions. He finds a quite good domain on GoDaddy auctions whose registration is expiring soon (most people set the auction end date to when it expires) and uses a backorder service or two to capture it if it expires. Using one of a number of alternate fake GoDaddy accounts, he bids absolutely bonkers amounts that he has no intention of paying. When the auction ends, he ignores payment demands, and the next guy in line won't pay their bid because they've already moved on. The domain expires without a winning bid, and the owner doesn't notice that it wasn't paid for before it gets released. His backorder provider then snatches it up. He's made about $50,000 from reselling domains he got for cheap this way, even though it costs him about $1,000 per year to maintain all these accounts. As scummy as it is, his rule is that if a domain doesn't sell during the next two registration years, he lets it expire.
I interviewed with Doug Lenat when I was a 17-year-old high school student, and he hired me as a summer intern at Cycorp - my first actual programming job.
That internship was life-changing for me, and I'll always be grateful to him for taking a wild bet on what was literally a kid.
Doug was a brilliant computer scientist, and a pioneer of artificial intelligence. Though I was very junior at Cycorp, it was a small company so I sat in many meetings with him. It was obvious that he understood every detail of how the technology worked, and was extremely smart.
Cycorp was 30 years ahead of its time and never actually worked. For those who don't know, it was essentially the first OpenAI - the first large-scale commercial effort to create general artificial intelligence.
I learned a lot from Doug about how to be incredibly ambitious, and how to not give up. Doug worked on Cycorp for multiple decades. It never really took off, but he managed to keep funding it and keep hiring great people so he could keep plugging away at the problem. I know very few people who have stuck with an idea for so long.
As Roger Schank defined the terms in the '70s, "Neat" refers to using a single formal paradigm (logic, math, neural networks, LLMs), like physics. "Scruffy" refers to combining many different algorithms and approaches (symbolic manipulation, hand-coded logic, knowledge engineering, CYC), like biology.
I believe both approaches are useful and can be combined, layered, and fed back into each other, to reinforce, complement, and transcend each other's advantages and limitations.
Kind of like how Hailey and Justin Bieber make the perfect couple ;)
"We should take our cue from biology rather than physics..." -Marvin Minsky
>To get around these limitations, we must develop systems that combine the expressiveness and procedural versatility of symbolic systems with the fuzziness and adaptiveness of connectionist representations. Why has there been so little work on synthesizing these techniques? I suspect that it is because both of these AI communities suffer from a common cultural-philosophical disposition: They would like to explain intelligence in the image of what was successful in physics—by minimizing the amount and variety of its assumptions. But this seems to be a wrong ideal. We should take our cue from biology rather than physics because what we call thinking does not directly emerge from a few fundamental principles of wave-function symmetry and exclusion rules. Mental activities are not the sort of unitary or elementary phenomenon that can be described by a few mathematical operations on logical axioms. Instead, the functions performed by the brain are the products of the work of thousands of different, specialized subsystems, the intricate product of hundreds of millions of years of biological evolution. We cannot hope to understand such an organization by emulating the techniques of those particle physicists who search for the simplest possible unifying conceptions. Constructing a mind is simply a different kind of problem—how to synthesize organizational systems that can support a large enough diversity of different schemes yet enable them to work together to exploit one another’s abilities.
>In the history of artificial intelligence, neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 70s and was a subject of discussion until the middle 80s.[1][2][3]
>"Neats" use algorithms based on a single formal paradigm, such as logic, mathematical optimization or neural networks. Neats verify their programs are correct with theorems and mathematical rigor. Neat researchers and analysts tend to express the hope that this single formal paradigm can be extended and improved to achieve general intelligence and superintelligence.
>"Scruffies" use any number of different algorithms and methods to achieve intelligent behavior. Scruffies rely on incremental testing to verify their programs and scruffy programming requires large amounts of hand coding or knowledge engineering. Scruffies have argued that general intelligence can only be implemented by solving a large number of essentially unrelated problems, and that there is no magic bullet that will allow programs to develop general intelligence autonomously.
>John Brockman compares the neat approach to physics, in that it uses simple mathematical models as its foundation. The scruffy approach is more like biology, where much of the work involves studying and categorizing diverse phenomena.[a]
[...]
>Modern AI as both neat and scruffy
>New statistical and mathematical approaches to AI were developed in the 1990s, using highly developed formalisms such as mathematical optimization and neural networks. Pamela McCorduck wrote that "As I write, AI enjoys a Neat hegemony, people who believe that machine intelligence, at least, is best expressed in logical, even mathematical terms."[6] This general trend towards more formal methods in AI was described as "the victory of the neats" by Peter Norvig and Stuart Russell in 2003.[18]
>However, by 2021, Russell and Norvig had changed their minds.[19] Deep learning networks and machine learning in general require extensive fine tuning -- they must be iteratively tested until they begin to show the desired behavior. This is a scruffy methodology.
Walter Benjamin wrote about this all the way back in the 1930s. He observed that early art, like frescoes painted on walls and sculptures in temples, required the viewer to travel to it, but it gave way to paintings on canvas and busts that could travel to cities to meet audiences where they were.
Technology continued to push this trend: reproducing art through photography, and printing it in books and newspapers, let it move even further, meeting people in their own homes.
These current patterns you are seeing are an extension of this: the relationship between art and viewer has inverted, art is now expected to come to us, and the focus has moved to within ourselves.
Marshall McLuhan also expanded on this, and on the idea of technology as extensions of us, in his work "Understanding Media: The Extensions of Man", if you'd like to read more.
I've flown hydrogen balloons before (just because it was cheaper than helium--although you do need different fittings for the tank). I've also lit them on fire just to see what happened.
I don't think they're as dangerous as people think. If they ignite, they go up in a whoosh, not a bang. The only debris is your payload, now falling. So as long as your payload is not also made out of flammable material (as was the case with the Hindenburg), I don't think it's any more of a fire threat than having power lines near trees is in the first place. Up is conveniently the right direction for a ball of flame.
Of course all of this goes out the window if you let it become entangled in a tree...
I am one of the co-founders of the fortran-lang effort. I did it while I was at LANL, where I worked as a scientist for almost 9 years. I think the report is overly pessimistic. Here is my full reply on the report: https://fortran-lang.discourse.group/t/an-evaluation-of-risk....
It has really helped although obviously it took surgery and then also nine months of slowly tweaking the settings.
Before the VNS they could (for example) not go on a trampoline for more than a few minutes without having a seizure, but now they're fine all day. They did still have seizures at night after the VNS but we tackled those with a different treatment.
The Sentiva 1000 sends regular soft pulses (for one minute every 3.5 minutes) and can also react to heart rate rising suddenly (which might mean a seizure) by automatically increasing its pulses. During a seizure if we want to manually activate the device we swipe over its location with a strong magnet and that activates it to send stronger pulses for a minute or so.
Batteries last about eight years. A few times a year we go to check the battery: the nurses have an iPad and a wand-type thing that they hold over the implant's location, and it uses some sort of low-power NFC to read data and diagnostics from the implant. When we do need to change the battery, that will be an operation, but less complicated than the initial operation (and even that was in-and-out in one day).
This was a nice surprise to see today as an ex-Excel developer who worked on trying to bring Python to Excel (and, I guess, failing ;)).
7+ years ago I had the option of leaving the Excel team. My then boss’s boss knew I had an interest in bringing Python to Excel and offered me a chance to tackle it if I chose to stay. What was meant to be a 6 month project turned into a ~3 year project, the Python part faded away and we ended up enabling JavaScript Custom Functions in Excel instead.
For Python we were also running ‘in the cloud’ (AzureML v1), although there was some back-and-forth on whether we should run locally. I think what made the Python part disappear was that our partner AzureML team re-orged, re-released, and re-hired, we lost a PM, and our work caught the attention of another partner team who realised they could use our code to execute their JavaScript out-of-process. And so I spent a lot of time ensuring that feature shipped successfully, to, I guess, the detriment of Python.
I had a lot of help from some strong engineers and learnt a lot. The core of the work was modifying the calculation engine of Excel to allow functions to compute asynchronously, allowing the user to continue working on other parts of their spreadsheet while the remote endpoint (be it JavaScript, Python or something else) was computing. Previously the spreadsheet would lock up while calculations were running, and that wouldn’t be cool for long-running unbounded calculations. Have to wonder if any of the stuff we built made it into this new feature.
Super great to see this and look forward to trying it out.
Mr. Rogers was my actual neighbor in Pittsburgh in 1999-2000, while I was at CMU. He would really go out of his way to have social interactions. He would always say hello and ask how you were doing in a way that felt like he genuinely wanted to know the answer. A case of the person in real life being exactly what he seemed like on TV.
I was working on Distributed Services for AIX in 1986 and 1987, a distributed filesystem to compete with NFS. As this was being developed by a dev team, my colleague and I pondered how to test the system that we had architected.
There were so many possible states that a system's file system could be in. Were the conventional tests going to catch subtle bugs? Here's an example of an unusual but not unheard-of issue: in a Unix system a file can be unlinked, and hence deleted from all directories, while remaining open by one or more processes. Such a file can still be written to and read by multiple processes at least until it is finally closed by all processes holding open file descriptors, at which point it disappears from the system. Does the distributed file system model this correctly? Many other strange combinations of system calls might be applied to the file system. Would the tests exercise these?
It occurred to me that the "correct" behavior for any sequence of system calls could be defined by just running the sequence on a local file system and comparing the results with running an identical sequence of system calls against the distributed file system.
I built a system to generate random sequences of file related system calls that would run these on a local file system and a remote distributed file system that I wanted to test. As soon as a difference in outcome resulted the test would halt and save a log of the sequence of operations.
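For anyone curious what that looks like in modern terms, here is a rough, hypothetical Python sketch of the same idea; the mount points and the tiny operation set are made up, and the original tooling was of course written against AIX Distributed Services rather than anything like this:

```python
import os
import random

# Hypothetical mount points: a local filesystem and the distributed one under test.
LOCAL_ROOT = "/tmp/local_fs"          # must already exist
REMOTE_ROOT = "/mnt/dfs_under_test"   # must already exist

OPS = ["create", "write", "read", "unlink"]

def apply_op(root, op, name, data, fds):
    """Apply one file operation under `root` and return an observable result."""
    path = os.path.join(root, name)
    try:
        if op == "create":
            fds[name] = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
            return "ok"
        if op == "write" and name in fds:
            return os.write(fds[name], data)
        if op == "read" and name in fds:
            os.lseek(fds[name], 0, os.SEEK_SET)
            return os.read(fds[name], 4096)
        if op == "unlink":
            os.unlink(path)            # file may stay readable through open fds
            return "ok"
        return "noop"
    except OSError as e:
        return ("errno", e.errno)      # error behavior has to match too

def run_trial(seed, steps=10000):
    rng = random.Random(seed)
    local_fds, remote_fds, log = {}, {}, []
    for i in range(steps):
        op = rng.choice(OPS)
        name = f"f{rng.randrange(4)}"
        data = bytes([rng.randrange(256)]) * rng.randrange(1, 64)
        log.append((op, name, data))
        local = apply_op(LOCAL_ROOT, op, name, data, local_fds)
        remote = apply_op(REMOTE_ROOT, op, name, data, remote_fds)
        if local != remote:            # divergence: halt and keep the log for replay
            print(f"step {i}: {op} {name}: local={local!r} remote={remote!r}")
            return log
    return None
```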
My experience with this test rig was interesting. At first, discrepancies happened right away. As each bug was fixed by the dev team, we would start the stochastic testing again, and then a new bug would be found. Over time the test would run for a few minutes before failure, then a few minutes longer, and finally for hours and hours. It was a really interesting and effective way to find some of the more subtle bugs in the code. I don't recall if I published this information internally or not.
I have taught a course on quantum computing a few times, mostly to CS students who have no background in quantum mechanics. The way I proceed is to
* First introduce classical reversible computation. I model it using linear algebra, meaning classical n-bit states are length-2^n binary vectors, and the gates are 2^n x 2^n binary matrices acting on these states. Exponential, yes, but a faithful model. The critical feature here is that you already need the tensor product structure, rather than it being some unique feature of quantum.
* Introduce probabilistic classical computation. Now the states/vectors have real entries in [0,1] and obey the L1 norm (the critical feature). Similarly for the gate matrices.
* Now, argue that quantum computing just requires the same linear algebraic structure, but we (1) work over the complex number field, and (2) the norm is L2.
The reason I like this development is that it takes at least some of the mystery out of quantum mechanics. It is not a strange model of computation, completely divorced from classical. Just a variant of it, that happens to be the one the universe runs on.
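To make that concrete, here is a minimal numpy sketch of the three stages; the specific gates (NOT, a fair coin flip, Hadamard) are just my illustrative choices, not something from the course:

```python
import numpy as np

# 1. Classical reversible computation: a definite 1-bit state is a one-hot
#    vector, and a reversible gate is a permutation matrix.
zero = np.array([1, 0])                # the state |0>
NOT = np.array([[0, 1],
                [1, 0]])               # reversible NOT
print(NOT @ zero)                      # [0 1] -> the state |1>

# 2. Probabilistic classical computation: entries in [0, 1], L1 norm preserved,
#    gates become stochastic matrices.
COIN = np.array([[0.5, 0.5],
                 [0.5, 0.5]])          # a fair coin flip
print(COIN @ zero)                     # [0.5 0.5] -> uniform over {0, 1}

# 3. Quantum: complex entries, L2 norm preserved, gates are unitary.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate
psi = H @ zero.astype(complex)
print(np.abs(psi) ** 2)                # measurement probabilities [0.5 0.5]

# In all three models, multi-bit states come from the tensor product:
print(np.kron(zero, zero))             # |00> as a length-4 (2^2) vector
```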
Peter Shor does discuss classical computation in two lectures, but from just the notes it seems detached from the rest of the course.
Stan and I worked together at DreamWorks Animation and I kept in touch with him after we both moved on. At DreamWorks, Stan was dealing with some difficult problems such as moving our code from 32 to 64 bits.
It is inevitable that Stan will be remembered for the C++ Primer. When being introduced to Stan, some percentage of people would recognize his name and ask about the book or some other aspect of C++ history. Stan would kindly respond, but I always got the sense that there were other things he would rather discuss than time spent with Bjarne in front of a whiteboard or dealings with various standards bodies.
Stan was complicated and complex. I feel that he would much rather be remembered as a father, an artist, a dancer and a lover of beauty. We usually can't determine how we will be most remembered, but Stan's work on the C++ Primer, while important, is low on my list of memories of him.
No idea why this was posted today, but I'm one of the two people who put the Mooninites up that night.
(Also we had put another 20 or so up two weeks prior)
It was incredibly stressful as my friend/roommate Zebbler and I kept calling the people who'd hired us once we saw something went wrong. They said they had it under control, and not to call the police.
It took them from 10am (our first call) until around 4pm to notify the Boston police. By which point the city was shut down, and lots of people were pissed off. The police needed someone to blame for people's frustrations around traffic etc.
Zebbler and I cooperated fully, but we were still arrested and thrown in jail (without being offered any dinner or blankets even, in January in Boston).
To us, this was clearly post-9/11 over-reaction. We wanted to draw attention to how absurd it was for the police to be accusing us of intentionally trying to scare people by planting "hoax devices".
There was a lot of press outside, and we knew we wouldn't be able to talk about the case. There was also a good 100+ people who had organized to protest in support of us on Livejournal :)
We knew this moment (of police over-reach and culture-deafness) deserved more than "no comment".
We brainstormed for a moment when we were brought back together in the courtroom and decided on our topic for the press:
Hairstyles of the 1970's.
Something neither of us knew anything about.
We're both pretty good at thinking on our feet.
Couldn't have imagined the situation would become so big.
I wanted to go to court and ride it out, but Zebbler was in the USA on political asylum from Belarus. They strongly suggested that if we both didn't go along with community service, he'd lose that and any hope of citizenship, and quite possibly be deported. (He's now a full US citizen :)
Someone at a nearby hospital saw what was happening and requested we do our community service there, and they made it as pleasant as possible.
Surreal experience!
Many things made it feel like fun was illegal in Boston.
I've since moved to San Francisco Bay Area.
I've started Momentum Infinity, a non-profit to help people amplify their creative abilities with technology. Link in profile.
There's a classic question of "what happens when you load a website?", but I've always been more interested in "what happens when you run a program?". About 3 months ago, I was really annoyed at myself for not knowing how to answer that question so I decided to teach myself.
I taught myself everything else I know in programming, so this should be easy, right? NOPE! Apparently everything online about how operating systems and CPUs work is terrible. There are, like, no resources. Everything sucks. So while I was teaching myself I realized, hey, I should make a really good resource myself. So I started taking notes on what I was learning, and ended up with a 60-page Google Doc. And then I started writing.
And while I was writing, it turned out that most of the stuff in that giant doc was wrong. And I had to do more research. And I iterated and iterated and iterated and the internet resources continued to be terrible so I needed to make the article better. Then I realized it needed diagrams and drawings, but I didn't know how to do art, so I just pulled out Figma and started experimenting. I had a Wacom tablet lying around that I won at some hackathon, so I used that to draw some things.
Now, about 3 months later, I have something I'm really proud of! I'm happy to finally share the final version of Putting the "You" in CPU, terrible illustrations and all. I built this as part of Hack Club (https://hackclub.com), which is a community of other high schoolers who love computers.
It was cool seeing some (accidental) reception on HN a couple weeks ago while this was still a WIP, I really appreciated the feedback I got. I took some time to substantially clean it up and I'm finally happy to share with the world myself.
The website is a static HTML/CSS project, I wrote everything from scratch (I'm especially proud of the navigation components).
I hope you enjoy, and I hope that this becomes a resource that anyone can use to learn!
I worked on Google Maps monetization, and then on Maps itself.
Monetization was a dismal failure. I don't know how well they're doing now, but Maps was a gigantic money-loser, forever. I'd be a little surprised if it didn't still lose money, but maybe less. I don't know what those "pin ads" cost, but I'd bet it's way less than a search ad.
If you don't believe that, that's fine. "What about indirect revenue?" you ask? Google consciously does not want to estimate that, because such a document could be discovered in patent litigation. As it is, there are tons of patent lawsuits about Maps, and the damage claims always tried to get at Ads revenue, because Maps revenue was nil.
Caveat: I could be way out of date here. I've been retired a while now.
As for the UX: "enshittification" and big-company bureaucracy describe it pretty well.
About 15 years ago there was a cute Flash game where you had a little cube world that was a puzzle: you grew it into a bigger, fancier environment by clicking on trigger points in the correct order.
I have no idea what it was called, and can’t describe it well enough to search for it if it still exists. Every couple of years I try.
Resources like this give me hope that little gems and works of art from the past will live on, even if the underlying tech is gone.
@reset here. I always reply when I get @'d because I love the laugh and also genuinely find joy in looking at what people are building afterwards. Never stop, it's a great part of my day.
I actually was the sole developer who wrote all the software that “networked” the custom hardware together. This project was so ahead of its time and yet required some rather arcane programming knowledge. So much fun though. AMA!
"Overview of SHARD: A System for Highly Available Replicated Data" it's the first paper to introduce the concept of database sharding. It was published in 1988 by the Computer Corporation of America.
It is referenced hundreds of times in many classic papers.
But, here's the thing. It doesn't exist.
Everyone cites Sarin, DeWitt & Rosenb[e|u]rg's paper, but no one seems to have ever seen it. I've emailed dozens of academics, libraries, and archives - none of them have a copy.
So it blows my mind that something so influential is, effectively, a myth.
I'm responsible for this character being supported in Iosevka, JetBrains Mono, 3270, and Cozette, it looks like. For arguments I wanted to stick to the mathematical convention, like f(x), without it looking like a regular variable. While the lowercase 𝕩 (subject role) is more common, uppercase makes it a function and is useful in functional programming. More visibility for the character is helpful if it means wider font support, although the real sticking point has been lousy UTF-16 handling on Windows. Like most emoji, these characters need to be represented as a surrogate pair in UTF-16, and terminals in particular often don't handle that.
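For the curious, the surrogate-pair issue is easy to see from Python (U+1D569 and U+1D54F are the Unicode code points for 𝕩 and 𝕏):

```python
# 𝕩 (U+1D569) and 𝕏 (U+1D54F) sit outside the Basic Multilingual Plane,
# so UTF-16 has to encode each of them as a surrogate pair.
for ch in ("\U0001D569", "\U0001D54F"):
    hi, lo = ch.encode("utf-16-be")[:2], ch.encode("utf-16-be")[2:]
    print(f"U+{ord(ch):X} -> surrogate pair 0x{hi.hex().upper()} 0x{lo.hex().upper()}")
# U+1D569 -> surrogate pair 0xD835 0xDD69
# U+1D54F -> surrogate pair 0xD835 0xDD4F
```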
Helium leaks are a nightmare. During my PhD I worked with a self-built dilution cryostat that would often have leaks in the custom-built Indium seals. To find the leaks you'd have to pump the cryostat to high vacuum, hook up a portable mass spectrometer tuned to Helium to the pump circuit and then spray different parts of the cryostat with Helium from a regular gas canister. The He would then get sucked in through the leak and show up on the mass spectrometer, which was coupled to a loudspeaker so it would cause a sound whose pitch increased with the measured He density. Once you found the leak you had to vent the entire system, remove the faulty seal and replace it with a new one. All seals were handmade, i.e. you took a small filament of Indium, placed it between the flange and the housing (after carefully cleaning everything with Acetone) and carefully screwed it shut, turning each screw only a tiny bit at each turn and going around all the screws until you could see the Indium squeeze out of the edges.
Even worse, some leaks would only show up when the system got cooled down to liquid Helium temperature. When that happened you were out of luck, as you can't cool the system down to 4K before spraying it with Helium, so you had to just guess where the leak might be and replace all seals in that area until you found the right one. Going even deeper in temperature would eventually turn the He4 superfluid, which means that it loses all internal friction. In that state it would squeeze through even the tiniest molecular cracks, so again, if that happened you just had to redo all the seals and hope they would hold.
Bill Thurston! He did amazing topology - hyperbolic knot complements, foliations, eversion of the sphere, the geometrization conjecture... A brilliant man.
I had the honor of meeting him several times here in Berkeley; he encouraged me to make Klein bottles and other topological shapes.
I was working on an old old old "ERP" system written in D3 PICK. It's a database, programming language and OS all in one with roots in tracking military helicopter parts in the 1960's. I was working on it in the mid-2000s.
It had SQL like syntax for manipulating data, but it was interactive. So you would SELECT the rows from the table that you wanted, then those rows would be part of your state. You would then do UPDATE or DELETE without any kind of WHERE, because the state had your filter from the previous SELECT.
It had a fun quirk though - if your SELECT matched no rows, the state would be empty. So SELECT foo WHERE 1=2 would select nothing.
UPDATE and DELETE are perfectly valid actions even without a state...
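For anyone who hasn't used a system like this, here's a toy Python mock of that behaviour (not real D3/PICK syntax or internals, just my sketch of the stateful footgun as described):

```python
class ToyTable:
    """Toy model of the interactive select-then-act behaviour described above."""
    def __init__(self, rows):
        self.rows = dict(rows)        # the whole file/table
        self.selected = None          # no active select-list yet

    def select(self, predicate):
        self.selected = [k for k, v in self.rows.items() if predicate(v)]
        print(f"{len(self.selected)} row(s) selected")

    def delete(self):
        # With a non-empty select-list, delete only those rows; with no
        # select-list (including an *empty* one), act on the whole table.
        targets = self.selected if self.selected else list(self.rows)
        for k in targets:
            del self.rows[k]
        self.selected = None

stkm = ToyTable({1: "receipt", 2: "issue", 3: "transfer"})
stkm.select(lambda row: False)        # matches nothing -> empty state
stkm.delete()                         # quietly falls through to "everything"
print(stkm.rows)                      # {}
```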
Working late one night, I ran a SELECT STKM WHERE something that matched nothing, then, before I realised my state had matched no rows, I followed up with DELETE STKM.
Yep, the entire Stock Movements table for the last 20+ years of business was gone.
The nightly backup had not run, and I didn't want to lose an entire day of processing to roll back to the previous night.
I spent the entire night writing a program to recreate that data based on invoices, purchase orders, stocktake data, etc. I was able to recreate every record and got home about 9am. Lots of lessons learnt that night.
I overlapped at LinkedIn at the same time as the author. While there, I wrote my first (and to date only) FactoryFactory.
LinkedIn replaced its uses of Spring with a thing called Offspring. Offspring explicitly disavowed being a dependency injection framework, but it did a similar job for us. I rather liked it. Notably, you just wrote Java with it. Invariably, in Offspring, you'd have to write a FooFactory to construct your Foo object and inject it into some other class. By convention, all of the factories ended in Factory.
Well, I had a use case for a runtime class that needed to make a per-request factory to make little objects. So to make my Bars, I needed a BarFactory; and to construct the BarFactory, I needed an Offspring factory, thus BarFactoryFactory. There it was. I felt a little weird after that.
I suspect the EventFactoryFactoryFactory code here was such an Offspring factory being used for dependency injection, but I can't explain why it produced a FactoryFactory.
I worked with Ward Cunningham for about a year, and he said once that he regretted coining the phrase “technical debt.” He said it allowed people to think of the debt in a bottomless way: once you’ve accumulated some, why not a little more? After all, the first little bit didn’t hurt us, did it?
The end result of this thinking is the feature factory, where a company only ever builds new features, usually to attract new customers. Necessary refactors are called “tech debt” and left to pile up. Yes, this is just another view of bad management, but still, Ward thought that the metaphor afforded it too easily.
He said he wished instead that he’d coined “opportunity,” as in, producing or consuming it. Good practices produce opportunity. Opportunity can then be consumed in order to meet certain short-term goals.
So it flips the baseline. Rather than having a baseline of quality then dipping below it into tech debt, you’d produce opportunity to put you above the baseline. Once you have this opportunity, you consume it to get back to baseline but not below.
I’m not convinced that the concept phrased thus would have the same traction. Still, I love this way of looking at it, like I love much of Ward’s POV on the world.
That comment isn't about the reason for the function's existence, but about the details of how the function interacts with the GC.
The reason for the function’s existence is that it allows typed arrays to dynamically switch between a fast/compact representation for the case that the JSVM owns the data, and a slightly slower and slightly less compact version for when the JSVM allows native code to share ownership of the data.
This function, slowDownAndWasteMemory, switches to the less efficient version that allows aliasing with native code.
Of course the name is sarcastic. The actual effect of having this is that JSC can handle both the owned case and the aliased case, and get a small win if you're in the owned case while being able to switch to the aliased case at any time. Since there's no way to support the aliased case without slightly higher memory/time usage, we are sort of literally slowing down and wasting memory when we go aliased.
Source: I’m pretty sure I wrote most of this code.
Porphyrios harassed ships in the waters of Constantinople for over fifty years,[7] though not continuously since it at times disappeared for lengthy periods of time.[4] It most frequently appeared in the Bosporus Strait.[1] Porphyrios made no distinctions in regard to which ships it attacked, recorded as having attacked fishing vessels, merchant ships and warships.[1] Many ships were sunk by Porphyrios, and its mere reputation terrified the crews of many more; ships often took detours to go around the waters where the whale most commonly swam.[4] Emperor Justinian I (r. 527–565), perplexed by the whale attacks and wishing to keep sea routes safe,[11] made it a matter of great concern to capture Porphyrios, though he was unable to devise a means through which to do this.[1][4][12]
SBCL is an implementation I love working with because updates are steady and the software is stable.
But the real superpower, in my opinion, is that, because the compiler and standard library are written in Common Lisp, you can reach into the internals of SBCL for your own projects, as if SBCL were just another Lisp library. Is it advisable to use unsupported APIs? Definitely not. But it's nice to have seamless access to the same facilities and optimization tools (e.g., DEFTRANSFORM, DEFINE-VOP) that SBCL uses for its own implementation. You can build impressively clear and highly efficient code this way, essentially by extending the compiler "in userspace".