It sounds like their vision for space-based data centers presupposes nearly-free energy costs, delivered via a colossal solar farm made possible by falling launch costs.
Temporarily putting aside the (extremely fair) feasibility questions around those two prerequisites, data centers are a not-bad choice for what to do with unlimited space energy.
Aluminum smelting or growing food are the two I’d think of otherwise, and neither of those can have inputs/outputs beamed to a global network of high-bandwidth satellites.
Solar energy isn’t that much more efficient in Earth orbit than on Earth - maybe twice as efficient. That sounds nice, but you’re saving half of your solar panel cost while massively increasing every other cost.
The one benefit is being able to sit in a dawn-dusk sun-synchronous orbit, so you don’t have to contend with night. However, that’s just another ~doubling of efficiency, which I think still comes nowhere near making up for the additional costs.
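As a back-of-envelope sketch of the argument above (the two doubling factors are the comment’s own rough estimates, not measured values):

```python
# Illustrative numbers only, pulled from the rough claims above.
insolation_factor = 2.0  # orbital panels see roughly twice the flux of ground panels
uptime_factor = 2.0      # constant sunlight vs. day/night cycling: ~2x capacity factor

per_panel_gain = insolation_factor * uptime_factor
print(per_panel_gain)  # 4.0
```

Even granting both doublings, that ~4x per-panel output gain has to absorb all of the extra costs the comment mentions before orbit breaks even with the ground.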
A competitive geoguesser clearly got there through copious memorization and internet searching. So comparing knowledge retained in the trained model to knowledge retained in the brain feels surprisingly fair.
Conversely, the model sharing, “I found the photo by crawling Instagram and then used an email MCP to ask the user where they took it. It’s in Austria,” is unimpressive.
So independent of whether it actually improves performance, the cheating/not-cheating distinction raises an interesting question: what do we consider to be the cohesive essence of the model?
For example, RAG against a comprehensive local filesystem would also feel like cheating to me - like a human geoguessing in a library filled with encyclopedias. But the fact that vanilla O3 is impressive suggests I somehow hold an opaque (and poorly informed) notion of the model boundary, where it’s a legitimate victory if the model was birthed with that knowledge baked in, but that’s it.
> These companions can take a variety of forms — in the 2004 study, which looked at 100 6- and 7-year olds, 57 percent of imaginary friends were human, 41 percent were animals, and one was “a human capable of transforming herself into any animal the child wanted.”
At work I’ve recently moved from a Node/TS monolith to “Python+TS react in a sea of .NET microservices I debug and contribute to”
It’s been the second time in my career I’ve been surprised by not hating C# (the first was goofing off with Unity in 2018). The language itself has a lot of niceties; for example, the nameof operator turns the variable foo into the string “foo”. The Neovim LSP set itself up just by installing the dotnet executable. And the syntax for creating complex workflows was pretty ergonomic once an experienced .NET dev walked me through what I was even looking at. I still prefer FastAPI + well-typed Python as the backend framework of my dreams… but I’d work in .NET again.
Blazor hasn’t sold me yet, but seems like a fine choice. It fits in the same class of tools to me as Django Templates, HTMX, or JS handlebar rendering. There’s a class and size of apps for which that’s perfect, and there’s some value in a fullstack language keeping your stack monolingual. But IMO the framework should stand on its own against frontend frameworks like React, Vue, or Svelte… with the simplicity of monolingualism added as a cherry on top. Otherwise you’re optimizing for minimizing the number of languages your devs need to learn over which frontend framework would be the best fit for your app. And between the DX and expansiveness of the JS ecosystem, it’s been hard to imagine going back once you’ve spent a few years eating the shamefully-complicated-constantly-shifting-and-reforming elephant that is learning TS React and friends.
> eating the shamefully-complicated-constantly-shifting-and-reforming elephant that is learning TS React and friends
Can I say again how nice it was to use EmberJS from its release candidate days all the way through my seven year tenure at that job? Batteries included, Promises way back in 2013, and way more stable than anything else.
My teenage niece is getting solid at chess, but I can still beat her handily. So we came up with a fun handicap the last few times we’ve played:
Every third turn, my four year old daughter gets to move for me. She doesn’t know the rules, so she chooses a piece and we give her the full rundown of every square that piece can legally move to. Neither of us can influence her choice, but there’s some degree of psychological play allowed for everyone’s entertainment.
It’s been unexpectedly rich and fun for everyone involved:
- My daughter is slowly learning the game and likes hamming up the choice
- I exercise a different part of my brain around guarding eventualities and conservative movements
- Pure cackles of joy and glee from my niece whenever my daughter reaches for the queen
That's so close to a variant we once invented. There were 4 of us, 2 of us good at chess and 2 beginners. We played in teams, a good person and a beginner on each team. We took it in turns to move and you couldn't tell your partner ANYTHING.
As the good player, you had to come up with a good move for the board but also for what your partner might do next. Was fun!
I love that! A very similar situation was also the inspiration for this variant - a beginner friend and I wanted to play but make the game less serious and more fun.
Another application of GLSL/SDL: you can make custom shader materials for yourself in ThreeJS using the ShaderMaterial. You write the GLSL code in a string within the material and it’ll be applied to the mesh the material is attached to.
Gives you the ability to make some cool-looking effects like Fresnel without post-processing filters.
There’s a chance that if Moore’s Law holds, a computer might catch up after decades of continued exponential growth. But my money’s still on the trash spelunker in that race.
Writing a FastAPI websocket that reads from a redis pubsub is a documentation-less flailfest