My favorite TI graphing calculator story to tell is from Algebra II class in high school. While we were studying polynomial expansion, I wrote a program on my TI-85 that would not only solve the problems but also show the work, so I literally only had to copy the program's exact output and it looked exactly like I had done it by hand. I asked the teacher if using it would be cheating, and she said, "If you know the material so well that you can write a program that actually shows the work, then you're going to ace the test anyway, so go ahead and use it, just don't share it with any of your friends."
The joke was on her, of course, because I didn't have any friends. :-(
Later I wrote a basic ray tracer for my TI-89. I even made it do 4x anti-aliasing by rendering the scene four times with the camera angle slightly shifted, then using a program that rapidly flipped between the four rendered pics, so pixels that were dark in only some of the frames appeared grey thanks to the screen's insanely slow response time. A basic "reflective sphere over a checkered plane" at that super low TI-89 resolution still took something like 90 minutes and drained half the battery.
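For anyone curious, the trick is basically supersampling done in time instead of in memory. Here's a rough Python sketch of the averaging that the frame-flipping approximates on the slow LCD (this isn't the original TI-BASIC; render_bw and the jitter offsets are made-up stand-ins for the calculator's 1-bit renderer):

    # Sketch of the "flip between jittered frames" anti-aliasing idea.
    # render_bw() is a hypothetical stand-in for a 1-bit ray tracer: it
    # returns a 2D grid of 0/1 pixels for a given sub-pixel camera offset.
    def render_bw(width, height, cam_offset):
        # Placeholder scene: a filled circle whose edge shifts with the
        # camera offset, so jittered frames disagree near the silhouette.
        cx = width / 2 + cam_offset[0]
        cy = height / 2 + cam_offset[1]
        r = width / 3
        return [[1 if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 else 0
                 for x in range(width)]
                for y in range(height)]

    def antialias(width, height, jitters):
        # Render one 1-bit frame per jitter offset and average them.
        # Flipping the frames quickly on a slow LCD does this averaging
        # optically: pixels dark in only some frames look grey.
        frames = [render_bw(width, height, j) for j in jitters]
        return [[sum(f[y][x] for f in frames) / len(frames)
                 for x in range(width)]
                for y in range(height)]

    if __name__ == "__main__":
        # 4x AA: four sub-pixel camera offsets, one per frame.
        offsets = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
        for row in antialias(16, 16, offsets):
            print("".join(" .:x"[min(3, int(v * 4))] for v in row))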
Asianometry has a good video from a couple of months ago on TI that goes into its history as a chip maker, how it got into calculators and consumer products, and where it stands today.
I can attest that persistent storage is the hard part! Full disclosure: I work for a company[1] that makes a persistent storage solution for containers/Kubernetes. We are absolutely seeing that our large customers (folks like GE, Verizon, Dreamworks, Comcast, etc.) are running "cloud native" applications on-prem as well as in the public cloud, so this is a really smart move for Google.
NIO, the self-driving car company, is doing this. They did a pretty detailed interview on their use case, which includes a 120 PB data lake plus Cassandra, Kafka, TensorFlow, and HDFS. You can read it here: https://portworx.com/architects-corner-kubernetes-satya-koma... (Disclosure: I work for Portworx, the solution they use for container storage, but hopefully the content speaks for itself.)
ClusterHQ was a fantastic learning experience. I'm proud of what we achieved and the many strong relationships that were built in the team.
Ultimately the reason that ClusterHQ failed, I think, was that we believed we had product-market fit before we really did, and we started scaling too soon.
When we started, it wasn't possible to connect storage to containers at all, and so we had to put a lot of work into making that possible. And by the time we'd got Flocker working reliably across AWS, GCE, OpenStack & a dozen or so storage vendors, we'd been commoditized by Kubernetes.
Our premature scaling then made it harder to adapt as fast as we needed to. Many lessons learned!
Michael from Portworx here. Thanks for the shout-out. For some context, we actually announced a partnership with Mesosphere today to help accelerate adoption of DCOS for stateful services [0]. We handle the automation of all the state management mentioned above, not just volume provisioning. Our customers include big companies like GE and Dreamworks, but also a lot of smaller ones. You can use PX-Dev[1] for free on up to 3 nodes. Would love feedback.