Oxide Cloud Computer. No Cables. No Assembly. Just Cloud (oxide.computer)
149 points by vmoore on March 24, 2024 | 111 comments


It’ll probably be a while before I get a chance to work on one of these machines, but I had a chance to meet a couple of employees, Steve Klabnik and Travis Haymore, at a “Beers and Boards” meetup Oxide put on after a conference in Raleigh last year. They were really cool, as were a lot of the local folks who showed up. Would highly recommend going if they ever do one in your city!


Steve is so cool. I wish we had more Steves. We need more people like Steve.


Aw shucks, both of you, thanks :D


Thanks for what? I don’t get it!


I am the Steve that the two posts above refer to. They both said nice things about me, and that deserves a thank you.


Now I get it. You ARE a nice guy!

I also realized I never read usernames.


Wishcasting a future Framework [1] laptop, bundled with Oxide rack for local and remote management:

  AMD Pro CPU with SKINIT and SEV
  AMD OpenSIL + OSS coreboot firmware
  Motherboard with Infineon 9672 (or newer) TPM for DRTM secure launch
  ECC memory
  Add-on modules for OcuLink [2] (external PCIe) and Nitrokey (2FA, HSM) with OSS Rust firmware [3]
  OS support for QubesOS (with Oxide management VM) or Oxide custom OS
This could be used in the following business contexts:

  High-integrity client workstation within Oxide manufacturing supply chain(s)
  Customer local admin of Oxide rack
  Customer remote admin of Oxide rack, with mutual attestation
  Oxide remote troubleshooting of customer Oxide rack, with mutual attestation
Plus demand-generating use cases from buyers of the equivalent Framework laptop model, who can install their preferred OSS components, including but not limited to the above business contexts.

[1] Framework, https://oxide.computer/podcasts/oxide-and-friends/1632642. Lenovo and other OEMs may follow Framework's lead.

[2] OcuLink expansion module, https://community.frame.work/t/oculink-expansion-bay-module/...

[3] Nitrokey Rust firmware, https://github.com/Nitrokey/nitrokey-3-firmware


I'm trying to make sense of what this is. I don't get what the unique selling point is. I understand that this is about server hosting. And from context I gather that this is about Rust. It seems that special/custom hardware is involved. And they advocate for buying instead of renting servers. But I can't figure out more than that.

Who should be interested in this product? Does it make sense to compare this to AWS, Google Cloud or Azure?


I don't understand why this exact comment is posted on every thread about them - it's not very complicated.

"Cloud computing" style systems are nice in some ways - you can just ask a computer to give you some virtual computers and virtual storage and it gives it to you. Whoever owns them can put quotas or pricing or whatever on you, but you can self-serve, and you don't have to care about replacing DIMMs or NVMe sticks or whatever.

Having some random American megacorp host things in a datacenter is good for some people, bad for others. You might not want to be in their legal jurisdiction, or you're legally not allowed to, or you just don't want to, or their prices for your volume are too high, or you don't want to be locked in to whatever future bad choices they make.

So, Oxide made racks of machines you can buy, plug in, and then have a cloud-style (virtual machine, virtual storage, virtual network) system at home.

I really really don't understand what is hard to understand.


Cloud is a nebulous term (pun intended); it can mean many things (SaaS? IaaS? PaaS?), and of course "your own cloud" is a self-contradictory term to boot, so it leaves you wondering what parts will be different and how. It might not be complicated, but it's not easy to communicate if you start with that word.


Also, they provide end-to-end attestation on the entire software stack up to each workload. They can tell you exactly what firmware is running on each chip on their stack, etc. The hypervisor they use is pretty cool, too.


> I really really don't understand what is hard to understand.

I mean, not every single person on HN is a 10x developer that knows 300 programming languages known to man and 45 more known only to catgirls.

I'm a daytime Windows admin, this isn't stuff I normally work with, especially because it's targeted at a specific stack of things that I don't touch.

I really really don't understand what is hard to understand.


It seems like they're vertically integrating everything from hardware to the hypervisor/orchestration layer (something that serves the same function as Kubernetes?) along with their own developer tooling for deploying and managing workloads.

edit: And it seems like it's aimed at companies that don't want to pay cloud margins, but don't (yet) have the expertise to set up a production-worthy Kubernetes (or similar) cluster from scratch. An opinionated appliance vs DIY approach.


The idea of a company buying an out of the box k8s because they’re not able to set it up themselves sounds insane to me. What’s the plan when it breaks? Send the server back?


> out of the box k8s

Just to be clear, though this did seem to get cleared up below, the level of abstraction you're working with on an Oxide rack is VMs, not k8s. If you wanted to run your own k8s on top, you could.

> because they’re not able to set it up themselves sounds insane to me

It is not about ability. It's about quality, and what you want to spend time on vs what you want to spend money on. (and of course time is money...)

There's a lot that goes into building and maintaining a private cloud. Some would prefer to build it themselves, some would prefer to focus on their core business and buy something that works well out of the box.

> What’s the plan when it breaks? Send the server back?

Building a robust product is very important to us, but so is supporting it. If something breaks, you contact support, and it gets sorted.

An advantage here is because we have created almost everything ourselves, under the same roof, we have fantastic insight into how the system works. No pointing the blame at some other vendor's firmware!


You can check out SimpliVity from HPE, which does something similar, but in a more traditional enterprise setup. We use them as local VMware and storage for factories and warehouses where compute and storage need to be physically close and we don’t want to bother with hardware maintenance.


I imagine the expectation is that the company obviously needs to know how to perform regular hardware maintenance, but the hypervisor/orchestration layer should be so well integrated that it doesn't require an expert team to operate (like you would need with a self-hosted Kubernetes cluster, which is notoriously difficult to understand and operate in production).

If that illusion breaks and you need to get into the weeds in the same way you do with self-hosted k8s, then the value proposition of their product goes poof. I'm just speculating, of course.


> self-hosted Kubernetes cluster, which is notoriously difficult to understand and operate in production).

Yeah, that was my reference point - cluster at home. It breaks often and hard, and I usually end up wiping it. Good for home use, but I’d not want to rely on it for prod.


What distribution are you using? I’ve been running k3os (with etcd) for about two years without issues, other than Rancher Labs dropping support for it so it’s well past EoL.


Vanilla K3s on a mixture of node types. Mostly ARM

It is a practice cluster though so I’m not exactly careful in my experimenting


    > buying an out of the box k8s because they’re not able to set it up themselves
As I understand, this is exactly the purpose of Red Hat's OpenShift. It is a layer over k8s with a friendly GUI. I use it at work, and I don't have a clue about k8s.


Why would you think they are limiting themselves to Kubernetes?


I don't, I was just using it as a point of comparison against Oxide's own software stack.


It seems this is a mainframe (a complete, supported, vertically integrated solution), just not from IBM.


It's like Openstack-in-a-box except with a promise that it'll behave more like AWS than Openstack. And you buy hardware.


Companies that run large on-prem workloads and want to have similar hardware/software to what the likes of Google, FB etc. have in their data centers


Does that mean that Oxide sells computers that are optimized for on-premise cloud deployments?


I think they sell something like a mini, on-prem GCP / AWS. You just connect the thing to power and Ethernet, and now you’ve got your own locally running web interface / API which can start VMs, networks, etc.


Oh neat, I can have my own DigitalOcean with one of these?!


Not quite. They don't offer out of the box managed services like DigitalOcean. No one-click MongoDB or stuff like that. No out of the box Kubernetes.

You basically get Networking, Compute and Storage. Everything else you have to build on top.

You also get stuff like a Terraform provider, SDKs, and so on.
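For a flavor of what that infrastructure-as-code angle might look like, here is a minimal Terraform sketch. The resource and attribute names below are my illustrative guesses, not copied from the provider's documented schema:

```hcl
# Hypothetical sketch of defining a VM on an Oxide rack via Terraform.
# "oxide_instance" and its attributes are assumptions for illustration,
# not the provider's documented schema.
resource "oxide_instance" "web" {
  project_id = var.project_id
  name       = "web-01"
  ncpus      = 4
  memory     = 8 * 1024 * 1024 * 1024 # 8 GiB, assuming bytes as the unit
}
```

The point being: you describe VMs, networks, and disks declaratively, and the rack's API provisions them, much like you would against AWS or GCP.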

> I can

I guess if you are a millionaire you could.


Ah okay, so you get the foundation. Still, pretty cool.


This is correct, yes.


That’s exactly what this is advertising, yes.


Thanks. But as someone who isn't familiar with Oxide, this was not clear to me at all. The "Just Cloud" threw me off and I was reminded of shuttle.rs which also advertises easy deployment of (Rust) services. I could not see that this was supposed to be about on-premise hardware.


Bunch of discussion four months ago regarding their official blog post:

https://news.ycombinator.com/item?id=38023891


Curious if oxide is considering a 1-2U product.

Feel like there is a larger potential customer base there but it also seems like they would lose the edge they built by owning the full rack. (I.e. integrating with customer TORs and network fabric is a nightmare.)


Not at the moment, but never say never.


There needs to be a dev kit of sorts. I’d be happy to recommend Oxide to some of our customers but not before I try it first. And I’m not buying a whole data centre just to play around?


It's still early days for us obviously, but we have some of our equipment in a cage in a regular colocation facility, on the Internet. We're generally able to provide access to systems there so that folks can kick the tyres as part of a pre-sales engagement. If you or your customers are interested, you're always welcome to reach out to our sales folks and have a chat!


Maybe they could rent a rack with a somewhat direct access on a per-month basis or something so we can POC around, but that could turn them into a cloud company which is probably not what they want.


I hear you. We currently have space on a rack allocated for potential customers to kick the tires, but it's not more broadly available than that.


Yeah definitely. I used to work for an AI hardware company that only sold $150k systems to "POA" customers. I think part of the reason they didn't do very well is it was completely inaccessible to normal people.


OxidePad Laptop product based on Helios with NeWS Window System. When?


I know Don Hopkins hangs out here sometimes, but the big question is did NeWS source code ever get liberated from the depths of Mordoracle?


Naw, Sun never liberated it.

Sun did announce "NeWS is Free (only $1000)", which was kind of a kick in the balls and an abuse of the word "free".

"Free (only $1000)" discussion on usenet:

https://groups.google.com/g/comp.windows.news/c/-G1l_372AP8/...

https://groups.google.com/g/comp.windows.news/c/YtZzYMZ587M/...

https://groups.google.com/g/comp.org.sug/c/IXcXK7hKgUQ/m/nf8...

A couple of my flames about the situation:

https://www.donhopkins.com/home/archive/NeWS/Explanation.txt

https://www.donhopkins.com/home/archive/NeWS/sevans.txt

And to cleanse the palate, DSRH's epic "Sun Deskset == Roy Lichtenstein Painting on your Bedroom Wall" flame (David SH Rosenthal was one of the original authors of NeWS, with James Gosling, and also wrote the X11 ICCCM):

https://news.ycombinator.com/item?id=38768724


I don't really know. I was just trying to come up with some obscure references. Solaris on desktop wasn't crazy enough.

I think a modern take on a NeWS-style system would use WebAssembly with capability-based access, rather than PostScript with no security. Basically a modern browser with Canvas/WebGL, without a lot of the other stuff a browser does.

Would have been an interesting alternative to the Wayland approach.

Kind of sad that Scott McNealy didn't have the balls to open it up. Having some real competition to X in the 90s would have been cool. Especially if Sun had pushed it at least somewhat.


That would be kind of cool.


A 6U product kind of like the blade server enclosures could be interesting too. That said, I haven't worked in a datacenter for 14 years, so don't listen to me too seriously...


I'd love a homelab sized version of the oxide system. Just looks so amazing


I recently watched:

Unplugging the Debugger - Live and postmortem debugging in a remote system - Matt Keeter [1]

The talk was at the Open Source Firmware Conference.

Pretty cool look into how their system works under the hood.

[1] https://vimeo.com/877092565


I am unspeakably biased, but I love this talk from Matt!


It's a great talk. I had to laugh when the guy asked 'why not gdb'. Matt's answer was very diplomatic; I could imagine you would have gone on some kind of rant :)


Would love to know what the minimum buy in is on one of these


The specifications page [1] gives a bit more context. I think minimum buy is about a half rack, which includes at least 16 64-core CPUs, 16 TiB of RAM, and 465.75 TiB of NVMe SSD storage. Playing around a bit with the Dell server configurator tool, it seems like that is going to come in a rough ballpark of $1MM as stated in a sibling comment.

[1]: https://oxide.computer/product/specifications
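As a rough cross-check on that storage figure, a sketch under my own assumption of ten 3.2 TB U.2 drives per sled across 16 sleds (the drive count is a guess, not from Oxide's spec page):

```python
# Vendors quote drives in decimal terabytes (TB); the spec sheet quotes
# binary tebibytes (TiB). Convert to see whether the numbers line up.
TB = 10**12
TiB = 2**40

per_sled_bytes = 10 * 3.2 * TB          # ten 3.2 TB drives per sled (assumed)
total_tib = 16 * per_sled_bytes / TiB   # 16 sleds in the half rack
print(f"{total_tib:.2f} TiB")           # ~465.66 TiB, close to the quoted 465.75
```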


I do not purchase hardware, but $1MM is way above what I would have expected. Going to Dell, the most expensive pre-built rack mount starts at ~$30k. Assuming 16 of those only gets you to $480k. Throw in an extra premium for the rack itself plus small-company margins, and I'm still reaching to get to that price point.


The price delta is the out-of-the-box cloud orchestration value (imho). Most large enterprises would struggle to build this themselves (Mesos->OpenShift->Kubernetes/Tanzu/etc), so you’re paying for turnkey cloud on prem. You probably save in the long run, considering public cloud margins.

An enterprise CIO doesn’t want a hobby project (attempting to cobble together internal cloud orchestration and infra); they want to be able to show immediate business value. You charge what the market will bear. I’ve seen many companies with thousands of employees, spending millions or even tens of millions a month on public cloud providers, just flail, unable to get to a steady state post-transformation (even after years of trying). This is made for those folks, especially with VMware self-inflicting harm under Broadcom's recent strategy decisions.

“Write check. Cloud up.”

(no affiliation)


I mean…you could also just get a z/VM system and have a few LPARs on it and just use Ansible for orchestration. Why wouldn’t an enterprise CIO just go for a mainframe system?


"just" is doing a lot of heavy lifting here, I'm not the target customer base for one of these, but if they can deliver a server rack that teams can plug in, turn on, and start deploying workloads to it in the same way they currently deploy to public clouds with familiar tooling, that seems extremely valuable to me.

It's going to depend on how well they manage to pull off the magic trick of "little or no configuration and maintenance required". If things start breaking in hard to diagnose ways, it's going to be just another broken appliance that requires expensive maintenance, and companies will be questioning why they didn't DIY it in the first place.


If there is one company that has made 'make it easy to debug issues' their core philosophy, it's them.

It's almost all open software; that helps a lot. They add a minimal amount of firmware, rather than the many millions of lines of firmware that are usually around. And most of the stuff they added is Rust on a microkernel. (Check out the talk I linked top level to see some of their low-level debugging infrastructure.)

Too bad they can't (yet) get open firmware into the NIC, the SSDs, and some of those other places (time for an Oxide-like company that makes P4-driven NICs). But nobody else can really offer that either.

The only real issue for them is that illumos is the host OS. It's open source and stable of course, and has good debugging tools. But in terms of industry experience, people with deep knowledge of the system are harder to find compared to Linux.

They of course also add some complex software on top that will have to work properly: moving VMs, distributed storage, and so on.

Full DIY is pretty damn hard; you need a serious team to pull that off. Dell VxRail/VMware is the more reasonable competition. I think VMware going full Broadcom mode will make Oxide more interesting. Buying into that ecosystem isn't that appealing right now.


Getting the same performance and features out of a mainframe would be considerably more expensive, I would guess. And in addition to that, you are buying into an incredibly closed ecosystem where prices only go up from there.

You are also paying for a bunch of stuff you don't need. Most people just don't need to hot-swap a CPU, or to turn these single-socket 128-core machines into a gigantic 4096-core machine either.

Simply moving virtual machines off and restarting or replacing a sled is enough for the vast majority of use cases.

This is still pretty much a commodity single-socket server platform, just with more sane and open firmware and a sane, open-source software stack.


Are you sure you're comparing equivalent memory and storage specs? I needed to go into the customization menus in the Dell configurator to spec something equivalent, where prices started going up quite rapidly.

For example "3.2TB Enterprise NVMe Mixed Use AG Drive U.2 Gen4 with carrier" is $3,301.65 each, and you'd need 10 of those to match the Oxide storage spec -- already above the $30k total price you quoted. Similarly, "128GB LRDIMM, 3200MT/s, Quad Rank" was $3,384.79 each, and you'd need 8 of those to reach the 1TiB of memory per server Oxide provides.

With just the RAM and SSD cost quoted by Dell, I get to $60k per server (x16 = $960k), which isn't counting CPU, power, or networking.

I agree these costs are way way way higher than what I'd expect for consumer RAM or SSD, but I think if Oxide is charging in line with Dell they should be asking at least $1MM for that hardware. (At least compared to Dell's list prices -- I don't purchase enterprise hardware either so I don't know how much discounting is typical)

Edit: the specific Dell server model I was working off of for configuration was called "PowerEdge R6515 Rack Server", since it was one of the few I found that allowed selecting the exact same AMD EPYC CPU model that Oxide uses [1]

[1]: https://www.dell.com/en-us/shop/dell-poweredge-servers/power...
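Putting the quoted Dell list prices together (storage and RAM only; CPU, chassis, power, and networking excluded):

```python
# Per-server component cost from the Dell configurator list prices above.
ssd_price, ssd_count = 3_301.65, 10    # 3.2TB NVMe U.2 drives
dimm_price, dimm_count = 3_384.79, 8   # 128GB LRDIMMs (1 TiB per server)

per_server = ssd_price * ssd_count + dimm_price * dimm_count
per_rack = 16 * per_server

print(f"per server: ${per_server:,.2f}")  # $60,094.82
print(f"16 servers: ${per_rack:,.2f}")    # $961,517.12, i.e. roughly $960k
```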


> For example "3.2TB Enterprise NVMe Mixed Use AG Drive U.2 Gen4 with carrier" is $3,301.65 each

That’s the pricing for people who don’t know to ask for real pricing — it’s an absolute joke. I don’t know how much extra margin gets captured here, but it’s a lot.

Even in teeny tiny volumes, Dell will give something closer to real pricing, and a decent heuristic is that it’s at least 2x cheaper.

This is a real SSD. Dell likely buys this brand and others:

https://www.serversupply.com/SSD/NVMe/30.72TB/KIOXIA/KCMY1RU...

Yes, that is almost an order of magnitude cheaper per TiB. If you buy from a sketchier vendor, you’ll get all the way to 10x :)


Doesn’t Oxide also negotiate pricing?


And they keep the margin to have money for R&D. I kind of get it because it’s low volume for now but I don’t necessarily see the appeal of being an early adopter here.


The appeal is that you're buying into an open infrastructure system, rather than closed firmware and VMware (now owned by... Broadcom).


Thanks for the link, and very good to know. I've always struggled to find component prices for Kioxia drives and higher-capacity RAM sticks so it's good to see I can finally look these up on serversupply when I'm curious.


$480k + switches + management + support + virtualization licenses + integration - it adds up. It will also probably take you at least 3x as long. I can think of lots of examples where this premium for an Apple-like UX is totally worth it.


If you’re thinking about it with the mindset of what the minimum is, you’re not a prospective customer.


IIRC they mentioned on one of the pods that some configs go for a million. I'm assuming it’s the 16-sled one, but they didn’t share the actual specs.


Not making a value comparison here, but reminds me strongly of the "engineered systems" of the early 2000s (where you buy a box+database all in one go from HP)...and most recently of the new Nexus stuff coming out of Microsoft's acquisition of the ATT cloud people.

See https://learn.microsoft.com/azure/operator-nexus/azure-opera...

AIUI Microsoft will ask you to buy several racks worth of (oem?) server gear and switch fabric, configure it to load up their version of kubernetes, and then leave you to run whatever workloads you like (or they approve of? Not sure) with the hook being that you can manage it all from azure.

Pointed strongly at telcos, and I imagine that you can't get this without spending at least a quarter mil on hardware. Plus whatever Azure fees there are? I wonder how many MSFT expect to sell, especially as telcos with spare cash are like unicorns.


Are there any videos or screencasts of one of these in operation? I’d love to see a fresh-out-of-box to up-and-running walkthrough similar to what VMware produced for Tanzu. It’s such a niche thing that lots of tech people who’d be really interested will never get to play with one, but it seems there’s not much material out there.


The console is open source, and you can try a demo of it in your browser

https://console-preview.oxide.computer/

https://github.com/oxidecomputer/console

I also wrote up a blog post walking through me setting up a server by hand: https://steveklabnik.com/writing/using-the-oxide-console

You can also use the API, there's a terraform provider, etc.


Poking through Oxide's OSS a few months ago made me want to work with the company so badly. But I am honestly just not (yet) a strong enough candidate - perhaps in a couple of years! In the meantime I will keep rooting around in hubris[0] :)

[0] https://github.com/oxidecomputer/hubris - fascinating little kernel


I feel very similarly. I have 0 Rust experience and 0 OS dev experience, but I love Unix-like OSes and I am super super interested in what these guys are doing.


> The console is open source, and you can try a demo of it in your browser

> https://console-preview.oxide.computer/

That's pretty cool! The design language is a nice touch for sure.


Is this a rebrand? I don’t remember it being called a “cloud computer”. (Or why else is this on HN?)


I am... also a bit surprised that it's on HN right now? (We introduced the term when we launched last year.[0][1])

[0] https://oxide.computer/blog/the-cloud-computer

[1] https://oxide-and-friends.transistor.fm/episodes/launching-t...


I agree. I remember Oxide as a balls-to-the-wall hardcore server rack, baked soup to nuts by Oxide so they could provide an exceptional performance and software stack. This "cloud" thing is new.


I think the core idea hasn't changed (it's generally expected that hardware like this is usually divvied up into VMs); just the branding went from 'hyperscaler for the rest of us' (which is a bit opaque to non-techies) to 'it's cloud, but in a box'.


That's correct, yes.


Yeah, I didn't remember this kind of 'branding' message before either... (it was servers?)... but the marketing changed slightly when they hit a milestone 4 months ago.

Bunch of discussion then: https://news.ycombinator.com/item?id=38023891


Probably makes sense when teams have to sell that kind of buy to their managers.

We are not just buying servers, we are buying 'cloud'.


No, they've called it that for a while.

Why is it on HN? Maybe somebody discovered it for the first time?


I get the hardware side.

I don't get the platform side.

What guest OSes does it support? Can you create "bare-metal" applications that run in some kind of container on it? Does this resemble a reinvented ESXi?

How does the performance and redundancy of their storage layer compare to something like GRAID?


Timing couldn’t be better. VMware is actively firing staff and pissing off large swaths of their customer base, and basically Nutanix is the only serious alternative for on-prem.

What is the total overhead (in terms of cores, memory) of the management layer with Oxide (incl. block storage, vmm, etc.)?


I didn't think oxide was gonna make sense when they first announced it. What was gonna be their competitive advantage? I thought.

I'm seriously impressed at how much they improved the on prem experience


> Get In Touch

> Contact Sales

Nope, hard pass. If you don't list your prices on your website I'm never going to be a customer.


You were never their target audience. There is no one-price, off-the-shelf pricing for this kind of hardware.


That's what they told Michael Dell, too.


...

dell famously doesn't have one price either.


Of course not. The idea was that you'd get a base price from the ad, then call Dell's 1-800 number to order and customize the system. That way you didn't have to waste time haggling with salesdroids if the price was 10x what you could afford.

They didn't just "tell him," they laughed their heads off at him. Advertising prices and selling direct Just Was Not Done. Until it was, leaving a trail of bankrupt competitors that looked like that road in Iraq.


but they do generally have a price. Even if it's the super-high no-one-should-actually-pay-this price, it really helps with evaluating whether something should even be on the list to contact sales about.


Grift pricing denotes a grift. If there's no price then the price is made up.


All prices anywhere are "just made up". It's cost + taxes + profit margin you want to make on this deal. The market elasticity determines how often you want to move that price up or down.


All prices are made up, except maybe “efficient” commodity markets..


Enterprise pricing is absolutely a grift, but one where the grifted and grifters are promoted when they fall for the grift.


I'd bet money there's thousands of b2b deals that look exactly like this.


Considering most of the B2B deals are a huge PITA I’m not sure this is a good thing?


It is not a consumer item. The people who would consider this product are well acquainted with involving salespeople.


That's great as far as they're concerned. This is a seven digit purchase with a lot of moving parts. They need to know that you can actually pay them and you need to get an opportunity for your lawyers to grill them and get them on the hook for as much as possible. The big leagues aren't for everybody.


You were gonna drop $2.5M on one of these racks?


On Visa card.


You joke but apparently a lot of startups are putting their cloud bill on a credit card for the points. Sometimes even a personal card.


It's (unofficially) a half million to a million depending on configuration.


It’s a great filter. If the price is a "contact us" it means they target businesses with more money than IT skills.


Welcome to b2b.


Is this like a nutanix? What is this?


I wrote a bunch of comments about this in the past. Happy to answer questions though! https://news.ycombinator.com/item?id=30678324


I’m curious to know if this is a good business model.

Is there a market for these?


Yeah, private cloud racks are an established market. Nutanix, VxRack, UCS, etc.




