
Have done something similar, although 10s seems a bit long. Were you using K8s? With our custom VMM we could get similar instances up and routed within 2s, without K8s.


I'm copying a 4GB root filesystem (on ext4!) per connection, so that's a couple of seconds, more like 10s on my laptop. The kernel boot takes about 2s to reach userspace initialisation. Then userspace runs node.js to bring OpenVSCode up (2-3s?), starts a couple of other processes, and the front-end only polls for readiness every 1s.

It's all on one server: just a Go HTTPS proxy launching Firecracker on demand, no k8s.
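
To make that concrete, here's a rough sketch of that kind of on-demand launch. It's not the actual proxy: the /session route, paths, and config layout are made up; only Firecracker's --api-sock and --config-file flags are real.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
)

const baseRootFS = "/srv/images/base-rootfs.ext4" // prebuilt root filesystem image (placeholder path)

// launchVM copies the base root filesystem for one connection and boots a
// Firecracker microVM for it.
func launchVM(id string) error {
	workDir := filepath.Join("/srv/vms", id)
	if err := os.MkdirAll(workDir, 0o755); err != nil {
		return err
	}
	rootFS := filepath.Join(workDir, "rootfs.ext4")

	// Full per-connection copy of the 4GB ext4 image: this is the step that
	// dominates the ~10s startup described above.
	if out, err := exec.Command("cp", baseRootFS, rootFS).CombinedOutput(); err != nil {
		return fmt.Errorf("copy rootfs: %v: %s", err, out)
	}

	// --api-sock and --config-file are real Firecracker flags; the JSON
	// config (kernel, root drive, vcpus/memory) is assumed to have been
	// written into workDir beforehand.
	cmd := exec.Command("firecracker",
		"--api-sock", filepath.Join(workDir, "fc.sock"),
		"--config-file", filepath.Join(workDir, "vm-config.json"),
	)
	return cmd.Start() // returns as soon as the VMM process is spawned
}

func main() {
	http.HandleFunc("/session", func(w http.ResponseWriter, r *http.Request) {
		id := r.URL.Query().Get("id")
		if err := launchVM(id); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintf(w, "vm %s starting\n", id)
	})
	http.ListenAndServe(":8080", nil)
}
```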

If I were optimising, I'd first make sure the root filesystem copy happens on a CoW filesystem, then change the readiness polling to a "long" poll that's under the server's control. And next, buy a faster server :) But I'm running a relatively heavyweight userspace app, so I'm curious whether you see other gains to be had.
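
For anyone curious, those two optimisations look roughly like this on Linux (a sketch only, assuming golang.org/x/sys/unix; function names and paths are invented, and readiness is simulated):

```go
package main

import (
	"log"
	"net/http"
	"time"

	"golang.org/x/sys/unix"
	"os"
)

// cloneRootFS creates a copy-on-write clone of the base image via the
// FICLONE ioctl (what `cp --reflink=always` does). This only works when src
// and dst live on a reflink-capable filesystem such as btrfs or XFS; on
// plain ext4 it fails and you're back to a full byte-for-byte copy.
func cloneRootFS(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	return unix.IoctlFileClone(int(out.Fd()), int(in.Fd()))
}

// readyCh would be closed by whatever watches the guest (e.g. once
// OpenVSCode answers on its port). The long-poll handler parks the
// front-end's request until readiness or a timeout, instead of the client
// re-polling every second.
var readyCh = make(chan struct{})

func readyHandler(w http.ResponseWriter, r *http.Request) {
	select {
	case <-readyCh:
		w.Write([]byte("ready"))
	case <-time.After(25 * time.Second):
		w.WriteHeader(http.StatusRequestTimeout) // client simply re-issues the long poll
	}
}

func main() {
	if err := cloneRootFS("/srv/images/base-rootfs.ext4", "/srv/vms/demo/rootfs.ext4"); err != nil {
		log.Println("reflink clone failed (non-CoW filesystem?):", err)
	}

	// Stand-in for the real readiness watcher.
	go func() {
		time.Sleep(3 * time.Second)
		close(readyCh)
	}()

	http.HandleFunc("/ready", readyHandler)
	http.ListenAndServe(":8080", nil)
}
```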

Our application uses VS Code as an environment for programming exercises, so even 30s would have been fine given that people then use it for 1-2 hours.

If I could get it down to 2s I'd certainly enjoy testing it more!


Ahh fair. I was thinking along the lines of storing the 'base' VM as a memory snapshot and CoWing it per connection to save some initialization overhead perhaps...
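
In case it's useful, the restore side of that could look something like the sketch below, against Firecracker's snapshot API (PUT /snapshot/load over the VM's API socket). Exact JSON field names differ between Firecracker versions, all paths are placeholders, and per connection the memory file would itself be a CoW clone of the base snapshot so guest writes stay private.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"net"
	"net/http"
)

// restoreSnapshot asks an already-started Firecracker process to load a
// pre-built VM state + memory snapshot and resume immediately.
func restoreSnapshot(apiSock string) error {
	// Talk to the Firecracker API server over its unix socket.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", apiSock)
			},
		},
	}

	body := []byte(`{
		"snapshot_path": "/srv/snapshots/base/vmstate",
		"mem_backend": {"backend_type": "File", "backend_path": "/srv/vms/demo/mem"},
		"resume_vm": true
	}`)

	req, err := http.NewRequest(http.MethodPut, "http://unix/snapshot/load", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("snapshot load failed: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := restoreSnapshot("/srv/vms/demo/fc.sock"); err != nil {
		fmt.Println("restore failed:", err)
	}
}
```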



