What's the actual win here? Avoiding relay latency in the rare cases Tailscale can't punch through NAT? If that's it, a $3 VPS running Headscale seems simpler. The complexity feels like you're optimizing for the 5% case while adding permanent vendor lock-in. What am I missing?


Tailscale has what they call Peer Relays now to help solve this problem:

https://tailscale.com/blog/peer-relays-beta


A $3 VPS running Headscale is not simpler, since you can't run both Headscale and Tailscale on your end-user machines. I don't recommend it.

The solution we've found is running a container (or VPS) with a public IP that looks like regular WireGuard from the outside, while inside it "forwards" to your existing Tailscale network.

I don't remember if we use https://github.com/gravitl/netmaker or https://github.com/juhovh/tailguard

Also see: https://tailscale.com/blog/peer-relays-beta


For many homelabbers, the win is just being cheap and avoiding the $3 VPS. That's it.


Exactly. Just today I set up a Cloudflare Tunnel to a Docker Compose service running on my home server. I didn't want to expose the server directly to the internet, and I wanted to share this service on a specific domain with extended family.

I have a server at home that works well. I don’t reaaaally want to pay an extra $30-$40/yr and have an extra thing to manage when the CF tunnel works fine for free. I like Tailscale more, but I want to share this with family who won’t install TS and also want to use a specific domain.


I don't even pay anything; my tiny homelab is completely covered by the free tier


What provider still has a decent free tier?


Oracle. 4 vCPU, 24 GB RAM, 200 GB SSD. It’s arm64 but nowadays that doesn’t really matter.


That can't possibly be free?


https://docs.oracle.com/en-us/iaas/Content/FreeTier/freetier...

The only caveat I see is that they reserve the right to delete underutilized/idling instances


I guess they require a credit card before accessing that free tier?


If they do: create a virtual one, create an account with it and delete the card right after.


Honest advice being downvoted?

Thanks


I upvoted you.


That’s generous but Oracle is very generous.


Which region were you able to create this in? They seem to be out of capacity all the time in the EU.


What worked for me was handing them a credit card and transitioning myself out of the free tier. (I'd use the free credits they offer prior to doing this - they give you something like $300 immediately on signup.)

The always-free infra remains free; you just have the chance of incurring a bill if you make selections that aren't free or exceed the block storage/egress (200 GB/10 TB) limits of the always-free tier. Leaving the free/trial tier gives you access to a much larger pool of instances. I never successfully deployed an A1 instance prior to becoming a "paying" customer; now I've done it hundreds of times without ever having an issue.

I've been running a small k0s cluster and a standalone webserver for months while incurring about $2.50 - $3 in spending each month, primarily from being slow to remove instance snapshots sitting in block storage.

Even things that are oddly expensive on AWS - like NAT - are free on Oracle. There are zero gotchas.


I hit the same roadblock as the above user and it never occurred to me to just cross the barrier with cash and then scale back to free. Thanks for this.


It doesn't actually charge you anything. You just have to put a card down to be considered a priority, because now you potentially can spend money & therefore are more important than the other free-tier losers. /s It's still the free tier & still free.

The free tier is also based on capacity usage, not instances. If you want 3 cores on 1 machine & 1 on another, they're cool with that. I personally run Pangolin on a 1-core & self-hosted GitHub runners on a 3-core.


I have read that you need to write a script to constantly bombard their API in order to get one. I presume you'd be fighting other scripts.


What does Headscale have to do with NAT hole punching? I believe what you actually mean is setting up a relay; see the link in https://news.ycombinator.com/item?id=45948806 .


I don't think you are missing anything. They have a bunch of half-baked features like this that aren't as robust as what real security vendors offer, and they lock you in, just like you said.


Maybe I’m misunderstanding something…

But are you accusing someone of promoting vendor lock-in (cloudflare) while at the same time promoting vendor lock-in (tailscale)?

If you’re ok with vendor lock-in, shouldn’t you in theory be ok with any vendor?


Headscale is the non-vendor-lock-in version of Tailscale.


Sort of. Many of the Tailscale clients you would use with Headscale are closed source.


No. Not all vendors are equal. We can treat ProtonMail differently than Gmail, for example. Looking at what's gone down with VMware, definitely don't get in bed with Broadcom.


Did Quinn's flow control play nicely with OpenRaft's replication backpressure, or did you need custom coordination layers?


They mostly play nicely together because they operate at different layers. Quinn's flow control handles transport-level backpressure (the receiver can't consume bytes fast enough), which naturally surfaces to OpenRaft as slower RPC responses. OpenRaft then handles consensus-level backpressure by tracking replication progress and adjusting accordingly (e.g., switching to snapshots for lagging peers); it just works. The main benefit is that QUIC's built-in flow control means I didn't need the manual buffering logic you'd typically implement over raw TCP.
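
For a rough idea of what that looks like in practice, here's a minimal sketch of the sender side of a single RPC over a Quinn bidirectional stream. The helper name raft_rpc, the framing, and the size limit are just illustrative (this isn't OpenRaft's actual network trait), and it assumes a quinn 0.11-style API:

    use quinn::Connection;

    // Hypothetical helper: one bidirectional stream per RPC,
    // multiplexed by QUIC over a single connection.
    async fn raft_rpc(conn: &Connection, request: Vec<u8>) -> anyhow::Result<Vec<u8>> {
        let (mut send, mut recv) = conn.open_bi().await?;

        // If the peer is slow to consume, its flow-control window fills and
        // this await simply takes longer: that's the transport-level
        // backpressure, with no manual buffering on our side.
        send.write_all(&request).await?;
        send.finish()?; // sync in quinn 0.11; older versions await it

        // A slow response surfaces here, which OpenRaft observes as a slow
        // replication RPC and handles at the consensus layer.
        let response = recv.read_to_end(16 * 1024 * 1024).await?;
        Ok(response)
    }

The awaits are where the backpressure actually shows up; everything else is placeholder.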


What was the most surprising timing constraint you had to meet for the V20 bus controller? The 8088's multi-clock cycles were always underspecified in the original datasheets, and I'm curious what reality looked like.


The bus controller was fairly straightforward. I found the timing diagrams in the V20 datasheet pretty easy to follow. The clock for the CPU is generated by the FPGA; I derive it from a clock running at twice the speed, which made it easy to pick which clock edges I wanted to sample from or assert data for the CPU on. I did have a bug or two in there early on, which meant that memory writes would not store the data I was expecting because I sampled the data bus at the wrong time. That was a tricky one, as the bug would only become visible when that location was loaded from again. The CPU I'm using is rated for 16 MHz, but I've only tried to push it up to 10 MHz so far, so I have a bit of margin. Some more issues might pop up if I push the speed higher, I suspect. It's very stable as it is now, and I've not seen any issues I could attribute to the bus handling.


>I derived from a clock running at twice the speed

Aah, so CFG_ENABLE_20MHZ runs the CPU at 10 MHz. I was surprised you could run the bus at 20 MHz with this FPGA dev board; 10 makes more sense.

As I touched on in my HaD comment, this is not a good dev board :( It has a terrible pinout (https://github.com/wuxx/icesugar-pro/tree/master/schematic): all 100 signals huddled together in the middle, with only 9 ground pins among them. Very bad for signal integrity. What's worse, someone in China actually decided to sell it commercially :( Luckily you made it work.

