They are an alternative to the Tailscale-operated DERP servers, which are cloud relays.
Even with the much-touted NAT-punching capabilities of Tailscale, there are numerous instances where it cannot establish a true p2p connection. The last fallback is the quite slow DERP relay, and in my experience it gets used very often.
If you have a peer in your Tailscale network that has a good connection, and that you can maybe even expose to the internet with a port forward on your router, you now have a relay setting you can enable on it to avoid using the congested/shared DERP servers. So there isn't really a new use case here. It's the same, just faster.
What I think wasn't entirely clear in the post is how it actually works and why that's better.
From what I can tell, the situation is this:
1. You have a host behind NAT.
2. That NAT will not let you open ports via e.g. UPnP (because it's a corporate firewall or something like that), so other Tailscale nodes cannot connect to it.
3. You have another host with the same constraint, so neither host can open ports for the other to connect inbound.
The solution is to run a peer relay, which seems to be another (or an existing) Tailscale node that both of these hosts can reach via UDP; in this scenario it could be a third node you're already running, or a new one you set up on a separate network.
When the two NAT'ed hosts can't connect to each other directly, they can both connect to this relay node instead and communicate with each other through it.
Previously this was done via Tailscale's hosted DERP nodes: those nodes help Tailscale nodes find each other, but they can also proxy traffic in this hard-NAT situation. Now you can use your own node for that, which means you can position it somewhere that is more efficient for these two nodes to reach, and where you control the network, the bandwidth, the traffic, etc.
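As a concrete sketch (I'm going from memory of the announcement, so treat the exact flag name and the port number as assumptions rather than gospel), turning a node into a peer relay is roughly a one-liner; the node just needs to be reachable on that UDP port by both peers:

    # On the node that should act as the relay (good connectivity,
    # ideally with this UDP port forwarded/reachable from the internet):
    tailscale set --relay-server-port=40000   # port number is arbitrary here

    # The two hard-NAT'ed peers need no per-node config; whether they may
    # use the relay is controlled centrally in the tailnet policy file.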
Officially you are right: they release it as a stable OS after a few weeks of betas.
Unofficially, any serious user knows to stick to LTS for any production environment. In my experience, LTS releases are by far the most common versions I encounter in the wild and on customer deployments.
In fact I don't think I've ever seen anyone using a non-LTS version.
Canonical surely has these stats? Or someone operating an update mirror could infer them? I'd be curious what the real-world usage of the different Ubuntu versions actually is.
Microsoft did port some versions of Windows to Itanium, so they did not reject it at first.
With poor market demand and AMD's success with amd64, Microsoft dropped Itanium support in Vista and later desktop versions, which signaled the end of Intel's Itanium.
Microsoft also ships/shipped a commercial compiler with tons of users, so they were probably in a position to realize early that the hypothetical "sufficiently smart compiler" which Itanium needed to reach its potential wasn't actually feasible.
It uses the pgvector extension for search (I think?), so it's not as easy as just swapping the DB engine. With the provided docker compose file it's very manageable, and the default/recommended layout keeps all the data files in a single directory.
Keep in mind that they probably use it, or at least discovered it, precisely because it's open source. So either you don't release it and they use something else, or you release it and they use it. Option 2 sounds like it gives you more exposure and more opportunities in the long run.
Sure, it would be hard to monetize, and while it took countless hours to iron out many of its bugs, it is definitely not rocket science. I contribute to open source software expecting nothing in return, because all the software I use is also open source. It's my way of giving back, and I love knowing that it is useful to people and hearing about their projects. So far I haven't had any benefit from it, but I continue doing it anyway. It makes me happy to see more and more people using it.
I don’t really know what you’re talking about or how it’s relevant (I know you’re referring to an incident that happened outside of HN).
The guidelines clearly ask us to “be kind” and to not “fulminate”. Your comment was flagged by several community members so there’s a pretty strong consensus that it’s not what we want on HN.
That's...wow. What an absolutely disappointing reply.
If "be kind" is an actual value of HN I have yet to see it in almost 15 years of being active on this forum. This community is generally speaking not kind and fulminating is de rigeur as far as I've been able to surmise.
But whatever. I won't bother you or anyone else on here anymore. Good riddance.
You’re disappointed to discover after 15 years that we expect people to try to be kind and to avoid fulmination? I’m not sure what kind of forum you want this to be, but the guidelines have been pretty consistent about those things for much or all of that time, as has the approach to moderation. Yes people breach the guidelines all the time. That doesn’t stop us reminding people of them and continuing to aim for better. You’re always welcome here if you respect the community and want to make the effort to respect the guidelines.
(So far as I can tell I have never said the words "die slow" in my life. I think you have me confused with someone else; my hip hop conversance begins and ends with Tribe and De La Soul. It's "Thomas", by the way.)
I'm not advocating doing it for the exposure as a primary reason, and absolutely not advocating being paid in exposure. 100% agree with the comic there.
I should not have used that word. It is clearly charged with negativity.
Of course I wish everyone could be compensated for their work. I feel that for some types of project, publishing as open source is a great way for people to find and use it, and that can create new opportunities.
Exactly which kinds of project, and under which conditions, is up for debate.
I have worked on a few projects that I regret not being able to open source. It was mainly not my choice: stakeholders wanted traditional go-to-market strategies and failed/ran out of money trying to make sales. I can't help but think what other opportunities could have arisen had we chosen another strategy.
USB audio offloading. Only supported on Qualcomm SoCs.
The article is misleading. Audio offloading is probably only useful to avoid waking the main CPU too often, i.e. better battery management... the CPU can remain asleep a few microseconds longer, not all the time.
The Pixel 8a has 8 GB of RAM. That's not a low-end number for Android phones.
Probably more the fact that the memory and battery management logic assumes all user apps which are not in focus can be killed aggressively, which makes the system unsuitable for any background task by design.
Yeah, I have machines with 8 GB and I'm popping up VMs like there's no tomorrow without a problem.
Now, I'm not very knowledgeable about memory management on Android, but I just noticed that the 8 GB of the Pixel 8a is almost entirely used right after a reboot. So this is very different from a Linux desktop...
Android is not GNU/Linux. Yes, it may use the Linux kernel, but it has a completely different userland and a completely different, microkernel-inspired driver model built around Android IPC, and the kernel itself is heavily customised, even more so in the OEM builds.
Expecting Android to be just another Linux distro is exactly the root cause of many Termux developers' frustrations.
Yes, all of the background Google services that don't come with desktop Linux chew up a big chunk. You could look into disabling all non-critical services on the phone; there's probably lots of low-hanging fruit.
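For example (a rough sketch: the package name is just a placeholder, and some system packages refuse to be disabled without root), you can turn off individual apps/services per user over adb:

    # List installed packages, then disable one for the current user.
    adb shell pm list packages
    adb shell pm disable-user --user 0 com.example.unwanted.service

    # Re-enable it later if something breaks:
    adb shell pm enable --user 0 com.example.unwanted.service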
> The Pixel 8a has 8 GB of RAM. That's not a low-end number for Android phones.
I mean... Linux runs fine on a Gen 1 Raspberry Pi. I have several machines around me running Linux with 4 GB or less. Hell, my 3D printer only has 128 MB, my car has under a gig, and both run Linux fine. A low-end phone *should* be just fine...
They are using BGP and routing nodes (backbones), recreating a mini IP (layer 3) network I think.
I've used raw WireGuard in a p2p fashion to interconnect LANs. I run WireGuard on each segment directly inside the network routers.
Just make sure all LANs use a different subnet. A /24 is standard. Then configure all the peers and you get a fully peer-to-peer network. No relays. You only need one side of every peer "pair" to be reachable from the internet.
I do have a small management script to help with peer discovery (dynamic IPs) and key exchange, but it's not strictly required. With a dozen nodes or so, it's maintainable manually. WireGuard supports roaming natively, as long as one peer can reach the other.
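For anyone curious what that looks like, here is a minimal sketch of one router's side (interface name, keys, subnets, and endpoint are all placeholders I made up; firewall rules and per-peer keepalives are left out):

    # On router A, which fronts LAN 192.168.1.0/24
    cat > /etc/wireguard/wg0.conf <<'EOF'
    [Interface]
    PrivateKey = <router-A-private-key>
    Address    = 10.10.0.1/24        # router A's address on the overlay
    ListenPort = 51820

    # Router B: internet-reachable, fronting 192.168.2.0/24
    [Peer]
    PublicKey  = <router-B-public-key>
    Endpoint   = routerb.example.net:51820
    AllowedIPs = 10.10.0.2/32, 192.168.2.0/24
    EOF
    wg-quick up wg0

Because every site uses a distinct /24, the AllowedIPs lines effectively double as the routing table: wg-quick installs a route for 192.168.2.0/24 via wg0 automatically.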
I have my own WireGuard mesh network between my home network and a couple of VPSes. I configured it all manually, too. I'm basically running a virtual public network and have it routing a /24 (BGP-announced at the VPSes) back to my home.
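Roughly, the VPS side of that looks like the sketch below, assuming the tunnel interface is wg0 and using a placeholder prefix (the BGP announcement itself lives in bird/FRR and isn't shown):

    # Send the announced /24 down the WireGuard tunnel towards home
    ip route add 198.51.100.0/24 dev wg0
    # Let the VPS forward traffic for that prefix
    sysctl -w net.ipv4.ip_forward=1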
A little morbid, but have you considered setting up a beneficiary for the allocation or detailing this asset in a will? That's some special, virtual real estate you have there.
That is correct. IPsec sucks, but we have already paid the price of being forced to figure it out in big organizations, so there's not much motivation to figure out yet another thing.
I do the same; it's easy enough and doesn't require a ton of hosting logic.
Out of interest, how do you run your migrations in production? Deploy the service, then run an ad-hoc job with the same container again? That was one thing I was never super happy with.
In an ideal world your codebase is always compatible with the previous migration state.
So the new version can work with the previous version's DB schema.
Then, yes, simply run the migration in a transaction once the new code is deployed. Postgres has fully transactional DDL, which helps.
Of course, it heavily depends on the actual change being made. Some changes will require downtime, or must be avoided if they are too heavy on the DB.
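As an illustration of the additive, backward-compatible style (table and column names here are made up; in practice your migration tool generates the SQL):

    # Apply an additive change atomically; if anything fails,
    # Postgres rolls back the whole DDL.
    psql "$DATABASE_URL" <<'SQL'
    BEGIN;
    ALTER TABLE orders ADD COLUMN shipped_at timestamptz;  -- old code simply ignores the new column
    COMMIT;
    SQL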
Another approach, if the migration can be applied quickly, is to just run the migrations as part of the deployment script. This causes downtime, but it can be short.
Easiest is to just run the migrations in your docker image's start command, so the DB is always migrated when the container starts.
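A minimal sketch of that, assuming a Django-style app (the project name and serve command are placeholders for whatever your framework uses):

    #!/bin/sh
    # docker-entrypoint.sh: migrate, then start the app.
    set -e
    python manage.py migrate --noinput    # no-op if the schema is already current
    exec gunicorn myproject.wsgi --bind 0.0.0.0:8000

The caveat is that several replicas starting at once will race to run the same migrations, which is one reason people prefer a separate release/deploy step.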
tl;dr: It depends on the complexity of the migrations and your uptime requirements.