Hacker News | EnigmaCurry's comments

Sent from my Ipad


Same. Here's how I scratch the NixOS itch on Proxmox and/or libvirt[1]. One interface for both targets.

[1] https://github.com/EnigmaCurry/nixos-vm-template


That feature list looks really good. It would be nice to standardize the guest operating systems in such a way.

I actually have a few hosts that only run docker. I might be able to test with those.


> Many home routers try to preserve the source port in external mappings. This is a property called “equal delta mapping” – it won’t work on all routers but for our algorithm we’re sacrificing coverage for simplicity.

It is precisely this point that flummoxed me when connecting my p2p wireguard config[1] with a friend who uses a pfSense router: no matter what we tried, pfSense always chose a random source port.

But in the simple case this blog outlines, if both ends use the same source port, this method punches through two firewalls effortlessly.

[1] https://blog.rymcg.tech/blog/linux/wireguard_p2p/
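To make the simultaneous-send idea concrete, here's a minimal Python sketch. This is a localhost demo only (no actual NAT involved): each peer binds a fixed UDP source port and fires a small burst at the other, which is the same motion that opens matching mappings on two real firewalls.

```python
import socket

def punch(sock, peer_addr, payload=b"punch", tries=5):
    """Send a burst of UDP packets from a fixed local port toward the peer.

    If both sides do this with the same source port (and their NATs
    preserve it), each side's outbound burst opens a mapping that lets
    the other side's packets through.
    """
    for _ in range(tries):
        sock.sendto(payload, peer_addr)

# Demo on localhost: two "peers", each bound to a fixed port.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.1", 0))

punch(a, b.getsockname())
punch(b, a.getsockname())

b.settimeout(1)
data, addr = b.recvfrom(1024)
print(data)  # b'punch'
```

On real NATs the only extra requirement is that both sides learn each other's public IP out of band and keep using the same source port.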


In my experience, Cisco ASA does source port persistence by default (when it can't, it falls back to random), Fortigates can do it (in various ways depending on version, although the fallback method in the map-ports doesn't work), and Juniper SRXs can't, unless you guarantee a 1:1 map.


Does your friend setting up port forwarding on their pfSense not help in your scenario?


Yes, that solves it completely. But the exercise was to do it without that.


You're getting into birthday paradox territory: throw a few hundred packets in each direction and one will get through.

This has a good diagram to understand the options:

https://rajsinghtech.github.io/claude-diagrams/diagrams/netw...
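The birthday-paradox math is easy to check. A quick Python sketch (numbers are illustrative, assuming both sides pick uniformly at random from the 16-bit port space):

```python
def hit_probability(ports_a, ports_b, space=65536):
    """Probability that at least one of A's guessed destination ports
    matches one of B's open source-port mappings, with both sides
    choosing uniformly at random from `space` ports."""
    pairs = ports_a * ports_b
    return 1 - (1 - 1 / space) ** pairs

# A few hundred packets in each direction:
p = hit_probability(300, 300)
print(f"{p:.2%}")  # roughly 75%
```

So 300 packets each way already gives you better-than-even odds, and the probability climbs quickly from there.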


This is easily solved in your source NAT configuration on pfSense. It's a single checkbox to not randomize ports on outbound flows. This will enable full cone NAT.

You can scope it to just your IPsec service, or whatever it is you're hosting, or you can enable full cone for the whole subnet.

It is not DNAT, nor is it port forwarding. If you host a SIP proxy, an SBC, or peer-to-peer gaming, it will enable those use cases as well.

https://docs.netgate.com/pfsense/en/latest/nat/outbound.html


[flagged]


This is against the HN guidelines:

> Don't post generated comments or AI-edited comments. HN is for conversation between humans.

https://news.ycombinator.com/newsguidelines.html


We can all run this through our LLM of choice, so why post this?


Did you validate this solution yourself?


No, hence the all-caps AI disclaimer. But it seems plausible.


Lord, we're how many years into using LLMs, and people still don't understand that their whole shtick is to produce the most plausible output - not the most correct output?

The most plausible output might be correct, or it might be utter bullshit hallucinations that only sound correct; the only way to tell is to actually try it or cross-reference primary sources. Unless you do, the AI answer is worthless.

The reason why they're getting so good at code now is that they can check their output by running and testing it; if you're just prompting questions into a chatbot and then copying their output verbatim to a comment, you're not adding any meaningful value.


Exactly! This is what LLMs do: they bullshit you by coming across as extremely knowledgeable, but as soon as you understand 5% of the topic you realise you've been blatantly lied to.


Even if you get 70% blatant lies and 30% helpful ideas, if you can cheaply distinguish the two due to domain expertise, is that not still an extremely useful tool?

But to the point of this thread: If you can't validate their output at all, why would you choose to share it? This was even recently added to this site's guidelines, I believe.


But then why make this comment at all, even despite the disclaimer? Anyone can prompt an LLM. What's your contribution to the conversation?

To be clear, I use LLMs to gut check ideas all the time, but the absolute minimum required to share their output, in my view, is verification (can you vouch for the generated answer based on your experience or understanding), curation (does this output add anything interesting to the conversation people couldn't have trivially prompted themselves and are missing in their comments), and adding a disclaimer if you're at all unsure about either (thanks for doing that).

But you can't skip any of these, or you're just spreading slop.


You didn't even provide the exact model you pulled that out of!

"Seems plausible"... Can you please read up on how LLMs generate their output?


Use APIs for building things (and preparing for things to be built upon). Production.

Use CLI for exploration, coding, and learning. Greenfield.

In general, Declarative > Imperative.


GPD Pocket 4


> Anybody have something better?

I wrote this to publish Org docs to S3 - https://github.com/EnigmaCurry/s3-publish.el - I wanted something extremely lightweight, not even wanting to commit things to git like I normally would and waiting for CI to build something. Uploading html directly to S3 means it gets published in <1s from push.


That's neat! For org, if it had an option to generate the HTML file name from slugifying the org file name instead of the salted hash, it could be fantastic for rapid lightweight blogging.
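The slugging itself is trivial. Here's a sketch in Python of the naming rule I have in mind (a hypothetical function for illustration, not part of s3-publish.el):

```python
import re
from pathlib import Path

def org_to_html_name(org_path):
    """Slugify an Org file name into an HTML file name:
    'My Blog Post.org' -> 'my-blog-post.html'."""
    stem = Path(org_path).stem.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", stem).strip("-")
    return slug + ".html"

print(org_to_html_name("My Blog Post.org"))  # my-blog-post.html
```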


o-blog in the parent comment reminded me that I used to use a similarly named SSG called blogofile, and what do you know. Small world.


Yes, I used to use it too :) I think I've written three or four SSGs since then, haha.


This is at least a limitation in Android itself:

https://developer.android.com/reference/android/net/VpnServi...

"There can be only one VPN connection running at the same time. The existing interface is deactivated when a new one is created."

Note this is not about routing some traffic to the VPN and other traffic to the clear net. This is about running two VPN connections simultaneously.


Fair enough. I haven't used an Android device since 2017... Do people have these issues on iOS too?

On Linux, I have no problem running either bare wireguard or tailscale alongside Forticlient. On Windows and macOS it's a bit more janky, specifically the DNS resolution, but I don't daily drive these platforms so I may be missing some kind of knowledge to fix this.


On a linux box, is it possible to run tailscale/wireguard as an exit node along with Forti vpn?

Aka what I want to achieve is (my-machine + tail/wireguard) --> (server with tailscale/wireguard + forti vpn) --> Corporate network. So wireguard or tailscale to receive traffic and forward it through forti.

Or another option (my machine fortivpn over tail/wireguard) --> (server as exit node) --> corporate network

Rather than using the official forticlient I am using https://github.com/adrienverge/openfortivpn. It has some options to configure custom pppd/routes/dns etc. if necessary, which I have not touched as I don't know enough :P DNS resolution is not important for my use case, only traffic.


I have heard not so great things about Forti VPNs, sorry to hear you have to work with those.

In theory, as long as the Forti VPN does not overlap with the Tailscale IP address range, the simplest solution is to just run Tailscale and openfortivpn on a single node. You can then advertise the Forti VPN subnets within Tailscale, that's effectively what my image does as well in a nutshell, except that it's parsing the WireGuard config and setting up firewall rules for convenience.

Tailscale does NAT automatically by default, so it will look like all traffic is coming from the openfortivpn client itself.


When I just try to run tailscale and forticlient together naively, tailscale does not like it very much heh. Looks like I'll need to study what your image is doing in depth


I don't know about FortiClient specifically; it's a sorry piece of crap that's more often broken than not.

With openfortivpn, you can usually ignore whatever routes you receive and set up your own. I haven't tried the specific setup you talk about, but I don't see why it wouldn't work. However, you would most likely need to NAT on the machine running the Fortinet client.


Sounds like I'll need to learn how to set up custom routes and their syntax. I have tried to run away from it all my professional life, but maybe now I need to.

> However, you would most likely need to NAT on the machine running the Fortinet client.

Could you please elaborate a little more here? NAT from where to where?


Yeah, on Linux I can run 10 different VPNs (or 10 wg peers) no problem; this limitation of Android is super annoying to me. I think OP's solution is quite a good one for Android users.


Yeah you're exactly on point here, and this limitation exists on both iOS and Android alike. I got very frustrated with switching between VPNs and connections breaking every time I did that.



I feel like my Rust code takes 3x as long to write as my Python code, but the gpt results for rust are about 10x better, because the tooling is a backstop against hallucinations.

I really like the Rust tooling and I like exhaustive pattern matching. Python post-dev debugging time is probably 10x vs Rust. That's why I choose Rust.


How does this compare with iSCSI?


Like an IOPS improvement of 30%+ and a latency improvement of 20%+[1], ish.

[1]: https://www.reddit.com/r/Proxmox/comments/134kqy3/iscsi_and_...


There's no way to combine the NVMe drives into a larger sized unit for redundancy / failover though, so not sure what kind of future uptake this could have.


Everyone who uses NVMe-over-network-transport simply does redundancy at the client layer. The networking gear is very robust, and it is easier to optimize the "data plane" path this way (map storage queues <-> network queues) so the actual storage system does less work, which improves cost and density. That also means clients can have their own redundancy solutions that more closely match their requirements, e.g. filesystems can use block devices and implement RAID10 for e.g. virtual machine storage, while userspace applications may use them directly with Reed-Solomon(14,10) and manage the underlying multiple block devices themselves. This all effectively improves density and storage utilization even further.

NVMe-over-network (fabrics w/ RDMA, TCP, ROCEv2) is very popular for doing disaggregated storage/compute, and things like Nvidia Bluefield push the whole thing down into networking cards on the host so you don't even see the "over network" part. You have a diskless server, plug in some Bluefield cards, and it exposes a bunch of NVMe drives to the host, as if they were plugged in physically. That makes it much easier to scale compute and storage separately (and also effectively increases the capacity of the host machine since it no longer is using up bandwidth and CPU on those tasks.)
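A toy Python illustration of the "redundancy at the client layer" idea: a RAID1-style wrapper that mirrors every write across multiple backends and fails over on read. Real clients do this with md/LVM or erasure coding across remote NVMe namespaces; in-memory buffers stand in for devices here.

```python
import io

class MirroredDevice:
    """Toy RAID1-style client: every write goes to all backing devices;
    reads are served from the first healthy one. Real NVMe-oF clients
    apply the same idea (or erasure coding) across remote namespaces."""

    def __init__(self, backends):
        self.backends = backends

    def write(self, offset, data):
        for dev in self.backends:
            dev.seek(offset)
            dev.write(data)

    def read(self, offset, length):
        for dev in self.backends:
            try:
                dev.seek(offset)
                return dev.read(length)
            except (OSError, ValueError):
                continue  # fail over to the next replica
        raise OSError("no healthy backend")

# Two in-memory "remote drives":
raid1 = MirroredDevice([io.BytesIO(bytes(64)), io.BytesIO(bytes(64))])
raid1.write(0, b"hello")
print(raid1.read(0, 5))  # b'hello'
```

Because every replica holds a full copy, reads survive any single backend failure; the cost is write amplification, which is exactly the trade-off erasure coding schemes like RS(14,10) soften.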


Interesting. Sounds like it'll allow higher potential scalability, but also increase the cost (at the network layer) instead.

Probably a trade off that a lot of enterprise places would be ok with.


I’m not sure what you mean. You can add the disks to a software RAID in the worst case. Are you talking about on the host?


Yeah. It seems like directly presenting raw disks to the network means any kind of redundancy would need to be done by whatever device/host/thing is mounting the storage.

And doing that over the network (instead of over a local PCIe bus) seems like it'll have some trade-offs. :/

