
The firewall on your typical IPv4 router does basically nothing. It just drops all packets that aren’t a response to an active NAT session.

If the firewall somehow didn’t exist (not really possible, because NAT and the firewall are implemented by the same code), incoming packets wouldn’t be dropped, but they wouldn’t make it through to any of the NATed machines. From the perspective of any machine behind the router, nothing changes; they get the same level of protection they always got.

So for those machines, the NAT is inherently acting as a firewall.

The only difference is that incoming packets would reach the router itself (which really shouldn’t have any ports open on the external IP), hit a closed port, and the kernel would respond with a TCP RST (or an ICMP port unreachable). Sure, dropping is slightly more secure, but bouncing off a closed port really isn’t that problematic.


NAT gateways that utilize connection tracking are effectively stateful firewalls. Insisting that a separate set of ‘firewall’ rules does much good, when most SNAT implementations by necessity duplicate this functionality, is a bit ignorant, IMO.

Meanwhile, an IPv6 network behind your average Linux-based home router is 2-3 nftables rules to lock down in a similar fashion.
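
Roughly speaking (a minimal sketch, assuming eth1 is the LAN-side interface and the router's own input chain is handled separately), those rules look something like:

    table ip6 filter {
        chain forward {
            # Nothing is forwarded unless a rule below says otherwise
            type filter hook forward priority 0; policy drop;
            # Hosts on the LAN can initiate connections outward
            iifname eth1 accept;
            # Replies to tracked connections are let back in; everything else hits the drop policy
            ct state { established, related } accept;
        }
    }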


It's also trivial to roll your own version of dropbox. With IPv6 it's possible to fail to configure those nftables rules. The firewall could be turned off.

In theory you could turn off IPv4 NAT as well but in practice most ISPs will only give you a single address. That makes it functionally impossible to misconfigure. I inadvertently plugged the WAN cable directly into my LAN one time and my ISP's DHCP server promptly banned my ONT entirely.


> In theory you could turn off IPv4 NAT as well but in practice most ISPs will only give you a single address

So, I randomly discovered the other day that my ISP has given me a full /28.

But I have no idea how to actually configure my router to forward those extra IP addresses inside my network. In practice, modern routers just aren't expecting to handle this; there is no easy "turn off NAT" button.

It's possible (at least on my EdgeRouterX), but I have to configure all the routing manually, and there doesn't seem to be much documentation.


You should be able to disable the firewall from the GUI or CLI for Ubiquiti routers. If you don't want to deal with configuring static IPs for each individual device, you can keep DHCP enabled in the router but set the /28 as your lease pool.

> So, I randomly discovered the other day that my ISP has given me a full /28.

Where is this? Here new ISP customers don't even get a single IPv4 unless you beg for it.


Not even CGNAT?

In the US many large companies (not just ISPs) still have fairly large historic IPv4 allocations. Thus most residential ISPs will hand you a single publicly routable IPv4 address regardless of whether you're using IPv6 or not.

We'll probably still be writing paper checks, using magnetic stripe credit cards, and routing IPv4 well past 2050 if things go how they usually do.


Out of curiosity how did you discover this?

Went to double check what my static IP address was, and noticed the router was displaying it as 198.51.100.48/28 (not my real IP).

I don't think the router used to show subnets like that, but it recently got a major firmware update... Or maybe I just never noticed; I've had that static IP allocation for over 5 years. My ISP gave it to me for free after I complained about their CGNAT being broken for like the 3rd time.

Guess they decided it was cheaper to just give me a free static IPv4 address rather than actually looking at the Wireshark logs I had proving their CGNAT was doing weird things again.

Not sure if they gave me a full /28 by mistake, or as some kind of apology. Guess they have plenty of IPs now thanks to CGNAT.


More like even if they looked at the logs they aren't about to replace an expensive box on the critical path when it's working well enough for 99% of their customers.

I once had my ISP respond to a technical problem on their end by sending out a tech. The service rep wasn't capable of diagnosing and refused to escalate to a network person. The tech that came out blamed the on premise equipment (without bothering to diagnose) and started blindly swapping it out. Only after that didn't fix the issue did he finally look into the network side of things. The entire thing was fairly absurd but I guess it must work out for them on average.


> With IPv6 it's possible to fail to configure those nftables rules. The firewall could be turned off.

So what? It's not like you get SNAT without a couple netfilter rules either.

This argument doesn't pass muster, sorry. Consumer and SOHO gear should come with a safe configuration out of the box; it's not rocket science.


Did you even read the second paragraph of the (rather short) comment you're replying to? In most residential scenarios you literally can't turn off NAT and still have things work. Either you are running NAT or you are not connected. Meanwhile the same ISP is (typically) happy to hand out unlimited globally routable IPv6 addresses to you.

I agree though, being able to depend on a safe default-deny configuration would more or less make switching to IPv6 a drop-in replacement. That would be fantastic, and maybe things have improved to that level, but then again history has a tendency to repeat itself. Most stuff related to computing isn't exactly known for a good security track record at this point.

But that's getting rather off topic. The dispute was about whether or not NAT of IPv4 is of reasonable benefit to end user security in practice, not about whether or not typical IPv6 equipment provides a suitable alternative.


> But that's getting rather off topic. The dispute was about whether or not NAT of IPv4 is of reasonable benefit to end user security in practice, not about whether or not typical IPv6 equipment provides a suitable alternative.

And my argument is that the only substantial difference is the action of a netfilter rule being MASQUERADE instead of ACCEPT.

This is what literally everyone here, including yourself, continues to miss. Dynamic source NAT is literally a set of stateful firewall rules that have an action to modify src_ip and src_port in a packet header, and add the mapping to a connection tracking table so that return packets can be identified and then mapped on the way back.

There's no need to do address and port translation with IPv6, so the only difference to secure an IPv6 network is your masquerade rule turns into "accept established, related". That's it, that's the magic! There's no magical extra security from "NAT" - in fact, there are ways to implement SNAT that do not properly validate that traffic is coming from an established connection; which, ironically, we routinely rely on to make things like STUN/TURN work!
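
To put the two side by side (a rough sketch only; eth0/eth1 as WAN/LAN are just placeholder interface names):

    table inet example {
        chain outbound-nat {
            type nat hook postrouting priority 100;
            # IPv4: rewrite the source and add the mapping to the conntrack table
            meta nfproto ipv4 oifname eth0 masquerade;
        }
        chain forward {
            type filter hook forward priority 0; policy drop;
            iifname eth1 accept;
            # IPv6 (and IPv4 alike): no rewriting, just accept tracked return traffic
            ct state { established, related } accept;
        }
    }

The only protocol-specific line is the masquerade rule; the conntrack accept is what actually does the gatekeeping in both cases.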


> Dynamic source NAT is literally a set of stateful firewall rules that have an action to modify src_ip and src_port in a packet header, and add the mapping to a connecting tracking table so that return packets can be identified and then mapped on the way back.

Yes, and that _provides security_. Thus NAT provides security. You can say "well really that's a stateful firewall providing security because that's how you implement NAT" and you would be technically correct, but rather missing the point: turning NAT on has provided the user with security benefits, so being forced to turn it on prevents a less secure configuration. Thus in common parlance, IPv4 is more secure because of NAT.

I will acknowledge that NAT is not the only player here. In a world that wasn't suffering from address exhaustion, ISPs wouldn't have any particular reason to force NAT on their customers, so there would be nothing stopping you from turning it off. In that scenario consumer hardware could well ship with less secure defaults (i.e. NAT disabled, stateful firewall disabled). So I suppose it would not be unreasonable to observe that really it is usage of IPv4 that is providing (or rather forcing) the security here, due to address exhaustion. But at the end of the day the mechanism providing that security is NAT, so being forced to use NAT is increasing security.

Suppose there were vehicles that handled buckling your seatbelt for you and those that were manual (as they are today). Someone says "auto seatbelts improve safety" and someone else objects "actually it's wearing the seatbelt that improves safety, both auto and manual are themselves equivalent". That's technically correct but (as technicalities tend to go) entirely misses the point. Owning a car with an auto seatbelt means you will be forced to wear your seatbelt at all times thus you will statistically be safer because for whatever reason the people in this analogy are pretty bad about bothering to put on their seatbelts when left to their own devices.

> in fact, there are ways to implement SNAT that do not properly validate that traffic is coming from an established connection; which, ironically, we routinely rely on to make things like STUN/TURN work!

There are ways to bypass the physical lock on my front door. Nonetheless I believe locking my deadbolt increases my physical security at least somewhat, even if not by as much as I'd like to imagine it does.


The difference is that with IPv4 you know that you have that security, because there is no other way for the system to work, while with an IPv6 router you need to be a network expert to reach that conclusion.

Except, you don't.

Assume eth0 is WAN, eth1 is LAN

Look at this nftables configuration for a standard IPv4 masquerade setup

    table ip global {
        chain inbound-wan {
            # Add rules here if external devices need to access services on the router
        }
        chain inbound-lan {
            # Add rules here to allow local devices to access DNS, DHCP, etc, that are running on the router
        }
        chain input {
            type filter hook input priority 0; policy drop;
            ct state vmap { established : accept, related : accept, invalid : drop };
            iifname vmap { lo : accept, eth0 : jump inbound-wan, eth1 : jump inbound-lan };
        }
        chain forward {
            type filter hook forward priority 0; policy drop;
            iifname eth1 accept;
            ct state vmap { established : accept, related : accept, invalid : drop };
        }
        chain inbound-nat {
            type nat hook prerouting priority -100;
            # DNAT port 80 and 443 to our internal web server
            iifname eth0 tcp dport { 80, 443 } dnat to 192.168.100.10;
        }
        chain outbound-nat {
            type nat hook postrouting priority 100;
            ip saddr 192.168.0.0/16 oifname eth0 masquerade;
        }
    }
Note, we have explicit rules in the forward chain that only forward packets that either:

* Were sent to the LAN-side interface, meaning traffic from within our network that wants to go somewhere else

* Are part of an established, tracked packet flow, which in this simple setup means return packets from the internet

Everything else is dropped. Without these rules, if I were on the same physical network segment as the WAN interface of your router, I could simply send it packets destined for hosts on your internal network, and they would happily be forwarded on!

NAT itself is not providing the security here. Yes, the attack surface here is limited, because I need to be able to address this box at layer 2 (just skip ARP and send the TCP packet with the internal dst_ip I want, addressed to the ethernet MAC of your router), but if I compromised the routers of other customers on your ISP I could start fishing around quite easily.

Now, what's it look like to secure IPv6, as well?

    # The vast majority of this is the same. We're using the inet table type here
    # so there's only one set of rules for both IPv4 and IPv6.
    table inet global {
        chain inbound-wan {
            # Add rules here if external devices need to access services on the router
        }
        chain inbound-lan {
            # Add rules here to allow local devices to access DNS, DHCP, etc, that are running on the router
        }
        chain inbound-nat {
            type nat hook prerouting priority -100;
            # DNAT port 80 and 443 to our internal web server
            # Note, we now only apply this rule to IPv4 traffic
            meta nfproto ipv4 iifname eth0 tcp dport { 80, 443 } dnat ip to 192.168.100.10;
        }
        chain outbound-nat {
            type nat hook postrouting priority 100;
            # Note, we now only apply this rule to IPv4 traffic
            meta nfproto ipv4 ip saddr 192.168.0.0/16 oifname eth0 masquerade;
        }
        chain input {
            type filter hook input priority 0; policy drop;
            ct state vmap { established : accept, related : accept, invalid : drop };
            # A new rule here to allow ICMPv6 traffic, because it's required for IPv6 to function correctly
            icmpv6 type { echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept;
            iifname vmap { lo : accept, eth0 : jump inbound-wan, eth1 : jump inbound-lan };
        }
        chain forward {
            type filter hook forward priority 0; policy drop;
            iifname eth1 accept;
            # A new rule here to allow ICMPv6 traffic, because it's required for IPv6 to function correctly
            icmpv6 type { echo-request, echo-reply, destination-unreachable, packet-too-big, time-exceeded } accept;
            # We will allow access to our internal web server via IP6 even if the traffic is coming from an
            # external interface
            ip6 daddr 2602:dead:beef::1 tcp dport { 80, 443 } accept;
            ct state vmap { established : accept, related : accept, invalid : drop };
        }
    }
Note, there are only three new rules added here; the other changes are just so we can use a dual-stack table, so there's no duplication of the shared rules in separate ip and ip6 tables.

* 1 & 2: We allow ICMPv6 traffic in the forward and input chains. This is technically more permissive than it needs to be; we could block echo-request traffic coming from outside our network if desired. destination-unreachable, packet-too-big, and time-exceeded are mandatory for IPv6 to work correctly.

* 3: Since we don't need NAT, we just add a rule to the forward chain that allows access to our web server (2602:dead:beef::1) on ports 80 and 443 regardless of what interface the traffic came in on.

None of this requires being a "network expert". The only functional difference between an actually secure IPv4 SNAT configuration and a secure IPv6 firewall is not needing a masquerade rule to handle SNAT, and adding the traffic you want to let in to forwarding rules instead of DNAT rules.

Consumers would never need to see the guts like this. This is basic shit that modern consumer routers should do for you, so all you need to think about is what you want to expose (if anything) to the public internet.


I actually avoid most YouTube channels that upload too frequently. Especially with consistent schedules.

Even if I'm 100% certain it's not AI slop, it's still a very strong indicator that the videos are some kind of slop.


Content farms, whether AI generated or not, have an incentive to pump out a high volume of low-quality content. Most of their content, even if it involves a human narrator, is heavily packed with AI-generated media.

I also notice that people with lots of experience with computers will automatically reboot when they encounter minor issues (have you tried turning it off and on again?).

When it then completely falls apart on reboot, they spend several hours trying to fix it and completely forget the "early warning signs" that motivated them to reboot in the first place.

I think the same applies to updates. I know the time I'm most likely to think about installing updates is when my computer is playing up.


I try to do the opposite, and reboot only as a last resort.

If I reboot it and it starts working again, then I haven't fixed it at all.

Whatever the initial problem was is likely to still be present after reboot -- and it will tend to pop up again later even if things temporarily seem to be working OK.


> Whatever the initial problem was is likely to still be present after the reboot

You only know this after the reboot. Reboot to fix the issue and if it comes back then you know you have to dig deeper. Why sink hours of effort into fixing a random bit flip? I'll take the opposite position and say that especially for consumer devices most issues are caused by some random event resulting in a soft error. They're very common and if they happen you don't "troubleshoot" that.


With any system: when I can find and correct the problem out of the gate, it remains corrected and the issue does not recur.


How do you avoid sinking time into chasing illusory bugs?


It’s not that big when you consider many DC car chargers can deliver 0.25 MW.

So “only” 42 car-sized chargers for a massive boat; there are probably some massive Tesla Supercharger sites that approach that.


Yes, the actual bandwidth of the last-mile analog line was much, much higher. Hence why we eventually got 8mbit ADSL or 24mbit ADSL2+ running across it. Or even 50-300mbit with VDSL in really ideal conditions.

Though the actual available bandwidth was very dependent on distance. People would lease dedicated pairs for high bandwidth across town (or according to a random guy I talked to at a cafe: just pirate an unused pair that happened to run between their two buildings). But once we start talking between towns, the 32kbit you could get from the digital trunk lines was almost always higher than what you could get on a raw analog line over the same distance.


Yeah, I’m the same. I default to anyhow unless I need a strong API boundary (like if I’m publishing a library crate)

Sure, it’s slightly more error prone than proper enum errors, but it’s so much less friction, and much better than just doing panic (or unwrap) everywhere.


It seems very useful for archiving branches that never got merged.

Sometimes I work on a feature, and it doesn’t quite work out for some reason or another. The branch will probably never get merged, but it’s still useful for reference later when I want to see what didn’t work when taking a second attempt.

Currently, those abandoned branches have been polluting my branch list. In the past I have cloned the repo a second time just to “archive” them. Tags seem like a better idea.


I don’t think I’ve ever returned to a branch that I can easily rebase on top of the main branch. And if I really wanted to, I’d prefer to extract a patch so that I can copy the interesting lines.

Any branch older than 6 months is a strong candidate for deletion.


I sometimes leave merged branches around for quite a while, because I squash them when I merge to master, and sometimes when tracking down a bug the ability to bisect is very handy.


What made you decide to squash when merging instead of leaving the commits in the history so you can always bisect?


Not GP, but we do the same. Branches become the atomic unit of bisection in master, but the need is extremely rare. I think because we have good tests.

We also keep merged branches around. This has never happened, but if we needed to bisect at the merged-branch level, we could do that.

I know squash-merge isn't everyone's cup of tea, but I find it to be simpler and clearer for the 99+% case, and only slightly less convenient for the remainder.


The same reason your history textbook is not infinitely long. The longer something is, the less digestible. When we need more granularity, it's there in the branches.


Wonder if it's worth squashing in the branch, merging to main, then immediately reverting.

Now the work is visible in history, branch can be deleted, and anyone in the future can search the ticket number or whatever if your commit messages are useful.

Dunno if it's worth polluting history, just thinking out loud.


let it go. You will not bother fixing those to work with master.

It just moves trash from branch list to tag list.


Yeah, I think the author has been caught out by the fact that there simply isn’t a canonical way to encode h264.

JPEG is nice and simple; most encoders will produce (more or less) the same result for any given quality settings. The standard tells you exactly how to compress the image. Some encoders (like mozjpeg) use a few non-standard tricks to produce 5-20% better compression, but it’s essentially just a clever lossy preprocessing pass.

With h264, the standard essentially just says how decompressors should work, and it’s up to the individual encoders to work out how to make best use of the available functionality for their intended use case. I’m not sure any encoder uses the full functionality (x264 refuses to use arbitrary frame order without b-frames, and I haven’t found an encoder that takes advantage of that). Which means different encoders produce wildly different results.

I’m guessing moonlight makes the assumption that most of its compression will come from motion prediction, and then takes massive shortcuts when encoding iframes.


But you spend much less energy fighting gravity.

I’d expect it to have more range underwater than a typical quadcopter has through air. And much longer “flight” time.

But I doubt it gains enough to compete with a fixed wing drone using the same battery.


>I’d expect it to have more range underwater than a typical quadcopter has through air.

I would expect the opposite, with the higher drag being much more of an issue than gravity. But I would be interested to hear a definitive answer.


It never stopped.

Just takes backwards steps from time to time with major architectural innovations that deliver better performance at significantly lower clock speeds. Intel's last backwards step was from Pentium 4 to Core all the way back in ~2005. AMD's last backwards step was from Bulldozer (and friends) to Zen in 2017.

7GHz is ridiculous and probably just a false rumour, but IMO Intel and AMD are probably due for another backwards step; they are exceeding the peak speeds from the P4/Bulldozer eras. And Apple has proved that you can get better performance at lower clock speeds.


Intel's plan for the P4 was to scale to 10GHz. It's always been a race, but plans don't always work out.


And IBM were planning for the PS3's Cell processor to run at like 6GHz, with later versions scaling further. Though it's not like Sony were planning to ship the PS3 clocked that high; they were just expecting their 3-4GHz CPU to run much cooler than it did.

You can really see where the industry hit the wall with Dennard scaling.

