Re: simple security



On 3/23/10 3:02 PM, Lee Howard wrote:

> The simple-security draft represents the best practice we know of for securing home networks.

It's not a best-practice, it's a best-guess.

Simple-security is not being practiced at all on the vast majority of IPv6 residential connections today.

> It describes the behavior that should be the default for all home networking gateways. Advanced users who know what they're getting into can change those default rules.

I'll argue the contrary.

Advanced users know how to manually poke holes in firewalls, get the right version of UPnP or NAT-PMP running, and so on. Non-advanced users do not. It's the non-advanced users who need protocols to "just work".
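For concreteness, here is a minimal sketch of what "poking a hole" looks like under NAT-PMP (RFC 6886): the client sends a 12-byte UDP request to the gateway on port 5351, asking it to map an external port to an internal one. The ports, lifetime, and gateway address below are purely illustrative; this only builds the request bytes, it does not speak to a real gateway.

```python
import struct

NATPMP_VERSION = 0
OP_MAP_TCP = 2  # opcode 1 = UDP mapping, 2 = TCP mapping (RFC 6886)

def build_map_request(internal_port: int, external_port: int,
                      lifetime_secs: int) -> bytes:
    """Build a 12-byte NAT-PMP mapping request: version, opcode,
    16 reserved bits, internal port, suggested external port, lifetime."""
    return struct.pack("!BBHHHI", NATPMP_VERSION, OP_MAP_TCP,
                       0, internal_port, external_port, lifetime_secs)

# An advanced user's tool (or a library acting for them) would send this
# to the gateway, e.g.:
#   sock.sendto(build_map_request(8080, 8080, 3600), ("192.168.1.1", 5351))
req = build_map_request(8080, 8080, 3600)
assert len(req) == 12
```

The point stands either way: a non-advanced user will never do this by hand, so either the gateway speaks such a protocol reliably or their applications silently break.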

Firewalls make networking more frustrating, particularly for the non-advanced users.

> Some people argued that a stateful firewall is no longer needed because attackers no longer use vectors that a firewall protects against. This sounds like circular reasoning to me, as if you no longer need a roof because rain hasn't fallen on your head for years.

If everyone has an umbrella and rain-suit anyway, what good is the roof doing?

Yes, I know there are still OSes that will be compromised in a matter of seconds on the open Internet. These, however, do not run IPv6. With IPv6, we are really talking about Vista, Windows 7, Linux, and Mac OS X. All ship with IPv6 host firewalls (except Linux, I suppose), and with far more secure IP stacks than those of ten years ago. All have tethers back home for updates in the event that a new exploit is found. These host firewalls are far more adaptive and secure than the "IPv6 simple-security" firewall.

I don't want any of these new IPv6-enabled OSes to think for a moment that they can let their guard down just because they are plugged into a firewalled residential gateway "most of the time".

> It was also argued that attacks of this kind simply don't exist in IPv6. That sounds like the argument that faults in the space shuttle O-rings haven't caused explosions before, so it's safe.

Bad analogy. The O-ring problem wasn't caused by a hacker; it was human/engineering error in a complex system. A bug. Rather than protecting against bugs, firewalls add complexity and thus increase the possibility of having more bugs.

I'll also point out that OSes with smaller market share have fewer exploits written for them because they are a smaller target; as IPv6 exceeds 50%, there will be more attacks.

Simple-security loses its effectiveness considerably once a home has roaming devices. By the time IPv6 exceeds 50%, what do you think home networks will look like? The perimeter is getting very porous, and a "simple" firewall designed around the idea of a fixed home with stationary devices and a hard perimeter will be ineffective and obsolete.

This is why the only "firewall" I can consider for a moment to help with security is one that actively detects whether traffic from hosts inside the home, as well as traffic coming from outside it, constitutes a security breach. The "security cop" on the edge isn't so much a device trying to black-hole traffic initiated from one direction or another; it is watching the traffic to see whether devices in your home network are compromised or under attack. This even works when the attack vector comes in via an email attachment, because it can watch the traffic patterns of an infected host connecting back to its lair and shut it down accordingly.

"Simple-security" is "simple-minded". It is based on a security model that is rapidly becoming obsolete, and it comes at the cost of complexity in the RG, the host, and the applications that have to try to work despite all the various rules for having their packets dropped.

> I disagreed at the mike with the argument that ISPs should be doing this kind of filtering themselves. I'd like to understand that argument better. If ISPs should be providing stateful firewall service, then doesn't that support the need for a draft documenting what ISPs should do?


The problem with any draft defining what kind of security an ISP or a gateway should provide is that it is by definition a moving target. Security is, and always has been, an arms race.

Which is why I think that if we define anything, it should be the base rules and the interfaces for updating those rules in response to new threats. Any static document is going to be obsolete to the hackers before it even becomes an RFC.
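Those "base rules" amount to connection-tracked default-deny: outbound traffic creates state, and unsolicited inbound traffic is dropped unless it matches existing state. A toy sketch of that core, to make the discussion concrete (the `Packet` type, field names, and addresses are my own illustrative inventions, not anything from the draft):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:           # illustrative 5-tuple only, not a real packet parser
    src: str
    sport: int
    dst: str
    dport: int
    proto: str

class SimpleSecurityFilter:
    """Toy connection-tracking filter: outbound traffic creates flow state,
    and inbound traffic is permitted only if it matches existing state."""

    def __init__(self):
        self.state = set()  # flows initiated from inside the home

    def outbound(self, pkt: Packet) -> bool:
        self.state.add((pkt.src, pkt.sport, pkt.dst, pkt.dport, pkt.proto))
        return True  # outbound-initiated traffic is always permitted

    def inbound(self, pkt: Packet) -> bool:
        # A reply matches a recorded flow with the src/dst roles reversed.
        return (pkt.dst, pkt.dport, pkt.src, pkt.sport, pkt.proto) in self.state

fw = SimpleSecurityFilter()
fw.outbound(Packet("2001:db8::10", 50000, "2001:db8:1::1", 443, "tcp"))
assert fw.inbound(Packet("2001:db8:1::1", 443, "2001:db8::10", 50000, "tcp"))
assert not fw.inbound(Packet("2001:db8:2::bad", 12345, "2001:db8::10", 22, "tcp"))
```

The static part is that small; everything contentious lives in the update interface, i.e. who is allowed to add entries to that table, by what protocol, and how the rules evolve as threats do.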

> Yes, hosts should provide better security for themselves.

One reason the older ones don't is that NAT firewalls provided protection for them during a time when hosts were mostly stationary and not updated regularly. Today, hosts must be able to deal with operating in a multitude of environments. The idea that I have a home with no roaming clients, or that anyone can ship an OS that cannot survive without a firewall protecting it, is very much a ten-year-old reality.

If there is one advantage IPv6 does give us, it is that it lets us draw a line between old IP stacks (and the OSes they are attached to) and new ones. It wouldn't be sensible for us not to exploit that.

> In some regions, users install three or four security packages on their computers, but even there almost 50% of machines are infected. Blocking the easiest paths to exploits using perimeter security is current best practice, and should be documented as such.

Those regions probably need advanced security, not simple-security. Simple-security with IPv6 probably isn't going to help that much; the hackers will still find their way in, as the infection rate makes clear.

- Mark