On 3/21/10 12:59 AM, Shane Amante wrote:
If you take most of these types of arguments to their logical end, I
think you'll find that the only real defense here is keeping the
end-devices secure. Particularly those that travel outside of the home
and can be infected while away, etc. Given that there is little or no
real data on whether IPv6 residential firewalls actually fend off
hackers, we are responding to a lot of hypothetical analysis and
conjecture. Much of it, in the end, serves to make the user, ISP, or
whoever is responsible "feel" more secure (insert analogies of airport
security pat-downs here).
On Mar 20, 2010, at 16:55 MDT, Fred Baker wrote:
Rate-limiting unsolicited inbound connections rather than rejecting them provides greater end-to-end transparency while still providing protection against address and port scanning attacks as well as overloading of slow links or devices within the home.
Silly question. Why do you believe that? An address or port scanning attack is not intended to overload a network; it is intended to find an address or port that can be used or attacked.
The sentence references two different things. One is the possibility that uninvited packets might overload some device or slow link (something I consider unlikely, but which has been identified as a concern by some); the other is, indeed, the intent to make blind port and address scanning less likely to succeed in a given amount of time.
Making the scan take more time doesn't prevent it from reaching its target. In what way does rate limiting an address or port scan provide protection?
With all due respect, when I set up security for my home or office, and when Cisco InfoSec sets up security for its home offices (of which mine is one), they're not asking about making an attack take ten minutes vs five. They're asking about preventing an attack from succeeding. Now, we can discuss the logic of the notion that "the bad guys are out there and the good guys are in here", but given the assumption, we need to be rational about the threats we are trying to prevent from succeeding.
Agreed. This proposal seems like a poor half-measure. IMHO, if a client/host wants real end-to-end transparency, then it should use something like NAT-PMP to explicitly signal to the upstream FW that the client is wise and mature enough to deploy its own, onboard FW to protect itself. Although some people may think that's a bad idea, because a malware-infected PC/device could signal an upstream FW to open up everything, I personally think that's a very poor argument.
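For readers who haven't looked at the protocol: the NAT-PMP signaling being discussed is just a tiny UDP message from the client to the gateway asking it to open a port mapping. A minimal sketch of building such a mapping request, as I understand the NAT-PMP spec (the helper name here is mine, not from any library):

```python
import struct

def natpmp_map_request(internal_port, external_port, lifetime_secs, proto="tcp"):
    """Build a NAT-PMP port-mapping request packet.

    Layout (12 bytes): version (0), opcode (1 = map UDP, 2 = map TCP),
    16-bit reserved field, internal port, suggested external port,
    and requested mapping lifetime in seconds.
    """
    opcode = 2 if proto == "tcp" else 1
    return struct.pack("!BBHHHI", 0, opcode, 0,
                       internal_port, external_port, lifetime_secs)

# The client would send this over UDP to the default gateway on port 5351
# and parse the gateway's response to learn the actual external port granted.
```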
So, we're probably not going to get very far on technical arguments.
What the rate-limiting does is ensure that only a few packets per
second get on the inside. If you are a valid application trying to make
a connection to a valid open port known in advance, that is all you
need. If you are a hacker without that information, you probably need to
try a few more times.
This, importantly for whether someone "feels" safe, offers a setting
somewhere between "wide open" and "completely closed". I'd say a lot of
users fall in that category.
Namely, once a PC/device is infected, it's game-over, anyway.
And the attack vectors are often above L4 these days.
I wish I could conjure up confidence that a single, interoperable,
secure, ubiquitously available, FW control protocol will exist for IPv6.
Given that we haven't been able to achieve that for IPv4, I find it hard
to believe we will for IPv6. Assuming it will happen is little more than
wishful thinking at this stage.
Specifically, remote attackers already have the ability to remotely control that PC and perform further reconnaissance and attacks both /inside/ and outside the LAN. Second, if technically savvy people still don't like the idea of NAT-PMP operating on their FW, then they will be smart enough to disable it altogether on their (personal) FWs.