Re: [idn] Re: character tables
Paul Hoffman wrote:
At 12:03 PM +0000 3/2/05, Gervase Markham wrote:
Could you tell us more about the problems you found with the ideas of
bundling and blocking?
It was impossible to come up with a bundling scheme that kept everyone
happy. The needs of the Chinese language communities for bundling were
different than the needs of the Scandinavian language communities, which
in turn were different than the needs of the Indic language communities,
which were different than the needs of the Arabic language communities,
and so on. Then toss in the communities that truly want multiple scripts
but want to avoid homograph attacks (yes, we really did think about that
years ago...), and your brain starts dripping from your ears.
Yes, as a long-time internationalization engineer, I can imagine that it
was difficult to come up with a single set of guidelines for all of the
world's registries. (In addition to language differences, some comments
on this list have led me to believe that there are also protocol
differences between the registries, e.g. VeriSign's multiple versions of
RRP vs. the EPP that Edmon Chung seems to have been working on, vs. fax
and sneaker net, vs. any others?)
However, I note that this particular conversation is between a browser
developer (Gervase) and one of the IDNA authors (Paul), neither of whom
is a registry representative, so why exactly are you two having this
conversation? :-)
Sorry, I'm half joking. Half, because you two have every right to
discuss whatever you wish; the other half, because I believe browser
developers can afford to focus more on their end of things. Allow me to
insert an excerpt from a previous email I wrote up:
-----------------
It is pretty clear that none of the organizations can completely solve
the problem on its own. The Unicode Consortium can warn about these
issues, but that is all it can do; it cannot remove characters. The IETF
is currently
discussing the prohibition of certain characters or character types.
Even if the IETF publishes updated versions of the specs, there will
still be the problem of certain characters being unfamiliar to many
users (simply because they do not know all the legitimate characters in
the world), thereby leaving them exposed to the phishers. The registries
can enforce rules at their level, but nobody has yet shown that they can
truly enforce any rules at other levels. So, the browser developers must
address that problem.
There are several issues here. One is that domain names are typically
displayed inside something else, e.g. a URI. This, in itself, gives the
phishers something to work with. So the browser developers must think
about other ways to display domain names. This is not very easy. People
exchange URIs via email and other means all the time. Apps turn those
URIs into clickable links as a service to users; when they do not, users
copy and paste the URI into the URI field themselves. Both of these
paths could be improved to highlight the domain name in the interests of
security.
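
To make this concrete, here is a rough sketch (in Python; the names and
the bracket rendering are mine, purely for illustration, not a
description of how any existing browser does it) of isolating the host
portion of a URI so that a UI could style it differently from the rest:

    from urllib.parse import urlsplit

    def split_for_display(uri):
        """Return (before, host, after) so a UI can style the host part."""
        host = urlsplit(uri).hostname or ""
        if not host:
            return uri, "", ""
        # urlsplit lowercases the hostname, so search case-insensitively;
        # the netloc may also carry userinfo and a port, which phishers
        # like to abuse.
        start = uri.lower().find(host)
        end = start + len(host)
        return uri[:start], uri[start:end], uri[end:]

    before, host, after = split_for_display(
        "http://paypal.com.evil.example/login")
    print("%s[%s]%s" % (before, host, after))
    # -> http://[paypal.com.evil.example]/login

The splitting is the trivial part; the interesting question is how the
three pieces get rendered so that the host actually stands out to the
user.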
Another problem is that humans are only familiar with a small set of
characters. Some people know *many* characters (e.g. readers of the East
Asian scripts), but most know far fewer. Now, within the set of characters
that each user is familiar with, there are no homograph problems (or
just a few). However, as soon as you stray outside any single user's
familiar set, there are many homographs, near-homographs and unfamiliar
symbols. When a typical computer user is faced with something
unfamiliar, they are quite likely to shrug it off and assume it's just
one of those "computer" things that they cannot understand. This is
something that IDN phishers could take advantage of, if the browsers do
not take steps to highlight the unfamiliar characters (via HTTP
Accept-Language and browser localization as I suggested). Of course,
highlighting is not sufficient. Education is also very important.
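
As a very rough illustration of the kind of check I have in mind (the
per-language ranges below are invented just for this example; a real
browser would need real repertoire data, not two hard-coded entries):

    import unicodedata

    # Invented, incomplete "familiar characters" tables, keyed by language.
    FAMILIAR_RANGES = {
        "en": [(0x0020, 0x007E)],                    # ASCII
        "ru": [(0x0020, 0x007E), (0x0400, 0x04FF)],  # ASCII + Cyrillic
    }

    def unfamiliar_chars(label, lang):
        """Characters in a decoded IDN label that fall outside the
        ranges assumed familiar to a user of the given language."""
        ranges = FAMILIAR_RANGES.get(lang, [(0x0020, 0x007E)])
        return [(ch, unicodedata.name(ch, "UNKNOWN"))
                for ch in label
                if not any(lo <= ord(ch) <= hi for lo, hi in ranges)]

    # "p\u0430ypal" looks just like "paypal" to an English reader, but
    # the Cyrillic letter gets flagged for them:
    print(unfamiliar_chars("p\u0430ypal", "en"))
    # -> [('а', 'CYRILLIC SMALL LETTER A')]

Whether the right response to a hit is to highlight the character, fall
back to the xn-- form, or put up a warning is a UI question; the point is
only that the check is made against the *user's* repertoire.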
So, instead of wasting time talking about a non-solution (white/black
lists), it would be nice to see these parties spending their valuable
time on real solutions. The registries could be working on the
guidelines, to address the concerns about language tagging, variants and
so on. They could also get in touch with the IETF, to let them know
which Unicode characters and character types they wish to use, so that
the IETF can consider how to publish new specs that might prohibit other
characters. Browser developers could start working on ways of displaying
domain names that give the phishers less to work with.
---------------------
In other words, I do not think browser developers need to be overly
concerned with the particular bundling/blocking schemes that the
registries might be using. Instead, I wish the browser developers would
focus more on the *user*, who may be "surfing" from one site to the
next, spanning the globe, and crossing language boundaries. In order to
protect such a user, the browser should focus on the core set of
characters that s/he is familiar with, and provide some sort of
indication when unfamiliar characters appear, so that the
security-conscious, educated user may know when to be careful. That is,
the language of the *user* is what matters, not the language of the
domain name.
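
For what it's worth, the Accept-Language side of this is the easy part.
Here is a small sketch (my own parsing, nothing normative about it) that
turns the header into an ordered list of language tags, which is what a
per-language table like the one sketched above would be keyed on:

    def parse_accept_language(header):
        """Return language tags from an Accept-Language header,
        sorted by q-value, highest preference first."""
        tagged = []
        for item in header.split(","):
            parts = item.strip().split(";")
            tag = parts[0].strip().lower()
            q = 1.0
            for p in parts[1:]:
                p = p.strip()
                if p.startswith("q="):
                    try:
                        q = float(p[2:])
                    except ValueError:
                        q = 0.0
            if tag:
                tagged.append((q, tag))
        return [tag for q, tag in sorted(tagged, key=lambda t: -t[0])]

    print(parse_accept_language("da, en-gb;q=0.8, en;q=0.7"))
    # -> ['da', 'en-gb', 'en']

The hard part is everything after that: deciding what character
repertoire each tag implies, and what the browser actually does with the
answer.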
I am *not* saying that this would be easy to implement. I am not at all
surprised that Mozilla and Opera have chosen an easy stopgap, hopefully
only for the interim. It's great to see Mozilla and Opera lead the way
as they have been!
Erik