[AktiviX] Register article
Drew
polaw at leeds.ac.uk
Mon Mar 8 09:40:03 UTC 2004
Interesting article from the Register at
http://www.theregister.co.uk/content/55/36033.html - sorry for
those who may have already seen it: what do people think?
Drew
Analysis
There are several reasons why open-source software
provides for superior computer and network security, but the
computing public seems confused about why this is so.
Many attribute the security advantage to the very fact of openness.
It's long been popular to cite the "many eyes" theory, which holds
that flaws are discovered and fixed because selfless programmers
spend countless hours carefully combing through the source code
and alerting the development teams. In this way, we're told, the
mere fact that source code is available leads to enhanced security.
Wishful thinking
Actually, the people most likely to spend hour after hour
reviewing source code are blackhats looking for a novel exploit.
Code review is hardly the only way to attack, but it is obviously
more difficult with closed-source software. Some attacks against
closed-source systems have been discovered through reverse
engineering, a tedious and not entirely dependable process; but
reversing is difficult, and only a minority of attackers are capable
of it. Having the source code at one's disposal is a convenience.
Security through obscurity can work to a point because fewer
attackers are capable of reversing a closed product profitably. Of
course, once the obscurity is lost, the code becomes a target for a
larger number of attackers. And because it is meant to be closed, it
may possess more bugs and security holes than code that's meant
to be examined freely.
Due diligence
There are advantages to openness, though not the one most often
cited. Open source developers have got to be more careful and
security-conscious than their closed-source counterparts. This
encourages a better product overall.
There is a corresponding disadvantage in closed-source software:
obscurity may inconvenience blackhats a bit and help limit the
number of potential attackers, but it works only so long as
obscurity is maintained. Secrecy can be useful, but it is a fragile
defense. Once the code is released, the software becomes an easier
target than it once had been; but because it was developed with the
assumption that it would not be released, it is likely to be sloppier
and easier to exploit than code developed with the assumption that
world+dog will be welcome to review it. Closed-source products
aren't necessarily inferior; but, human nature being what it is, they
often turn out inferior. Few people are as diligent as possible when
they can easily conceal their shortcuts and mistakes.
Another security advantage in openness is the fact that end users
can place more confidence in their applications, utilities and
clients when the source code is available for review by anyone
who wishes to examine it. It is simply impossible to conceal
spyware, adware and secret phone-home capabilities in products
that can be examined freely.
So there are indeed a couple of security virtues in openness itself:
knowing that the source has got to be made public encourages
good work habits among developers, and malware functions can't
be concealed. And yes, the code can be reviewed and bugs
discovered before exploits are developed, though this is chiefly a
matter of wishful thinking. The blackhats are likely to be well
ahead of the whitehats when it comes to security-oriented code
review.
One word: modular
It's beyond dispute that open source systems are potentially more
secure than Windows, but the most important advantages don't
come from openness per se. They come instead from the
coincidence that open source systems, like Linux and BSD, are
modeled on UNIX, which is designed in a more modular fashion
than Windows. Such systems are more transparent to the user or
administrator, and have far fewer interdependencies - two factors
that are exceptionally good for security.
The deep integration and multiple interdependencies among
Windows components are a major security challenge in their own right. It is
deep integration that makes the scores of exploits against the
Internet Explorer browser so serious and difficult to fix, for
example.
When a Windows component or application is broken, it is often
because Windows itself is broken. Fixing a flaw affecting one
component can reveal related flaws in numerous others. And
sometimes, other components will have been developed with
dependencies on the actual bug, so that by fixing Windows, you
may well break several applications that depend on code that was
never right to begin with. This is what makes Windows patch
development so difficult and time consuming.
On the other hand, the more modular architecture of UNIX and its
open source cousins enables developers to fix a major component
without needing to re-work the kernel, or re-work other system
components dependent on flawed code buried in the guts of the
operating system. This is why bugs in open source components,
such as the Apache Web server or the Mozilla browser, can be
fixed in a matter of days, while corresponding Windows
components, like the IIS Web server or Internet Explorer, might
take weeks or even months to sort out.
Services
Another advantage of having fewer system interdependencies is
the ability to simplify and harden a system by disabling
unnecessary services and networking components. For example,
Microsoft has made Windows dependent on RPC (Remote
Procedure Call), a service that enables one machine to execute
code on another. It cannot be disabled, though it should not be
enabled unless it's needed, especially on an Internet-connected
machine. On UNIX-style systems, RPC can be disabled without
penalty. Other examples that come to mind are Terminal Services,
which is unnecessary and potentially insecure on many machines,
but upon which a handy service called Fast User Switching
depends; and Client for Microsoft Networks, on which PGP
(Pretty Good Privacy), a nice third-party security application,
depends.
There are about fifteen other questionable Windows networking
services that, while not necessary for other essential components
to function, are enabled by default (e.g., DCOM, which enabled
the Blaster worm to spread recently). On the other hand, I can
think of no potentially insecure daemon that can't be disabled on a
*nix system, or a networking feature that can't be safely
uninstalled. And most Linux distros enable very few such
daemons by default (though The Register did find recently that the
new Xandros Desktop went a bit over the top in that department).
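The practice described here, shutting off daemons one does not need, might look like this on a classic *nix system of the era, where inetd-managed services are disabled by commenting out a line of plain text. The file contents and service names below are illustrative stand-ins, not taken from the article; a real system would edit /etc/inetd.conf in place as root.

```shell
# Illustrative inetd-style config (a stand-in for /etc/inetd.conf).
cat > inetd.conf.sample <<'EOF'
ftp     stream  tcp  nowait  root  /usr/sbin/in.ftpd     in.ftpd
telnet  stream  tcp  nowait  root  /usr/sbin/in.telnetd  in.telnetd
EOF

# Disable the telnet daemon by commenting out its line; inetd would
# then be signalled to reload (e.g. with kill -HUP) and would stop
# listening on that port.
sed -i 's/^telnet/#telnet/' inetd.conf.sample

grep telnet inetd.conf.sample
```

On a real machine one would also confirm with netstat that the port is no longer open, which is exactly the kind of verification the article says Windows makes impossible for services like RPC.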
Isolation
Windows XP is the first multiuser Windows system intended for
home use. Unfortunately, it can easily be set up as a single-user
system, with the owner running as root (or the Administrator in
Redmond parlance). I would imagine that most XP systems not set
up by a professional admin or a power user are running from the
admin account by default, because most users are unaware of the
security benefits of isolating, or sandboxing, users. The chief
benefit is that malware run from a user account has fewer
privileges, and therefore less impact on the system overall.
Microsoft has stuffed up the multiuser environment even further
by enabling powerful code like ActiveX controls to access the guts
of the system when run from a user account, undermining the
inherent security of the user sandbox.
And because Windows administration is so GUI-dependent, it is
often necessary to log in to the admin account to get anything
accomplished, which again encourages home users to work from it
by default. UNIX-like systems can be administered easily from a
user account, either by switching to root in a shell, or by
running a GUI admin interface such as SuSE's YaST or Mandrake's
DrakX as root.
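The habit recommended here, doing daily work from an unprivileged account and escalating only for administration, can even be enforced by admin scripts themselves. The guard below is a hypothetical sketch (the messages and the su suggestion are mine, not the article's): it checks the effective UID and refuses to run an administrative step for an ordinary user.

```shell
# Refuse to perform an administrative step unless running as root
# (UID 0); an ordinary user is pointed at su rather than being let
# through with half-working, half-dangerous results.
if [ "$(id -u)" -ne 0 ]; then
    echo "this step needs root: re-run it via su -c"
else
    echo "running as root: proceeding with the admin step"
fi
```

This is the *nix counterpart of the GUI-dependence problem described above: because escalation is a one-line su away, there is no pressure to stay logged in as root all day.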
Transparency
UNIX-like systems are transparent to users and administrators. It's
easy to see what processes are running, and to understand the
dependencies among them. A Windows system often has scores of
processes running, and it is often difficult to determine what effect
killing one will have on another. Windows also stashes data in
obscure locations, making data hygiene difficult to practice: the
Registry, the databases maintained by the Indexing Service, the
famous index.dat files, and more. Configuration
files are difficult to locate, often
unreadable, and options must be chosen with proprietary tools like
Regedit and GUI interfaces. UNIX-like systems, on the other
hand, use simple text configuration files that are easy to locate,
and that can be edited, and even write protected, easily.
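As a concrete illustration of the point about text configuration: an option is changed with any editor or a one-line sed, and the file is then write-protected with ordinary permissions, with no Regedit equivalent required. The filename and option below are invented for the example.

```shell
# A stand-in plain-text config file with one option set.
printf 'PermitRootLogin yes\n' > app.conf.sample

# Change the option by editing text...
sed -i 's/^PermitRootLogin yes$/PermitRootLogin no/' app.conf.sample

# ...and write-protect the result with standard file permissions,
# the easy write protection the article mentions.
chmod 444 app.conf.sample

cat app.conf.sample
```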
Overall, the UNIX family of systems is designed to be
immensely easier to monitor, simplify, and administer for
security. They feature fewer interdependencies, more
transparency, and better isolation of users.
So, while openness provides a couple of security advantages in
itself, the chief reason why Linux and BSD offer superior security
is not so much because they're open source, but because they're
not Windows.
--
Drew Whitworth, School of Computing, University of Leeds, UK.
Tel: +44 (0)1422 844579. E-mail: polaw at leeds.ac.uk
http://www.tangentium.org/ - alternatives in IT, politics and
society.