The Three Security Architectures

This is a simple HTMLization of an initial draft I sent out via email; I haven't produced a better version yet.

Date: Fri, 23 Jun 2000 01:18:45 -0400 (EDT)
From: Kragen <kragen@kirk.dnaco.net>
To: kragen-tol@kragen.dnaco.net
Cc: eros-arch@eros-os.org
Subject: the three security architectures

This is a draft. It probably needs to be shortened by half and mostly rewritten. Comments are welcome --- indeed, urgently needed.

There are essentially three architectures used to secure computer systems: the out-of-band security architecture, the principal security architecture, and the capability security architecture. These architectures take shape in non-computer security systems as well.

The out-of-band security architecture does not rely on software to maintain security; instead, it relies on other factors, such as an air gap, a big gun, the innate goodness of human nature, financial incentives, or the ignorance or stupidity of adversaries. It can be very effective, but I will not consider it further in this post.

The principal security architecture encodes policies describing who can do what in software. Each piece of running software represents one or more principals; when it tries to take some action, the security system consults its policies to determine whether the principals represented by the software are entitled to take that action. If not, the action is refused.

The capability security architecture unifies privileges with names. The ability to name an action conveys the privilege to perform it, if the action is possible at all.

All three of these architectures are combined in all existing computer systems. The rest of this article is devoted to comparing and contrasting the capability and principal architectures.

My claim is that the principal architecture is inherently inferior to the capability architecture, for the following four reasons.

First, though, I want to look at how the architectures differ with regard to namespaces.

Namespaces

Computer programs manipulate finite-length strings of bits. Sometimes they manipulate fixed-length strings of bits; sometimes they manipulate variable-length strings of bits. Sometimes they manipulate strings of bits that are only one bit long. But that's all they can manipulate --- strings of bits.

Given sufficient storage to store it, and somewhere to get it from, any computer program can manipulate any string of bits. It is impossible to create a string of bits that can be manipulated by one program but not another, or one computer but not another, unless it's too long to fit in one computer or the other.

Anything a program can do to one string of bits, it can do to another string of bits of the same length, if it wants to. It's impossible, in general, to create a string of bits that can't be stored in memory, written to disk, negated, etc.

(Actually, in many cases, the underlying hardware has trouble with particular bit patterns: punching paper tape with too many ones weakens it and can cause it to break, and recording too many zeroes in succession on a magnetic medium can make it unreadable due to loss of synchronization. Hardware designers have gone to enormous effort to create encoding schemes that let programs treat all strings of bits alike.)

Whenever a computer program wants to do something other than sling around strings of bits, it has to do it by outputting strings of bits. These strings of bits are interpreted --- either by digital hardware or by other computer programs --- as a request to perform some action.

For the purposes of this post, a ``namespace'' is a particular system of interpretation of strings of bits. Slam a string of bits into a namespace, and a requested action pops out. Namespaces can be very simple --- for example, a 1 bit can turn a light on, while a 0 bit turns it off --- or very complex --- for example, the URL namespace maps variable-length strings of tens, hundreds, even thousands of bits into requests to perform almost any conceivable action, mostly involving retrieving electronic documents.

Each place you can send a string of bits uses some kind of namespace to decide what to do with those bits.

You can build a namespace out of other namespaces; for example, a URL is part of an HTTP request, so knowing the HTTP request namespace --- being able to carry out HTTP requests --- requires knowing the URL namespace.

Some kinds of namespaces are also called ``protocols''.

Some namespaces are concrete, in the sense that each meaningful name in the namespace corresponds to some physical object. For example, an address in a computer's physical memory address space corresponds to eight microscopic capacitors and eight microscopic transistors on one of the SIMMs plugged into the computer's motherboard. The corresponding address in the virtual memory space of a process likewise typically corresponds to the same set of components, or to some space on the computer's hard disk.

To be pedantic, each physical address corresponds to the action of checking other memory control lines and then either reading or writing a memory cell depending on their state.

Other namespaces are not concrete, in the sense that a particular name may correspond to no physical object in particular until it is used.

Principal and Capability Systems in Terms of Namespaces

A principal system has a single namespace, accessible to all programs, naming all the actions to which the system controls access. Any program can request any action by sending the string of bits that names that action to the appropriate place; security is enforced by refusing requests for actions the program is not authorized to perform.

A capability system has a single global namespace for all actions too, but security is enforced by preventing programs from requesting actions they are not authorized to perform, rather than by refusing such requests after they are made.

There are two ways I know of to do this: a weak capability system depends on unauthorized programs not knowing the names of actions they are not authorized to perform, while a strong capability system does not let programs talk to the global namespace at all --- instead, it gives each program a private namespace, and a trusted intermediary controls which global names are mapped into that private namespace.

The line between the two is not absolute; the distinction is more one of philosophy than of implementation, and each is often implemented in terms of the other.
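To make the private-namespace idea concrete, here is a minimal sketch, in C, of the bookkeeping a trusted intermediary might do for one program. All of the names in it (struct object, grant(), lookup()) are invented for illustration; they are not any real API.

    /* Illustrative only: the trusted intermediary's bookkeeping for one
       program's private namespace. */

    #include <stddef.h>

    struct object;                          /* something in the global namespace */

    #define MAX_NAMES 16

    struct private_namespace {
        struct object *slot[MAX_NAMES];     /* private name -> global object */
    };

    /* Only the intermediary may call this; the program itself cannot. */
    int grant(struct private_namespace *ns, struct object *obj)
    {
        for (int i = 0; i < MAX_NAMES; i++)
            if (ns->slot[i] == NULL) {
                ns->slot[i] = obj;
                return i;                   /* the program learns only this small integer */
            }
        return -1;                          /* no free private names */
    }

    /* When the program asks to act on private name n, the intermediary
       looks it up; a name that was never granted simply means nothing. */
    struct object *lookup(struct private_namespace *ns, int n)
    {
        if (n < 0 || n >= MAX_NAMES)
            return NULL;
        return ns->slot[n];
    }

Unix file descriptors, discussed below, have exactly this shape: the private names are small integers, and the trusted intermediary is the kernel.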

Examples of Principal and Capability Systems

Traditional memory protection is an example of a strong capability system. Each virtual memory address is mapped to a physical address and permission bits by the MMU, and these mappings are controlled by the kernel.

Early versions of OS/2 on the 286 used a weak capability system for part of their security; they didn't use the permission bits. If a process had a mapping for a dynamically-linked library, it could read, write, or execute any of the memory locations in the DLL. However, the DLLs were mapped at unpredictable locations; a program trying to corrupt a DLL would theoretically have to guess its memory address and try to write to it. If it guessed wrong, it would crash.

(This is how Letwin's Inside OS/2 tells the story. To me, it seems that there must be a missing link there; how could you call functions in the DLL if you didn't know where it was mapped? Or, failing that, if you didn't at least know where to call code that knew where it was mapped?)

Passwords are another example of weak capability-based security; anyone who knows the right password has access to whatever the password controls access to.

Unix file descriptors are another example of a strong capability system.

Most of Unix's filesystem security is principal-based; when a process requests to open a file, the system looks up the file in the global filesystem namespace (modulo cwd and chroot) and determines whether the process's principals have permission to open it in the requested way.

However, if the OS determines that the request should be allowed, it allocates a record to keep information about the state of the file and writes a pointer to that record into an array associated with that process. Then, it hands the index into the array to the process as the return value of the open() system call; it's called a ``file descriptor''. Win32 weenies call it a ``Handle''.

When the system receives a read or write request, it uses the file descriptor in the request to index into the requestor's file descriptor array, then uses the pointer there. There is no way for a process to even request to read or write a file another process has open; there is simply no file descriptor or other name that would have that meaning. Actually, in Linux, that's a lie; the file descriptors can be mapped into the filesystem in /proc/*/fd/*, and usually are.

So the read and write requests do not need any permission checking.
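A small C program shows the shape of this. The principal-style check happens once, inside open(); after that the process holds only a small integer, and read() never consults the file's permission bits again. (/etc/motd is just a convenient world-readable file to use as an example.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[128];

        /* The principal-style check happens here: the kernel resolves the
           global name "/etc/motd" and decides whether *this* process's
           principals may read it. */
        int fd = open("/etc/motd", O_RDONLY);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        /* From here on we hold a capability: a small integer indexing our
           private descriptor table.  read() does no further permission
           checking; an fd we were never given simply names nothing. */
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }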

Furthermore, file descriptors are more flexible than global filenames; one Unix process can pass an open file descriptor over a socket to another process, even if that other process would not have been permitted to open that file in the first place.
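The mechanism for this is the SCM_RIGHTS ancillary message on a Unix-domain socket. Here is a minimal sketch of the sending side, assuming an already-connected socket; the receiving side calls recvmsg() symmetrically, and the descriptor it unpacks is a fresh index in its own table.

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Sketch: send an already-open file descriptor over a connected
       Unix-domain socket.  The kernel installs the descriptor in the
       receiver's own descriptor table --- the capability moves, not the
       global filename. */
    int send_fd(int sock, int fd)
    {
        char dummy = '\0';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        union {
            char buf[CMSG_SPACE(sizeof(int))];  /* room for one descriptor */
            struct cmsghdr align;               /* forces correct alignment */
        } ctrl;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = ctrl.buf, .msg_controllen = sizeof ctrl.buf,
        };

        memset(ctrl.buf, 0, sizeof ctrl.buf);
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }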

 

The Web's security is based almost entirely on the principal architecture. There's a global namespace of URLs, including the majority of documents produced in human history; no effort is made to keep URLs secret. Whether or not you are allowed to perform some action on a document depends on who you are (usually as authenticated by your knowledge of some string of bits, a weak capability system, but sometimes authenticated using other means), not whether you know the name of the action.

 

Whether or not I can drive my car is determined mostly by a capability system: do I have the right key to the car? The key --- or, at least in the abstract, the knowledge of the shape of the key --- enables access to the car.

Comparisons

Principal systems are ineffective without a reliable means of authenticating identity, which usually means either biometrics or weak capability systems.

Capability systems are ineffective when capabilities can be easily stolen. In today's world, it is at least theoretically possible to steal someone's car by photographing their key through a zoom lens as they get into their car and manufacturing a duplicate.

Strong capability systems are ineffective without the trusted intermediary controlling the mappings.

Principal systems make delegation difficult. If my car used biometrics to identify me to decide whether to start, I would have a harder time lending it out; I'd have to modify its ACL, or possibly change its owner temporarily, instead of just tossing my keys to my friend. Then I'd have to change it back later.

In a principal system, you can give the name of an action to someone; this is rather like telling them where your car is parked.

Likewise, if I want to enable other people to read my files without an existing capability system, I must write and run a file-server process which reads the files on their behalf. This is inconvenient. I'm much more likely to just give them my password so they can impersonate me.

Once a capability is granted --- typically, based on some kind of principal or out-of-band system --- it is not necessary to check privilege for each access. Sometimes, however, revoking capabilities is more difficult than revoking a principal's access.

 

The principle of least privilege says that, for maximum security, a program should not be able to do anything it doesn't need to. For example, it should not be able to write to files it doesn't need to be able to write to; it should not be able to determine the existence or nonexistence of files or processes it doesn't need to know about.

While applying the principle of least privilege to people is only rarely justifiable, applying it to programs is usually justifiable. Unfortunately, every program needs a different set of privileges; accordingly, practicing this principle in a principal system requires the creation of a new principal for every program. The result is that it is rarely practiced in principal systems, resulting in unnecessary security vulnerabilities.

 

The confused deputy problem is an interesting and subtle problem that arises in principal security architectures when a program acts on behalf of more than one principal, accepting information about what actions to take from multiple sources.

Suppose I test-drive cars for a living in a biometric future where everyone's car has ACLs. The protocol goes like this: I go to meet a car manufacturer; they tell me which parking space contains the car they want me to test-drive, and add me to its ACL; I go, drive the car, return it, and report on the experience.

In this fantasy, I'm a rich car buff, and so my own car is a really snazzy Ferrari Garibaldi.

I come up to the Pontiac plant one day to test-drive their new, secret sports car. I go inside and meet the guys. They tell me their new car is a lot like a Garibaldi; this is a surprise to me, because I'd heard rumors it was a really lousy lemon. They tell me where it's parked; I go out and drive it.

It turns out that it is, in fact, a great deal like a Garibaldi; it's uncannily like my own car. I give it a terrific review.

Just after I send the article to my publisher, I realize I've been duped. They didn't have a car at all! They just told me where my own car was parked; I was so stupid I went out and drove it. Their actual car sucks. I wasted gas and lost credibility.

Well, I probably wouldn't be that stupid in reality. But computer programs are.

If they had given me keys instead of just adding me to their car's ACL, I would have gone out to try the car and given up when the keys they supplied wouldn't start it. Of course. Only my keys can start my car. They don't have my keys. But, in the biometric principal-based world, I can't tell if the car is starting because I own it or because it's being lent to me.

This situation arises frequently in Unix security; someone runs a privileged program, telling it to read such-and-such a file as a configuration file, ostensibly on their behalf. Such-and-such is actually a file the attacker doesn't legitimately have access to, such as /etc/shadow; the program innocently reads it and complains that it isn't in the right format, quoting it in the process.

If you have to give the program the keys to the file, instead of just telling it where to find it, it won't read any file for you that you couldn't read yourself.
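On Unix, the everyday way of handing over the keys rather than the name is to hand over an already-open file descriptor --- for instance, by making the program read its configuration from standard input. A sketch of the two shapes, with invented function names:

    #include <stdio.h>

    /* Confused-deputy-prone shape: a privileged program opens whatever
       global name the less-privileged caller supplies, using its own
       authority to do so. */
    FILE *config_by_name(const char *user_supplied_path)
    {
        return fopen(user_supplied_path, "r");  /* checked against *our* privileges */
    }

    /* Capability-style shape: the caller opens the file with their own
       authority and hands us the open stream, e.g. on standard input
       ("privileged_prog < some-config-file").  We can never be tricked
       into reading something the caller could not read themselves. */
    FILE *config_by_descriptor(void)
    {
        return stdin;
    }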

Recently, some programs have been storing the identity of the supplier of the filename along with the filename itself --- binding privilege and name into one entity --- and using seteuid() or setfsuid() to tell the operating system on whose behalf they accessed each file. This is an example of implementing a strong capability system atop a principal system; unfortunately, doing this requires that the program in question run as root.
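Here is a rough sketch of that pattern, using the Linux-specific setfsuid(); struct named_file and open_as_supplier are invented names, and a real program would be far more careful about error handling and about restoring its identity.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/fsuid.h>          /* setfsuid() is Linux-specific */
    #include <sys/types.h>
    #include <unistd.h>

    /* Bind the name to the principal who supplied it. */
    struct named_file {
        uid_t supplier;             /* who asked us to open this name */
        const char *path;           /* the name they supplied */
    };

    /* Open the file with the supplier's authority rather than our own.
       This only works if we are running as root. */
    int open_as_supplier(const struct named_file *nf)
    {
        setfsuid(nf->supplier);     /* filesystem checks now use their uid */
        int fd = open(nf->path, O_RDONLY);
        setfsuid(0);                /* restore root's filesystem identity */
        return fd;
    }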

Recently, on the Web, it was discovered that a browser visiting two web sites --- a good web site to which its user is known as an administrator, and an evil web site --- can often be persuaded to delete the contents of the good web site on behalf of the evil one. This is yet another incarnation of the confused deputy problem.

 

It is fairly easy to implement a principal-based system atop a capability-based system; a single capability can convey access to all the capabilities owned by a principal. It is fairly difficult to implement a capability-based system atop a principal-based system, and doing so generally requires superuser or administrator access, because it requires constant creation of new principals. See also the discussion of chroot below.

 

Unix includes a chroot() call to restrict a process's view of the filesystem. This is a crucial tool for implementing the principle of least privilege. Unfortunately, chroot() is not accessible to normal users; making it accessible would eliminate the security provided by the OS by enabling any user to fool any setuid program into doing anything at all.

In general, setuid programs trust particular parts of the filesystem; for example, typically, invoking a program begins by running the dynamic linker. The filename of the dynamic linker is included inside the executable file, typically something like ``/lib/ld.so.2.1''.

chroot() allows you to provide any dynamic linker at all; by inserting an evil program in /home/mallet/gw/lib/ld.so.2.1, linking a setuid-root program like /bin/passwd into /home/mallet/gw/bin, and running passwd chrooted to /home/mallet/gw, Mallet can fool the system into running his evil program as root.
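In code, the hypothetical attack looks something like the sketch below --- hypothetical because, today, the chroot() call fails with EPERM unless you are already root.

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical: what Mallet could do if chroot() were available to
       ordinary users.  He has prepared /home/mallet/gw in advance, with a
       hard link to the setuid-root /bin/passwd at gw/bin/passwd and his
       own code installed as gw/lib/ld.so.2.1. */
    int main(void)
    {
        if (chroot("/home/mallet/gw") == -1) {
            perror("chroot");       /* today: EPERM for a non-root caller */
            return 1;
        }
        chdir("/");

        /* The setuid bit on bin/passwd is still honoured, but the "dynamic
           linker" named inside the executable, /lib/ld.so.2.1, now resolves
           to Mallet's program --- which the kernel runs as root. */
        execl("/bin/passwd", "passwd", (char *)NULL);
        perror("execl");
        return 1;
    }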

Similarly, /bin/su trusts the password and shadow files at /etc/passwd and /etc/shadow; if you can run /bin/su with arbitrary passwd and shadow files in their place, you can become anyone you like.

In a capability system, executables can contain capabilities to things like the dynamic linker and the password file, instead of containing globally-valid filenames.

Conclusion

Capability-based security makes certain kinds of delegation easier with no extra effort on the part of the implementor of the security system, although a sufficiently complex ACL-based principal system can provide the same ability. It makes implementing the principle of least privilege much easier. It reduces the amount of CPU time spent determining whether a particular access is valid or not.


<kragen@pobox.com>       Kragen Sitaker     <http://www.pobox.com/~kragen/>
The Internet stock bubble didn't burst on 1999-11-08.  Hurrah!
<URL:http://www.pobox.com/~kragen/bubble.html>
The power didn't go out on 2000-01-01 either.  :)