Plan 9 from Bell Labs

[[[ms
.LP
Rob Pike
.br
Dave Presotto
.br
Sean Dorward
.br
Bob Flandrena
.br
Ken Thompson
.br
Howard Trickey
.br
Phil Winterbottom
]]]
[[[ebook
<p>Rob Pike<br/>
Dave Presotto<br/>
Sean Dorward<br/>
Bob Flandrena<br/>
Ken Thompson<br/>
Howard Trickey<br/>
Phil Winterbottom</p>
]]]

Appeared in a slightly different form in ‥Computing Systems, Vol 8 #3, Summer 1995, pp. 221-254.‥

# Motivation

By the mid 1980's, the trend in computing was
away from large centralized time-shared computers towards
networks of smaller, personal machines,
typically UNIX “workstations”.
People had grown weary of overloaded, bureaucratic timesharing machines
and were eager to move to small, self-maintained systems, even if that
meant a net loss in computing power.
As microcomputers became faster, even that loss was recovered, and
this style of computing remains popular today.

In the rush to personal workstations, though, some of their weaknesses
were overlooked.
First, the operating system they run, UNIX, is itself an old timesharing system and
has had trouble adapting to ideas
born after it.  Graphics and networking were added to UNIX well into
its lifetime and remain poorly integrated and difficult to administer.
More important, the early focus on having private machines
made it difficult for networks of machines to serve as seamlessly as the old
monolithic timesharing systems.
Timesharing centralized the management
and amortization of costs and resources;
personal computing fractured, democratized, and ultimately amplified
administrative problems.
The choice of
an old timesharing operating system to run those personal machines
made it difficult to bind things together smoothly.

Plan 9 began in the late 1980's as an attempt to have it both
ways: to build a system that was centrally administered and cost-effective
using cheap modern microcomputers as its computing elements.
The idea was to build a time-sharing system out of workstations, but in a novel way.
Different computers would handle
different tasks: small, cheap machines in people's offices would serve
as terminals providing access to large, central, shared resources such as computing
servers and file servers.  For the central machines, the coming wave of
shared-memory multiprocessors seemed obvious candidates.
The philosophy is much like that of the Cambridge
Distributed Computing System [NeHe82].
The early catch phrase was to build a UNIX out of a lot of little systems,
not a system out of a lot of little UNIXes.

The problems with UNIX were too deep to fix, but some of its ideas could be
brought along.  The best was its use of the file system to coordinate
naming of and access to resources, even those, such as devices, not traditionally
treated as files.
For Plan 9, we adopted this idea by designing a network-level protocol, called 9P,
to enable machines to access files on remote systems.
Above this, we built a naming
system that lets people and their computing agents build customized views
of the resources in the network.
This is where Plan 9 first began to look different:
a Plan 9 user builds a private computing environment and recreates it wherever
desired, rather than doing all computing on a private machine.
It soon became clear that this model was richer
than we had foreseen, and the ideas of per-process name spaces
and file-system-like resources were extended throughout
the system—to processes, graphics, even the network itself.
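The flavor of a per-process name space can be conveyed with a toy model. The sketch below is hypothetical illustration, not Plan 9 code: it mimics a union directory, where several directories are bound to one mount point and searched in order, so a user can shadow a system binary with a private one (roughly what Plan 9's bind with the -b and -a flags arranges in a real, kernel-maintained name space).

```python
# Toy model of a Plan 9 union directory (illustrative only; real
# name spaces are per-process kernel state, not Python objects).
class UnionDir:
    def __init__(self):
        self.layers = []            # earlier layers shadow later ones

    def bind(self, layer, before=True):
        # before=True mimics bind -b (searched first);
        # before=False mimics bind -a (appended, searched last)
        if before:
            self.layers.insert(0, layer)
        else:
            self.layers.append(layer)

    def lookup(self, name):
        # walk the layers in order; the first match wins
        for layer in self.layers:
            if name in layer:
                return layer[name]
        raise FileNotFoundError(name)

bin = UnionDir()
bin.bind({"ls": "/bin/ls"})                         # system directory
bin.bind({"ls": "/usr/me/bin/ls"}, before=True)     # private directory shadows it
```

Because every process (or process group) holds its own such table, two users' identical-looking paths may resolve to entirely different resources.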

By 1989 the system had become solid enough
that some of us began using it as our exclusive computing environment.
This meant bringing along many of the services and applications we had
used on UNIX.  We used this opportunity to revisit many issues, not just
kernel-resident ones, that we felt UNIX addressed badly.
Plan 9 has new compilers,
languages,
libraries,
window systems,
and many new applications.
Many of the old tools were dropped, while those brought along have
been polished or rewritten.

Why be so all-encompassing?
The distinction between operating system, library, and application
is important to the operating system researcher but uninteresting to the
user.  What matters is clean functionality.
By building a complete new system,
we were able to solve problems where we thought they should be solved.
For example, there is no real “tty driver” in the kernel; that is the job of the window
system.
In the modern world, multi-vendor and multi-architecture computing
are essential, yet the usual compilers and tools assume the program is being
built to run locally; we needed to rethink these issues.
Most important, though, the test of a system is the computing
environment it provides.
Producing a more efficient way to run the old UNIX warhorses
is empty engineering;
we were more interested in whether the new ideas suggested by
the architecture of the underlying system encourage a more effective way of working.
Thus, although Plan 9 provides an emulation environment for
running POSIX commands, it is a backwater of the system.
The vast majority
of system software is developed in the “native” Plan 9 environment.

There are benefits to having an all-new system.
First, our laboratory has a history of building experimental peripheral boards.
To make it easy to write device drivers,
we want a system that is available in source form
(no longer guaranteed with UNIX, even
in the laboratory in which it was born).
Also, we want to redistribute our work, which means the software
must be locally produced.  For example, we could have used some vendors'
C compilers for our system, but even had we overcome the problems with
cross-compilation, we would have difficulty
redistributing the result.

This paper serves as an overview of the system.  It discusses the architecture
from the lowest building blocks to the computing environment seen by users.
It also serves as an introduction to the rest of the Plan 9 Programmer's Manual,
which it accompanies.  More detail about topics in this paper
can be found elsewhere in the manual.

# Design

The view of the system is built upon three principles.
First, resources are named and accessed like files in a hierarchical file system.
Second, there is a standard protocol, called 9P, for accessing these
resources.
Third, the disjoint hierarchies provided by different services are
joined together into a single private hierarchical file name space.
The unusual properties of Plan 9 stem from the consistent, aggressive
application of these principles.
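To make the second principle concrete, the sketch below encodes a 9P2000 Tversion request, the message that opens every 9P conversation. The field layout (size[4] type[1] tag[2] msize[4] version[s], little-endian, with size counting the whole message) follows the 9P2000 wire format; the Python itself is an illustrative sketch, not part of the system.

```python
import struct

# Illustrative encoder for a 9P2000 Tversion request.
# Wire layout: size[4] type[1] tag[2] msize[4] version[s],
# all little-endian; size includes its own four bytes.
TVERSION = 100      # message type number for Tversion in 9P2000
NOTAG = 0xFFFF      # Tversion carries the special NOTAG tag

def tversion(msize=8192, version=b"9P2000"):
    # version[s] is a 2-byte count followed by the string bytes
    body = struct.pack("<BHIH", TVERSION, NOTAG, msize, len(version)) + version
    return struct.pack("<I", 4 + len(body)) + body

msg = tversion()
# size(4) + type(1) + tag(2) + msize(4) + count(2) + "9P2000"(6) = 19 bytes
```

Every resource operation in the system — opening a window, reading a process's memory, dialing a network connection — ultimately reduces to a small set of such messages, which is what lets one protocol serve files, devices, and services alike.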

A large Plan 9 installation has a number of computers networked
together, each providing a particular class of service.
Shared multiprocessor servers provide computing cycles;
other large machines offer file storage.
These machines are located in an air-conditioned machine
room and are connected by high-performance networks.
Lower bandwidth networks such as Ethernet or ISDN connect these
servers to office- and home-resident workstations or PCs, called terminals
in Plan 9 terminology.
Figure 1 shows the arrangement.

(The full paper is available in the Plan 9 sources)