Time-Sharing Visions and Recollections

Fernando J. Corbató

November 4, 1998

[Presented at the 1998 C & C Award Ceremony in Tokyo, Japan]

Good afternoon President Tadahiro Sekimoto, Distinguished guests, and Ladies and Gentlemen. It is indeed a great pleasure to receive a 1998 C&C Award today. The award citation describes work done building two pioneering time-sharing systems, CTSS and Multics. Thus it is also a pleasure to accept this award for the recognition that it gives the teams of people who made those systems possible. We were all younger then, very dedicated and driven by a vision of showing the world that there was a better way to use computers.

When I first received notice of the C&C award, I was immediately reminded that in 1985 I was asked by MIT Press to review a manuscript they were considering publishing, written by the long-time leader of NEC, Dr. Koji Kobayashi, and entitled "Computers and Communications: A Vision of C&C". The book was published that year, and it should be no surprise that my review was a favorable one! What struck me then was the willingness of the chief executive of a major corporation to describe a remarkably ambitious vision of a future in which computers and communication were so intimately intertwined that his dream of a translating telephone system became possible. In such a system each person in an international conversation would speak and hear in his own language, without hesitations or pauses. It was a bold goal, and one that few executives of major corporations, then or now, would have had the courage or technical depth to describe.

As we know, such a translating telephone system is still not attainable, but it is in striving toward such goals that progress is made. And the vision remains an inspiring one as the world continues to become a smaller and more connected place.

When we did our work on time-sharing, first with CTSS and later with Multics, we too had a dream, a dream of man-machine interaction. To explain this properly, I have to take you back in time to the beginning of the modern computer revolution.

The early computers began to be widely used in the 1950's, when they first became commercially available. Initially they were used like gigantic personal computers, but they were very expensive to buy and operate. For example, an IBM 704 of that era cost about as much as a large commercial jet aircraft. (They were also impressively huge, consisting of maybe 25 to 50 shower-stall-sized boxes in a vast room.) Thus it was not too surprising that great effort was made to get more efficient use of each expensive machine, especially as more and more people began to discover the interesting things that could be done with computation.

[Slide #1: View of 7094 Machine Room]

In this photo from the past, one sees how large the computers of the 1950's and 1960's were.

By the end of the 1950's a style of computer operation had evolved which was called batch-processing. First, the machine was no longer accessible to a user but rather was touched only by a professional machine operator. Second, a program had to be submitted to the computer in the form of IBM cards prepared on a keypunch machine. Third, each person's job, consisting of a deck of IBM cards, was collected with other jobs and, by use of yet another auxiliary machine, prerecorded on a reel of magnetic tape. Fourth, the reel of magnetic tape was mounted on a tape drive of the main computer, and the batch of jobs was processed serially, with the output from each job being recorded on another magnetic tape. Finally, the output tape was transferred to the auxiliary machine and the results were printed on a line printer. Only then, after waiting anywhere from several hours to a day, could the user discover that he had made some trivial mistake, like leaving out a comma, and had to go through the whole process again. Users of that era were indeed very frustrated, especially in universities, where budgets were low and computers were often overloaded.

[Remove Slide #1- lights on]

It was from this frustration that the notion of time-sharing the computer arose. The idea of time-sharing is a simple one, elegantly described by John McCarthy, then at MIT. Briefly, one has the computer serve many users simultaneously by giving each user a brief slice of time and quickly commutating among them. The effect of this commutation is that each user can proceed at his own pace without concern for wasting the precious main computer; in short, each user feels he has a private computer. It was a grand vision, yet we found that no computer manufacturer at the time felt it was necessary to change from the batch-processing style.
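
To make the scheme concrete, here is a minimal sketch of round-robin time slicing, written in modern Python purely as an illustration of the idea (nothing like this appeared in CTSS itself). Each job runs for one brief slice, is interrupted, and goes to the back of the queue, so every user makes steady progress.

    from collections import deque

    def job(name, steps):
        """A toy user program that needs `steps` brief slices of processor time."""
        for i in range(steps):
            print(f"{name}: running slice {i + 1} of {steps}")
            yield                      # stand-in for the timer interrupt: give up the machine
        print(f"{name}: finished")

    def time_share(jobs):
        """Commutate among the jobs, giving each one slice per turn."""
        ready = deque(jobs)
        while ready:
            current = ready.popleft()  # pick the next user
            try:
                next(current)          # run it for one brief time slice
                ready.append(current)  # not finished yet: back of the queue
            except StopIteration:
                pass                   # the job has completed; drop it

    time_share([job("alice", 3), job("bob", 2), job("carol", 4)])

Run with three jobs, the output interleaves the users' slices, which is exactly the effect that makes each user feel the machine is his alone.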

Indeed, there were many problems to be solved for time-sharing to work. A major one was the lack of input-output devices for users. Terminals with electric keyboards and typewriter mechanisms with paper output seemed like the best we could do. Fortunately we were able to get Teletype machines from AT&T and Selectric mechanisms from IBM to solve that problem. But even then we had to fight for both upper- and lower-case letters, since most equipment of that day was not designed for user dialogues. Moreover, the central computer did not have the input-output channels to interact with a large number of user terminals, so we had to acquire special-purpose channels for that task.

We had other major problems too, for the large computers of the 1950's were simply not designed for multiple users. First, any program could issue input-output instructions at any time, moving data into or out of any part of main memory. Second, any program could read or write any part of main memory. Third, any program could run as long as it wanted to. And fourth, any user program could expect to use all of main memory for itself.

Fortunately, with the cooperation of some sympathetic IBM researchers, we were able to incorporate into an IBM 7090 machine the key hardware modifications needed to solve these problems, and the solutions are still in use in most modern machines today. You will recognize them when I list them: a hardware timer, set by the supervisor, to interrupt user programs; the notion of supervisor and user modes; memory bounds registers to prevent programs from operating outside their own areas; and extra banks of memory, both for a supervisor program and for input-output buffering of user typing when user programs were not in memory.
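
As a rough sketch of how two of those mechanisms cooperate, the following Python fragment (a hypothetical model of my own, not drawn from CTSS or the 7090 hardware) simulates memory bounds registers and a supervisor-set timer: a user program may touch only the memory region assigned to it, and it loses the processor when its time slice runs out.

    class ProtectionFault(Exception):
        """Raised when a user program touches memory outside its bounds."""

    class TimerInterrupt(Exception):
        """Raised when a user program exhausts its time slice."""

    class ToyMachine:
        """A toy machine with memory bounds registers and a supervisor-set timer."""

        def __init__(self, memory_size=1024):
            self.memory = [0] * memory_size
            self.base = 0            # bounds registers: changeable only in supervisor mode
            self.length = memory_size
            self.ticks_left = 0      # timer: set only by the supervisor

        def enter_user_mode(self, base, length, quantum):
            """The supervisor hands the processor to a user program."""
            self.base, self.length = base, length
            self.ticks_left = quantum

        def store(self, address, value):
            """A user-mode memory write; every access is checked."""
            self._tick()
            if not (self.base <= address < self.base + self.length):
                raise ProtectionFault(f"address {address} is outside the user's region")
            self.memory[address] = value

        def _tick(self):
            self.ticks_left -= 1
            if self.ticks_left < 0:
                raise TimerInterrupt("time slice expired; control returns to the supervisor")

    # A user program confined to addresses 100..199, with a three-access time slice.
    machine = ToyMachine()
    machine.enter_user_mode(base=100, length=100, quantum=3)
    machine.store(150, 42)           # allowed: inside the assigned region
    try:
        machine.store(500, 7)        # forbidden: outside the bounds registers
    except ProtectionFault as fault:
        print("supervisor catches:", fault)

The real mechanisms were implemented in hardware, of course; the point of the sketch is only that a small amount of checking, placed out of reach of user programs, is what makes sharing a single machine safe.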

The result of all these efforts was that by November 1961 we were able to demonstrate time-sharing for the first time on a patched version of the IBM equipment. The initial setup was very crude and incomplete, but it did run on the same machine that was being used for the heavy workload of batch-processing, and it was this property that led us to choose the rather quaint name "Compatible Time-Sharing System," which everyone conveniently shortened to CTSS.

[Slide #2: A composite photo of many users simultaneously using CTSS.]

This slide was taken from an article that Robert Fano and I wrote for the September 1966 issue of the magazine Scientific American. It was an attempt to convey the notion that many users were simultaneously doing work.

In retrospect CTSS was significant for several reasons. First, it showed that time-sharing was feasible without requiring too much effort. Second, we were able to demonstrate that users and programmers, both at MIT and around the world, were excited by the new way of interacting with computers. Third, we found that computer manufacturers were remarkably uninterested, since it meant a major change in their products. And fourth, CTSS became a vehicle upon which a new project could be started at MIT, with government funding, to explore the full implications of man-machine interaction. That project was, as many of you probably know, Project MAC, which has since evolved into the Laboratory for Computer Science and the Artificial Intelligence Laboratory.

[Remove slide #2- lights on]

Having given you some of the background and history of time-sharing, let me next briefly describe Project MAC, which was led by Professor Robert Fano. In 1963 Project MAC held a famous summer study program where visitors from far and wide came to see what time-sharing on CTSS was like. By the fall of 1963 it had acquired its own copy of the CTSS hardware. It was a successful start, but two critical observations can be made. First, the hardware had just made the transition from vacuum tubes to transistors, and second, large-scale disk storage systems had just become economically available. In retrospect, time-sharing would have been premature had either of these key technologies not been ready, since the first, the transistor technology, allowed the critical system reliability we take for granted today, and the second, the then-novel large disk storage, permitted the system to store the files and programs of users at the central machine site. (This was an early example of just-in-time scheduling!)

One of the first goals of Project MAC was to develop a second-generation time-sharing system, one which would be organized from the start for time-sharing and not just an adaptation, as CTSS was. This led to the development, starting in the fall of 1964, of Multics (Multiplexed Information and Computing Service), a cooperative project among Project MAC, the General Electric Computer Division, and the Bell Telephone Laboratories. By the fall of 1965 the goals and scope of the project had been described in a well-known set of papers, and we were hard at work starting to implement the ambitious system.

[Slide #3: Multics system configuration diagram.]

This slide is one of the diagrams from the 1965 series of papers describing the plan for Multics. The goal was a set of symmetric multiprocessors capable of dynamic reconfiguration and non-stop operation, so as to provide a computer utility.

The project history has been amply documented, but for our purposes let me just say that we were perhaps too ambitious: we were using hardware with a radically new architecture, we were writing a completely new operating system in its entirety, we were using a then-new language, PL/1, we were a team of individuals who in most cases had not worked together before, and we were geographically distributed. Most management consultants would have said we were doomed to fail.

[Remove slide #3- lights on]

Instead we persevered, driven by our zeal to establish a new model of computing, and overcame many obstacles, such as a change of hardware manufacturers (General Electric sold its computer business to Honeywell) and financial crises with our sponsors. We were able to use CTSS as a development tool, and so were able to demonstrate the efficiency that time-sharing allowed, but still the project took much longer than we expected, and Multics became usable by others only in 1969.

With such a lame beginning, one would have expected Multics to fail as a commercial product. But to Honeywell's surprise, even though they thought the system was only a research toy, they found that many firms and organizations were pre-sold on the system and demanded it. And at its peak a little over a decade ago, there were about one hundred Multics sites around the world.

More important than mere site numbers, though, was the example that time-sharing set by introducing the engineering constraint that the interactive needs of the system's users were just as important as the efficiency of the equipment. It seems obvious now, but then it was not.

But I have left out a crucial part of the time-sharing story. One of the major goals of Multics was to discover and document the engineering techniques that could be used to solve some of the critical problems introduced by large-scale continuously operating systems. That was one reason we wrote papers about the system before it was built. It was also a philosophy that attracted important authors to spend time with the development team and to document the system. As a result two significant books were written about the mechanisms of Multics, one by Elliott Organick of the University of Utah and the other by Katsuo Ikeda of Kyoto University (who I am pleased to note is present with us today).

Another way that we hoped to propagate the knowledge gained in the development of Multics was through the graduate students doing research theses about the system. I am pleased to note that one of my own doctoral students, Akira Sekino, whose thesis was on the modeling of Multics performance, is now President and CEO of an NEC subsidiary in the United States, HNSX Supercomputers Inc. (I am also very gratified to mention his presence here today.)

But besides the books, articles and students, there were also the programmers who participated in the development. Over the years, our development team of programmers moved on to other firms and activities and in the process carried with them a critical understanding of how to organize multi-user, real-time systems. The most famous example was when Ken Thompson and Dennis Ritchie went on from their Multics experience to develop the UNIX operating system. I note that a 1989 C&C Award recognized their impressive achievement.

But there were many other companies (Prime, Apollo, and Stratus, to name only a few) that built on the experience of Multics. Fortunately, one of the original Multics developers, Tom Van Vleck, has dedicated himself to being the historian of Multics. He has created a wonderful set of pages on the World Wide Web which are a veritable encyclopedia of the entire effort. I encourage you to explore them at your leisure at:

[Slide #4: Web address of the Multics home page]

Note: A better and more permanent web address is at: http://www.multicians.org/

I do not think I have ever seen a more loving, complete and nostalgic description of the saga of an operating system, a system which still lives today. In those pages one can see the dedication and enthusiasm that was widespread among the development team and system maintainers. They had a vision of a new form of computing and they believed in it.

[Remove slide #4- lights on]

Of course, the computing field has evolved immensely since the dawn of the modern age a half century ago. As we look back, we can see that the development of time-sharing was a period of architectural and software discovery. But as hardware cost, performance, size, and power consumption have changed exponentially over almost the entire time span, a factor of two every two years being a typical rate, the engineering solutions have continued to proliferate. Thus we saw teletypes replaced by cathode ray tube displays, mainframes joined by supercomputers and minicomputers, and minicomputers joined by personal computers and now even laptop and palm-sized computers. The key observation is that there is remarkable diversity in the possible computing environments today and that, in fact, the choice depends on the type of problem one is trying to solve.
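
To give a feel for what that compounding rate implies, here is a small Python calculation (the round numbers of years are my own illustration, not figures from the talk) showing how a factor of two every two years accumulates:

    # Doubling every two years compounds quickly: after y years the
    # cumulative improvement factor is 2 ** (y / 2).
    for years in (10, 20, 30, 40):
        factor = 2 ** (years / 2)
        print(f"after {years} years: roughly a factor of {factor:,.0f}")

Forty years at that rate amounts to roughly a million-fold change, which helps explain the proliferation just described.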

Superimposed on this picture have been the immense strides in digital communication. First it was only via telephone lines and modems. Later came packet-switched networks, which form the basis of both local area networks and the Internet and the World Wide Web. I am pleased to note that the immensity of this astounding improvement in communications was recognized in a 1996 C&C Award to Paul Baran, Vinton Cerf and Tim Berners-Lee.

One can of course ask the question: "Of what relevance to future systems are past engineering solutions, such as those developed for time-sharing or in today's locally networked personal computers?" The answer lies in the fact that, despite the changes in many details and the many changes in system architecture, most of the underlying software principles and concepts remain the same.

More importantly, many of the vexing problems first recognized in the early days of time-sharing still remain with us and still demand solutions. Let me give a few examples:

[Slide #5: List of current vexing problems]

Security of information is a major headache. Destruction, corruption or leaking of data are only some of the possibilities that can silently occur over networked computers.

Reliability of systems is more critical than ever as systems become more intimately involved in the operation of ships, aircraft and organizations. The hardware is rarely the problem; rather, the pressure on software developers to release systems prematurely continues to increase, so that the correctness of programs and systems is often in doubt. The year 2000 will be an interesting event.

Authenticity continues to be a serious problem. When you receive an e-mail message, you normally assume that the sender is who the message says it is. But as we all should know, that is not necessarily true. Moreover, today's framework of communication allows not only misrepresentation but also untraceability.

The last three examples related to the integrity of systems and information. But there are organizational hazards as well. In particular, there are strong incentives for everyone in an organization or company to use identical computing systems. But such a course of planned coherence exposes one to two major hazards. The first is that there may be a fatal flaw or vulnerability in the identical systems which causes them all to fail nearly simultaneously. The second hazard of such a lack of diversity is that one is involuntarily at the mercy of a software or hardware vendor's monopoly position.

[Remove slide #5- lights on]

With those brief examples I hope I have presented you with a picture of a bright and challenging future for the field of computers and communications. The technology continues to evolve at a dizzying pace, and the diversity of solutions available to users becomes ever more vast. We are where we are today thanks to the visions of many early pioneers, and I am quite pleased to have been part of the early wave of explorers who laid the foundations of today's rich computing and communication environment. And I especially appreciate this opportunity to share some of these early recollections. Thank you.



corbato@lcs.mit.edu