PART I
FUNDAMENTALS
Downloaded from Digital Engineering Library @ McGraw-Hill (www.digitalengineeringlibrary.com)
Copyright © 2004 The McGraw-Hill Companies. All rights reserved.
Any use is subject to the Terms of Use as given at the website.
Source: NETWORK PROCESSORS
CHAPTER 1
THE EVOLUTION OF NETWORK
TECHNOLOGY: DISTRIBUTED
COMPUTING AND THE
CONVERGENCE OF NETWORKS
In this introductory chapter, we will review the unprecedented changes that have occurred in com-
puting and telecommunications-related technologies over the last 30 years. We will also examine the
chain of events that caused this extraordinary cascade of technical breakthroughs on multiple fronts.
These breakthroughs ultimately helped generate the new high-speed broadband network requirements
for which network processors will be indispensable.
The various subjects discussed in this book are documented extensively in the corresponding
notes and references provided in this chapter. This chapter is more of a historical overview, intended
to provide the context and background against which readers (especially recent college gradu-
ates) can properly understand the macroscopic picture of how and why we arrived where
we are. This background will enable readers to better view these complementary technologies in rela-
tion to each other and to appreciate and understand the main network-processing technologies dis-
cussed in this book.
IN THE BEGINNING
An explosion of information technology (IT) occurred predominantly in the last quarter of the twen-
tieth century. Computers, which were exotic devices to previous generations, have by now become
indispensable tools for our everyday work and leisure. Today all branches of industry, processes of
workflow, channels and methods of education, manufacturing techniques, financial management
tools, audio and video entertainment systems, transportation systems, electronics and engine control
systems, and even humble video games have taken advantage of this unbelievable progress.
In the 1960s and early 1970s, when many of us were in college, working with a computer meant
standing in line to use card punchers to write programs in primitive languages. A student program-
mer would have to wait until the following day to receive the printout results, because the data-center
staff fed the accumulated programs into the university mainframe in daily batches. The spooler
was invented to manage the output for the many people using the machine at different times of the day:
it produced one single output point that conveyed the results to the users who were expecting to see
the fruit of their work. This all sounds unreal, yet it was still happening just 25 years ago.
Large mainframe computers were the solution for that era’s IT problems. IBM was the leading para-
digm for these computers, and companies that more or less emulated its business model, such as
Amdahl, Burroughs, and Control Data, dominated the stage alongside it. Only universities, major organizations,
and large (usually multinational) corporations could afford these machines. Some “enlightened”
industry executives have even gone down in history for affirming that there was no market
potential for more than two or three computers in the world!
Soon the card punchers disappeared and were replaced by alphanumeric terminals. People could
sit in front of a computer screen and type in their code using a typewriter-like keyboard. The progress
of compiling technology and operating systems facilitated interactive work sessions. Programmers no
longer had to wait one day to get results. Once the programs were executed, the programmer could
sit down and examine the results or reexamine the code and debug the program. Interactivity between
man and machine started increasing.
The site topology and IT architectures of these machines were mostly based on an inverted tree
structure. The mainframe, also affectionately known as the big iron, sat at the top of the hierarchy
(the root of the inverted tree). Below it lay a series of layers of controllers of varying per-
formance, each of which would cluster several nearby or remote downstream
devices. The result was an array of terminals that enabled interactive users to tap the
mainframe’s computing power on a time-shared basis.
IBM led the industry and the world by creating the first comprehensive and extremely powerful
intercomputer communications architecture, called the Systems Network Architecture (SNA).1 This
architecture was quite advanced for its time. SNA enabled mainframes to communicate with each
other at different sites. Little by little, tasks that were previously tedious or impossible could be done
in a complex but well-tested, documented, and straightforward way. Users could easily perform file
transfers and log into other computers remotely. It would still take a few more years until SNA was
developed enough to enable programs running on different systems to almost seamlessly communi-
cate with each other, synchronize themselves, and exchange data in real time. This became possible
in the late 1980s.
In the midst of all this change in the late twentieth century, semiconductor technology underwent
a revolution. Because more powerful capabilities could be integrated into a silicon microchip, users
could envision the ever-increasing possibilities in terms of the complexity, the integration of func-
tions, the speed, and the accuracy. The commensurate progress in software engineering,
driven forward by the ever-increasing requirements of new and more
sophisticated IT applications, continually pushed against the available hardware capabilities. This formed
an endless loop: faster hardware was needed to run the more sophisticated software, and the more sophis-
ticated the software became, the more powerful the underlying hardware had to become. Central pro-
cessing units (CPUs) became faster and more complex by first packing hundreds of thousands and
then millions of transistors and even millions of logical gates on a chip (with typically four, six, or
even eight transistors per logical gate).
It was only a matter of time before the centralized IT fabric changed. Computing power was essen-
tially going to break up and would be physically distributed around corporate and organizational sites.
DEPARTMENTAL MACHINES ERODE THE
MAINFRAME’S FOLLOWING
The organizational and political reasons why a corporate department, such as manufacturing or R&D,
did not like to be connected to and controlled by a corporate IT center go beyond the subject of this
book; however, they remain a fact of life. The founders of companies such as Digital Equipment
Corporation (DEC), Hewlett-Packard, Prime, and Data General, which pioneered the so-called
midrange systems or departmental machines, understood this problem.
With the advent of sleek interactive operating systems such as Digital’s VAX/VMS and with the
university world open-heartedly accepting the UNIX effort from Bell Labs, a new generation of com-
1. Atul Kapoor, SNA: Architecture, Protocols, and Implementation, J.Ranade IBM Series (New York: McGraw-Hill, 1992).
puter systems was developed. These systems were much more affordable than mainframes and were
easy to run and manage with small teams of people. A plethora of these machines eventually appeared
on academic and industrial campuses. People who used them were almost as enthusiastic about these
machines as neophytes devoted to a cult.
THE FIRST LOCAL AREA NETWORK (LAN)
Around the early 1980s, local area networks (LANs) slowly moved out of the research community
into the industrial world. Digital, Intel, and Xerox created the Ethernet based on research that was
done at Xerox’s Palo Alto Research Center (PARC). Technology suddenly became extremely inter-
esting. For example, a user could be running a program on one VAX and interact with another system
on the network to develop software code while choosing his or her own printer that was going to be
shared among several users on the LAN. These users would quickly become disdainful of the older,
rigid mainframe technologies. In many cases, they would even look down on traditional data-cen-
ter IT staff and dismiss them as “nonenlightened.” Two parallel popular cultures were created. At the
risk of stereotyping, it seemed that one culture was dressed in a coat and tie, and the other was dressed
in jeans and a T-shirt.
IBM followed suit with the introduction of the token ring, which was based on research that was
mostly carried out at the IBM Research Lab in Rueschlikon, outside Zurich. However, the early
introduction of an open standard, coupled with the availability of off-the-shelf semi-
conductor chips that implemented the basic Media Access Control (MAC) and physical layer (PHY)
interface functions, helped Ethernet keep its market lead. Several other manufacturers tried to come
up with their own LAN approaches until the Institute of Electrical and Electronics Engineers (IEEE)
stepped in and started standardizing the landscape. IEEE 802.3 covers the original Ethernet approach
(carrier sense multiple access with collision detection [CSMA/CD])2 and IEEE 802.5 covers the
token ring. Vendors could now design adapters (printed circuit boards [PCBs]) that
could be plugged into systems (for example, a departmental VAX computer) to connect devices on
a LAN.
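The CSMA/CD access method standardized in IEEE 802.3 recovers from collisions with truncated binary exponential backoff: after the nth collision, a station waits a random number of slot times drawn uniformly from 0 to 2^min(n, 10) − 1, and discards the frame after 16 failed attempts. The sketch below illustrates only this backoff rule; the function name and the simplified treatment of the channel are illustrative, not part of the standard.

```python
import random

SLOT_TIME_BITS = 512   # one slot time = 512 bit times at 10 Mbit/s
MAX_BACKOFF_EXP = 10   # the backoff exponent is capped at 10
MAX_ATTEMPTS = 16      # the frame is discarded after 16 failed attempts

def backoff_slots(attempt, rng=random):
    """Slot times to wait after collision number `attempt` (1-based)."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(attempt, MAX_BACKOFF_EXP)
    # Uniform choice from 0 .. 2^k - 1 slot times.
    return rng.randrange(2 ** k)
```

After the first collision a station waits zero or one slot time; from the tenth collision onward it may wait up to 1,023 slot times, which is why a congested Ethernet segment degrades gradually rather than locking up.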
As IT managers realized that the proliferation of connected users was depleting the available
network segment addresses, a wider structure was created. Gateways between LAN segments and
bridges started appearing between token rings and/or Ethernets. By using a straightforward lookup-
table mechanism, they kept two or more address spaces separate and steered traffic to and from
the appropriate destinations and sources. Once users were connected inside a building, it was only a mat-
ter of time before they would also require the appropriate levels of connectivity with the external
world.
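The lookup-table mechanism these bridges used is essentially transparent bridging: the bridge remembers which segment each source address was last seen on, filters frames whose destination is on the same segment, and floods frames with unknown destinations. A minimal sketch, with class and port names that are purely illustrative:

```python
class LearningBridge:
    """A transparent learning bridge connecting two or more LAN segments."""

    def __init__(self, ports):
        self.ports = ports       # e.g. ["segment_A", "segment_B"]
        self.mac_table = {}      # MAC address -> port it was last seen on

    def receive(self, frame_src, frame_dst, in_port):
        """Process one frame; return the list of ports to forward it to."""
        # Learn: remember which segment the source address lives on.
        self.mac_table[frame_src] = in_port
        out_port = self.mac_table.get(frame_dst)
        if out_port == in_port:
            return []            # destination is local: filter the frame
        if out_port is not None:
            return [out_port]    # known destination: forward to one port
        # Unknown destination: flood to all other ports.
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(["segment_A", "segment_B"])
bridge.receive("mac_1", "mac_2", "segment_A")  # mac_2 unknown: flooded to B
bridge.receive("mac_2", "mac_1", "segment_B")  # mac_1 known: forwarded to A
```

Because the table is populated passively from observed source addresses, the two address spaces stay separate without any manual configuration, which is what made these devices so easy to drop into a growing site.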
In the late 1970s and early 1980s, visionaries of the engineering community realized that the
increasing complexity of design work in the mechanical as well as the electronic and civil engineer-
ing fields would require more sophisticated computer-based tools. Thus, the era of computer-aided
design/computer-aided manufacturing (CAD/CAM) was born.
Very complex pieces of software were developed in the electronics arena to enable users to design
sophisticated integrated circuits and multilayer PCBs. Similarly, in the mechanical area, advanced
tools appeared in the market that would enable users to create two-dimensional and three-dimensional
mechanical designs for car frames, ship hulls, airplane fuselages and wings, and even offshore drilling
platforms. These tools were extremely computation intensive, especially when they incorporated mathe-
matical techniques such as finite-element simulation modules. Special computing platforms were
needed.
In addition to being too expensive for the average research and development lab, traditional IBM
mainframes were not equipped with number-crunching capabilities. The IBM mainframe S/360 and
2. http://standards.ieee.org/getieee802/