The era of a federally provided general purpose backbone network for
the research and science community is coming to a close in April 1995.
Its roots stem from early ARPA research on packet switching and the
development of the TCP/IP protocol suite, which the NSF selected for
its NSFNET program in the mid-eighties, at a time of strong momentum
behind the GOSIP ISO protocols and support for X.25.
Evolving from the Arpanet core model, which centered around a single
infrastructure to interconnect campuses, the NSFNET focused on a broad
operational interconnection infrastructure that accommodated regional
clients and agency peer networks, each of which would in turn connect
their own respective clients.
The selection of TCP/IP for the NSFNET contributed to its strong
worldwide acceptance over the ten years since the mid-eighties, as the
creation of the NSFNET was the enabler for broad interconnectivity in
the Internet community. The NSFNET program itself initially grew out of
the NSF supercomputing center program, with two of the awardees, SDSC
and JvNC, having proposed a consortium network. NSF then orchestrated
the interconnection of its supercomputing centers via a 56kbps
"Fuzzball" based backbone (already synchronized to radio clocks), to
which regional (or mid-level) networks connected shortly thereafter,
using the 56kbps NSFNET backbone as the national interconnection
fabric. In July 1988, a 1.544Mbps T1 replacement of the NSFNET backbone
entered operation, and it was in turn replaced by a 45Mbps T3 backbone
in the early nineties to meet growing demand. By then the
commercialization and privatization of the Internet had started to take
off significantly, and the NSF came under increasing pressure to move
networking activities to the private sector rather than have the
federal government bulk-provide general networking services. This
pressure resulted in a rethinking of the NSFNET architecture, to ensure
Internet stability during the time window between government supported
services and full privatization of the network.
Topology history:
56kbps NSFNET backbone
T1/448kbps physical NSFNET backbone
T1/448kbps logical NSFNET backbone
T1 non-muxed NSFNET backbone
T3 NSFNET backbone service
The new NSFNET architecture
To address the aforementioned time window, the National Science Foundation
created four new projects, three of them infrastructure related and one
supporting network research and development activities. Those are:
infrastructure related projects:
interconnectivity support for the regional networks
In its initial implementation, network users typically selected specific
services to which they explicitly connected in a one-to-one fashion,
largely to transfer files, to interactively access remote machines,
and to exchange electronic mail with other users.
This has evolved over the last few years towards a broad "information
perimeter" as seen by individual users. The information source is no
longer perceived as specific machines, but rather as a horizon consisting
of the available information resources, with a one-to-many mapping between
a user and information resources.
This has contributed to the notion of an information infrastructure. In the
future, even that view will be too limiting, as a many-to-many weave of
connectivity is arising from a mixture of collaboration, information, and
generic facility resources.
Summary
The NSFNET has shaped the Internet from a federal network research
effort, via a federally provided infrastructure, towards a commercialized
environment. Some of the next challenges will lie in the focus on
applications, in how they are provisioned throughout the networked
environment, and in supporting collaboration, information, and facility
resources. Network analysis over the years has shown a dramatic impact of
new applications on the IP switching substrate, something that will have
to be considered in the overall traffic profiles, as new high end
applications demand significant amounts of bandwidth for extended periods
of time. Some of this is already visible in the increasing use of audio
and video applications on the Internet.
A number of areas need further exploration, including:
work towards an information provisioning architecture, including
information resource discovery
network/server load considerations
architected information cache infrastructure (see the sketch after this list)
architected information brokerage
scalable multi-user collaboration environments, including
collaboration resource discovery
hierarchical server structures
mobility of clients among servers
dynamic creation and support for collaboration groups
real-time visualization and sensory data, including
environment status information (e.g., air quality)
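As one illustration of the information cache item above, here is a minimal
sketch, in Python, of a hierarchical information cache: a campus-level cache
answers what it can from its local store and otherwise defers to a regional
parent cache, which in turn fetches from the origin. The class, tier names,
and URL scheme are hypothetical, introduced only for illustration and not
drawn from any specific NSFNET-era design.

    # Minimal sketch of a hierarchical ("architected") information cache.
    # All names here (InformationCache, the campus/regional tiers, the
    # info:// URL) are hypothetical and serve only to illustrate the idea.

    class InformationCache:
        def __init__(self, name, parent=None, origin_fetch=None):
            self.name = name                  # label for this cache tier
            self.parent = parent              # next tier up, or None at the top
            self.origin_fetch = origin_fetch  # callable used only at the top tier
            self.store = {}                   # locally cached objects, keyed by URL

        def get(self, url):
            # Serve from the local store if the object is already cached.
            if url in self.store:
                return self.store[url]
            # Otherwise ask the parent tier, or fetch from the origin at the top.
            if self.parent is not None:
                obj = self.parent.get(url)
            else:
                obj = self.origin_fetch(url)
            # Keep a copy so later requests are served closer to the user.
            self.store[url] = obj
            return obj

    def origin(url):
        # Stand-in for fetching from the actual origin server.
        return "document at " + url

    # Example: a campus cache backed by a regional cache, which in turn
    # fetches from the stand-in origin server.
    regional = InformationCache("regional", origin_fetch=origin)
    campus = InformationCache("campus", parent=regional)

    print(campus.get("info://example/report"))  # fetched via the regional tier
    print(campus.get("info://example/report"))  # now served from the campus cache

Serving repeated requests from caches closer to the user is one way to address
the network/server load considerations listed above, since popular information
no longer has to traverse the backbone for every access.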