Hisashi Kobayashi's Blog
Sherman Fairchild University Professor Emeritus of Electrical Engineering and Computer Science, Princeton University

Modeling and Analysis Issues in the Future Internet

Keynote Speech at the 24th International Teletraffic Congress (ITC 24)

September 4, 2012, Krakow, Poland

The 24th International Teletraffic Congress (ITC 24) was held in Krakow, Poland, on September 4-6, 2012, and I gave a keynote speech on the first day of the conference. http://www.itc24.net/keynote-speakers/

Shown below is the text of my speech. Some background information and advanced discussion, which were not presented at the meeting in the interest of time, are shown in an italic, smaller font.

Also given below are the slides used in the speech. Please download a PDF version of the slides here.

Text of the Keynote Speech

Professor Paul Kühn, Thank you for your gracious introduction.

It is a great honor to be invited to ITC 24 as a keynote speaker. I thank Dr. Thomas Bonald and Prof. Michal Pioro, TPC co-chairs, as well as the conference co-chairs, Prof. Andrzej Jajszczyk and Prof. Zdzisław Papir, for providing me with this opportunity.

I would like to cover three main topics in this talk. First, I want to review the current Internet, its pros and cons, with a focus on its End-to-End design practice. I will then outline some key points of the New Generation Networks (NwGN for short), which is Japan’s Future Internet project pursued by NICT (National Institute of Information and Communications Technology). Then I would like to present some ideas and suggestions that might be of interest to the ITC community and relevant to future Internet research. I recognize several people in the audience who listened to my keynotes presented at Euroview 2009 and 2012 at the University of Würzburg. Please allow me to repeat some of the slides that I used at those meetings [1, 2].

Slide 2: Outline of the presentation

Here is the outline of my talk.

  1. The Internet: Its Original Features
  2. End-to-End Design: Its Benefits
  3. Problems with the E2E Design
  4. What is NwGN, and Why
  5. Network virtualization
  6. AKARI Architecture and JGN-X
  7. Modeling and Analysis Issues

I won’t be able to cover technical details in this one-hour presentation, so I’ll post a full text as well as the slides on my blog, www.hisashikobayashi.com, where some background information, technical details and references will be included.

 

 

I. The Internet

Slide 3: The Original Features: As most of you are well aware, when the ARPANET, the predecessor of the Internet, got started over 40 years ago, its primary objectives were to allow researchers to share and exchange their programs and data files. Thus, the main applications they had in mind were file transfers and email. Real-time or time-sensitive applications such as VoIP and streaming video were not envisioned. End devices were host machines at fixed locations, so mobile devices such as laptop PCs, smart phones, etc. were not imagined. Only “best effort” services were provided, meaning that no QoS (quality of service) guarantees were made. Last but not least, an important assumption made then was that there would be no malicious users. In other words, all users were considered trustworthy.

 

The network environment we are in today is completely different from the one envisioned by the original designers of the Internet.

 

Slide 4: E2E Design of the Internet

It is therefore amazing to find that the 40-year-old ARPANET architecture remains an essential part of today’s Internet. Much of its success is attributed to the so-called “end-to-end (E2E) design” practice.

 

This diagram illustrates what a TCP/IP network based on the E2E design looks like. Simply put, this design approach suggests that the network interior should be a dumb network, whose task is just to deliver packets from one end to the other. All intelligence should reside in applications at edge nodes that run on top of the TCP transport layer.

 

Slide 5: E2E Design of the Internet- cont’d

The “End-to-End argument” advanced by Saltzer, Reed and Clark [3] contains some flaws, yet it seems to have served as a major design guideline throughout the evolution of the Internet built upon the TCP/IP protocol devised by Cerf and Kahn [4].

 

Their argument [3] goes, in the context of a network design, as follows: “Such communication functions as error control, routing and security should be implemented not within the network, but at the end nodes (hosts), since these functions can be completely specified only at the end nodes that run applications, and any partially implemented functions within the network will be redundant, waste network resources and degrade the system performance in most cases.”

 

Then they add a concessionary note that sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement. A number of papers have been written that criticize, defend, reinterpret or modify the original argument, and this design guideline has encountered many challenges and has been significantly compromised, for better or worse, for various reasons.

 

In my opinion the end-to-end design should merely be one of many design options and is not something that should be labeled as a “principle.” It is unfortunate that some Internet experts hold a dogmatic view of this design option.

 

Slide 6: Main Features of the Internet: So here are three major features of the Internet.

  • The network provides basic packet delivery service (called “datagram service”)
  • Applications are implemented at end hosts.
  • The transparency of the IP led to innovative deployment of the Internet and quick development of new applications

The first two features are not necessarily the strength of this network architecture, but the third feature has been critical to the success of the Internet.

Slide 7: Today’s Internet Landscape: So here is what today’s Internet looks like.

    • Every service is an end-to-end application.
    • New applications can be deployed by anyone, because he/she can have easy access to the transparent IP network.

Slide 8: Problems with the E2E Design:

However, the simplicity of the IP network has created serious performance and security problems of the Internet that confront us today.

TCP performs an E2E ARQ (automatic repeat request) for reliable transport of packets in a flow. But the E2E ARQ makes sense only if the channels are clean and the file size is not too large. Furthermore, E2E ARQ may increase the chance of undetectable or uncorrectable errors.

There are many situations where reliability and delay can be improved by applying localized (or hop-by-hop) ARQ, which can prevent unnecessary increase in traffic load. Furthermore, the localized ARQ does not require simultaneous availability of two end points. These benefits of localized error control will be even greater in multicasting, since the traffic load near the source will be substantially reduced. Note that content distribution networks (CDNs) can be viewed as an asynchronous multicasting scheme that avoids E2E delivery of information.
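As a supplementary note (not part of the speech), here is a minimal back-of-envelope sketch of this point, assuming independent per-link losses and that every end-to-end retransmission re-traverses the whole path; the numbers are illustrative only.

```python
def expected_link_transmissions(p_loss, hops):
    """Rough comparison of E2E ARQ vs hop-by-hop (localized) ARQ.

    p_loss : per-link packet loss probability (assumed independent)
    hops   : number of links on the path
    Returns (e2e, hop_by_hop): expected link transmissions per delivered packet.
    """
    p_ok = 1.0 - p_loss
    # E2E ARQ: an attempt succeeds only if every hop succeeds, and each
    # retransmission is assumed to consume roughly `hops` link transmissions.
    e2e = hops / (p_ok ** hops)
    # Hop-by-hop ARQ: each link retransmits locally until its own hop succeeds.
    hop_by_hop = hops / p_ok
    return e2e, hop_by_hop

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10):
        e2e, hbh = expected_link_transmissions(p, hops=10)
        print(f"loss={p:.2f}: E2E ~ {e2e:.1f}, hop-by-hop ~ {hbh:.1f} link transmissions")
```

The gap widens quickly with path length and loss rate, which is why localized error control pays off on lossy or long paths.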

Such routing protocols as OSPF (Open Shortest Path First) and RIP (Routing Information Protocol), which conform to the E2E design, just look at the IP addresses of packets, and the routing decision cannot reflect the traffic load within the network, because the IP network does not have such information. The absence of flow-state information at routers leads to connection-less service, called datagram service.

The TCP protocol, which runs on top of the IP network, provides a virtual circuit for a flow. Designing IP routing protocols such as RIP and OSPF with no flow-state information was probably a valid decision in the 1970s and 1980s, when the memory required to store state information was expensive and processing such information would have considerably slowed down routing operations.

Various versions of the TCP protocol (TCP Tahoe in 1988, TCP Reno in 1990, TCP Vegas in 1995, FAST TCP in 2002, CUBIC in 2005, etc.) attempt to provide some congestion control and flow rate control [6], but the performance they can achieve is intrinsically limited, because they cannot have current information on the states of individual flows.
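As a supplementary note (not part of the speech), here is a minimal sketch of the additive-increase, multiplicative-decrease (AIMD) rule that underlies Reno-style congestion avoidance; the constants and the loss pattern are illustrative, not those of any particular TCP variant.

```python
def aimd_trace(rounds, loss_rounds, cwnd=1.0, add=1.0, mult=0.5):
    """Evolution of a congestion window under additive-increase, multiplicative-decrease.

    rounds      : number of round-trip times to simulate
    loss_rounds : set of round indices at which a loss is detected
    cwnd        : initial window, in segments
    add         : additive increase per RTT (congestion avoidance)
    mult        : multiplicative decrease factor applied on loss
    """
    trace = []
    for t in range(rounds):
        if t in loss_rounds:
            cwnd = max(1.0, cwnd * mult)   # back off when a loss signals congestion
        else:
            cwnd += add                    # otherwise keep probing for more bandwidth
        trace.append(cwnd)
    return trace

# The sender infers congestion only indirectly, one RTT late, from losses or ACKs,
# so the window oscillates around a fair share it can never observe directly.
print(aimd_trace(rounds=20, loss_rounds={8, 15}))
```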

Slide 9: Problems with the E2E Design-cont’d

A related major problem with the TCP/IP protocol is that it cannot provide call admission control (CAC). It permits any user attached to the Internet to initiate flows, and allows all flows to share the network resources. In other words, TCP/IP attempts to mimic processor sharing (PS), which we address later in a broader context. It has been empirically shown [5], however, that the performance of TCP/IP is much inferior to that of PS scheduling, primarily because TCP/IP cannot have current information concerning the individual flows’ states.

Slide 10: Departure from the E2E Design

One consequence of the simplicity of the E2E Design approach is that it lacks sufficient mechanisms required to control the network.

A network architecture in general can be decomposed into three planes: data plane, control plane and administrative plane. The control plane is a mechanism that connection management devices use to control and access network components and services. In routing, the control plane is that portion of the routing protocol which is concerned with finding the network topology and updating the routing table. It allows the router to select the outgoing interface that is most appropriate for forwarding a packet to its destination.

The data plane (also known as the forwarding plane) is responsible for the actual process of sending a packet received on a logical interface to an outbound logical interface.

While the data plane has remained simple in the Internet, the control plane has become extremely complex over the years, because a number of control mechanisms have been appended to the IP layer as new requirements such as mobility (Mobile IP), security (IPsec) and middle-box (e.g., firewalls and network address translators, or NATs) control have arisen. Other functions such as IntServ (Integrated Services), Multicast IP, ICMP (Internet Control Message Protocol), ARP (Address Resolution Protocol), and AAA (Authentication, Authorization and Accounting) protocols also belong to the control plane at the IP layer.

The flow routing architecture by Roberts [7], a part of DARPA’s Control Plane project, attempts to guarantee QoS of an IP network by letting routers store state information on individual flows, such as (i) whether a given flow is active or not, (ii) the bandwidth allocated to the flow, (iii) the priority of the flow, (iv) the type of service requested by the flow and (v) the path assigned to the flow. In flow routing, the signaling is “in band,” i.e., carried as part of the data stream. A flow router processes the signaling information in hardware, and hence it can handle flow establishment at line speed.

Slide 11: Departure from the E2E Design-cont’d

CHART (Control for High-Throughput Adaptive Resilient Transport) [8, 9], also part of the DARPA Control Plane project, addressed both IP layer control and transport layer control. This control plane also allows routers to monitor and collect a richer set of network state information to control resource usage better than the simple-minded E2E design approach can possibly achieve. The CHART project has developed an explicit rate signaling protocol, which is used by its transport layer to determine the window sizes. This significantly improves the performance of a network with a large bandwidth-delay product and/or a high packet-loss rate.

Slide 12: OpenFlow Switch and Virtual Node

OpenFlow [10, 11] is a recent development and allows networking researchers to experiment with new networking protocols, both E2E designs and non-E2E designs.

Specifically, an OpenFlow switch maintains a Flow Table, which is managed by a Controller. The Controller creates new flow table entries, which are then stored in the Flow Table. A flow table entry specifies how an incoming packet is identified as belonging to a flow and how the packet should be processed. For example, a researcher could run a new routing protocol without disrupting normal production traffic by specifying a flow entry in each switch, which would then identify which packets are to be routed by the routing algorithm under study. Many commercial switches and routers can be converted into an OpenFlow switch, because most of them have Flow Tables used for the purpose of implementing a firewall.
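As a supplementary note (not part of the speech), here is a schematic sketch of the match-and-action idea behind a flow table; the field names and the structure below are my own illustration, not the OpenFlow specification.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class FlowEntry:
    """One flow-table entry: a match pattern plus an action.

    A field set to None acts as a wildcard, so an entry can match on any
    subset of the header fields (the field names are illustrative only).
    """
    match: Dict[str, Optional[str]]
    action: Callable[[dict], None]
    priority: int = 0

class FlowTable:
    def __init__(self):
        self.entries = []

    def add(self, entry: FlowEntry):
        # The controller installs entries; the highest priority wins on overlap.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)

    def process(self, packet: dict):
        for entry in self.entries:
            if all(v is None or packet.get(k) == v for k, v in entry.match.items()):
                entry.action(packet)
                return
        # No match: a real switch would typically hand the packet (or its header)
        # to the controller; here we just report the miss.
        print("miss -> send to controller", packet)

# Example: steer one research flow onto an experimental route, leave the rest alone.
table = FlowTable()
table.add(FlowEntry(match={"ip_dst": "10.0.0.7", "tcp_dst": "80"},
                    action=lambda p: print("experimental route", p), priority=10))
table.add(FlowEntry(match={"ip_dst": None},
                    action=lambda p: print("normal forwarding", p), priority=0))
table.process({"ip_dst": "10.0.0.7", "tcp_dst": "80"})
table.process({"ip_dst": "192.0.2.1", "tcp_dst": "443"})
```

The point of the example is that one high-priority entry can steer a single research flow onto an experimental path while a wildcard entry leaves all other traffic on its normal forwarding behavior.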

The Virtual Node (VNode) project [12, 13], pursued by Prof. Akihiro Nakao at the University of Tokyo and NICT, Japan, also provides a platform, similar to OpenFlow, that allows researchers to experiment with new network protocols, including non-E2E designs. VNode’s model of programmability is much more generic than OpenFlow’s limited capability, and it supports both control plane and data plane programmability. OpenFlow allows control plane programmability, but not data plane programmability.

I will describe the virtual node further in the next part of my talk.

The control scheme in the conventional Internet is primarily based on routing using the IP addresses, whereas OpenFlow intends to improve the quality of service and increase the efficiency of the network by doing routing control at the flow level, where a “flow” is defined as a communication that is determined by the combination of the MAC addresses, IP addresses and port numbers involved in the communication. NEC, which is a founding member of the OpenFlow Consortium, is developing a “programmable flow switch.”

 

 

II. New Generation Network (NwGN)

 

Slide 13: New Generation Network

The NwGN project is a flagship project, so to speak, of the networking research in Japan. Its purpose is to design a new architecture and protocols, and implement and verify them on a testbed.

The NwGN project aims at a revolutionary change so as to meet societal needs of the future [14-16]. AKARI is the architecture of such a network and JGN-X is the testbed.

Slides 14 & 15: Requirements of NwGN

There are numerous requirements that we need to take into account concerning network services of the future. Here is a list of what I consider as requirements for the NwGN:

  1. Scalability (users, things, “big data”)
  2. Heterogeneity and diversity (in “clouds”)
  3. Reliability and resilience (against natural disasters)
  4. Security (against cyber attacks)
  5. Mobility management
  6. Performance
  7. Energy and environment
  8. Societal needs
  9. Compatibility (with today’s Internet)
  10. Extensibility (for the unforeseen and unexpected)

 

Slide 16: AKARI Network Architecture. Here are four major features of the AKARI architecture. It takes a layered structure like all network architectures we know of, but instead of adhering to static and strict boundaries between the layers, it takes an adaptive approach, by adjusting layer boundaries, depending on the load placed on the network and resource usage. Such a design philosophy is referred to as “cross-layer optimization.” Such adaptive quality of service (QoS) management is pursued actively in the networking community at large. I will discuss the three other features of the AKARI architecture in the next several slides.

Slide 17: ID and Locator in the Internet

In the current Internet, devices on the network are identified in terms of their “IP addresses,” which are their identification numbers on the network layer. In the original internet, all end devices were host machines with their addresses being fixed. Thus, there was no problem in interpreting the IP addresses as “locators,” namely, the devices’ location information. In designing a future Internet, however, we must take into account that a majority of end devices are mobile, with devices with fixed locations being exceptions.

Slide 18: ID/Locator Split Architecture

An end device or an enterprise network may be connected to the Internet via multiple links, and such a technique is referred to as “multihoming.” Its primary purposes are to increase the reliability and resilience and to mitigate a possible overload on one link or circuit.

In order to efficiently deal with the mobile devices and/or multihoming requirements, we should distinguish IDs and locators, and assign two different sets of numbers to them. Then, even if a mobile or multihomed device’s locator changes in the network layer, its ID associated with communications in the upper layers will remain unchanged. The split architecture is also useful to solve the security issue.

In the split architecture, not only locators, but also IDs are present in packet headers. So using IDs to enforce security or packet filtering is possible, and remains applicable even when the locators are changed due to mobility/multihoming. In the current Internet, the IP address in each packet is used as a key to enforce security or packet filtering. IPsec is an example of this location-based security. See RFC 2401: http://www.ietf.org/rfc/rfc2401.txt .

The split architecture is also effective against denial-of-service (DoS) attacks and man-in-the-middle (MitM) attacks, by relating IDs to some security credentials such as public keys and certificates. When an unknown device wants to communicate with a server, the server may ask the device to prove that the ID is associated with a public key and that the association has been certified by a reliable third party, before the server sets aside any resource (e.g., memory) for the session. The server may also ask the device to solve a puzzle of middle-level complexity before setting up the session.

There are two approaches to the ID/Locator Split Architecture. One is a host-based approach, in which the ID/Locator split protocols are implemented in the end hosts only. Its objectives are to achieve secure communications over the unsecured Internet and also to support mobility. As an example, consider the Host Identity Protocol (HIP) described in RFC 5201 http://www.ietf.org/rfc/rfc5201.txt and P. Nikander, A. Gurtov and T. R. Henderson, “Host Identity Protocol (HIP): Connectivity, Mobility, Multihoming, Security, and Privacy over IPv4 and IPv6 Networks,” IEEE Communications Survey & Tutorials, Vol. 12, no. 2, pp. 186-204, Second Quarter, 2010.
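As a supplementary note (not part of the speech), here is a toy sketch of the basic idea of binding an identifier to a public key, in the spirit of HIP’s Host Identity Tag; the hashing and truncation below are illustrative only, not the RFC 5201 construction.

```python
import hashlib

def host_id_from_public_key(public_key_bytes: bytes) -> str:
    """Derive a flat, location-independent identifier from a public key.

    Because the ID is a digest of the key, a peer can challenge the host to
    prove possession of the matching private key before allocating resources,
    which is what makes ID-based filtering robust to locator changes.
    (Illustrative construction only, not the RFC 5201 ORCHID format.)
    """
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    return digest[:32]  # truncate to a 128-bit identifier

if __name__ == "__main__":
    print(host_id_from_public_key(b"-----BEGIN PUBLIC KEY----- ...example..."))
```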

 

The other approach is a router-based approach in which the ID/Locator split protocols are implemented in routers, not in end hosts. Its primary objective is to make the BGP (Border Gateway Protocol) routing table size smaller by using two different addressing spaces in edge and core networks. It is known as LISP (Locator/ID Separation Protocol). LISP is about to become an RFC. See

http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=05451761 .

We can get information about its implementation/standardization status as well as tutorial documents from this site: http://www.lisp4.net/ . Both the HIP and LISP ideas were generated by the IETF (Internet Engineering Task Force).

 

In the NwGN project we are implementing the ID/locator split in both hosts and edge routers so that we can get benefits of both Host Identity Protocol or HIP (for security, mobility) and Locator/ID Separation Protocol or LISP (for core routing scalability) [5, 6].

 

As is schematically shown in this slide, we insert an Identity Layer between the Transport Layer and the Network Layer. We are making application and transport layer protocols independent of the network layer protocols so that the same application can be transported over various network protocols. Our approach supports heterogeneous protocols in the edge networks (e.g., a host in an IPv4 network can communicate with another host located in an IPv6 network, and a host can move across heterogeneous networks).
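As a supplementary note (not part of the speech), here is a minimal sketch of what the identity layer’s mapping might look like, assuming a simple in-memory resolver; the names and structure are my own illustration, not the NwGN implementation.

```python
class IdentityLayer:
    """Toy ID-to-locator resolver sitting between the transport and network layers.

    Upper layers address peers by a stable host ID; the identity layer maps that
    ID to whatever locator (IPv4, IPv6, ...) is currently valid, so a handover
    only updates the mapping and never breaks the transport session.
    """
    def __init__(self):
        self._locators = {}          # host ID -> (address family, locator)

    def register(self, host_id, family, locator):
        self._locators[host_id] = (family, locator)

    def handover(self, host_id, family, locator):
        # Mobility or multihoming: the locator changes, the ID does not.
        self._locators[host_id] = (family, locator)

    def resolve(self, host_id):
        return self._locators[host_id]

layer = IdentityLayer()
layer.register("hid-3f2a", "ipv4", "203.0.113.5")
print(layer.resolve("hid-3f2a"))      # ('ipv4', '203.0.113.5')
layer.handover("hid-3f2a", "ipv6", "2001:db8::5")
print(layer.resolve("hid-3f2a"))      # upper layers keep using "hid-3f2a"
```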

 

The way in which the Internet is used is shifting from “communications from a device to another device” to “communications from data to humans.” When we wish to retrieve data or information, using a web browser and a web server, the data or information itself is an object of our interest, and it is immaterial from which device the data or information is fetched. A network architecture based on such a philosophy is called a “data centric” architecture.

In the ID/Locator Split Architecture, data and information can be treated as “things,” and we can assign IDs to them. Thus, the split architecture has an advantage of being applicable to a data-centric architecture as well.

Slide 19: Network Virtualization. I suppose that a majority of the audience is familiar with the notion of network virtualization, so I will skip a detailed definition of this term.

The notion of “virtualization” in computer technologies goes back to circa 1960, when virtual memory was introduced in the Atlas machine of the University of Manchester, UK. In 1972, IBM introduced VM/370, a virtual machine (VM) operating system that ran on System/370.

In the last decade, IT (information technology) departments of enterprises have begun to adopt a variety of virtualization technologies available as commercial products, ranging from server virtualization, storage virtualization, client (or desktop) virtualization to software virtualization, e.g., allowing Linux to run as a guest on top of a PC that is natively running a Microsoft Windows operating system. Such virtualization techniques allow multiple users and applications to dynamically share physical resources. Thus, they increase resource utilization and/or reduce electric energy consumption, as well as simplify complex administrative operations of IT.

Simply put, network virtualization chooses a subset of a collection of real (or physical) resources (routers, links, etc.) and functionalities (routing, switching, transport) of a real network (or multiple real networks) and combines them to form a logical network called a virtual network.

Slide 20: Virtual Networks and Overlaid Networks:

Virtual networks take different forms, depending on specific layers to which virtualization is applied. Here we illustrate what is termed “overlaid networks” (also known as “overlay networks”). Nodes in an overlaid network are connected by virtual links (or logical links) which are comprised of paths that are formed by combining multiple links in the network underneath. Distributed systems such as cloud computing, peer-to-peer (P2P) networks, and client-server applications (e.g., web browser and web server), can be viewed as overlaid networks running on the Internet. And the Internet itself is an overlaid network built on top of the telephone network.
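As a supplementary note (not part of the speech), here is a minimal sketch of the key mapping in an overlaid network: each virtual link is carried by a path of physical links in the substrate. The topology and names are illustrative only.

```python
# A virtual network is a graph whose "links" are paths in the physical network.
physical_links = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")}

# Each virtual link maps to an ordered list of physical links (a path).
virtual_links = {
    ("v1", "v2"): [("A", "B"), ("B", "C")],
    ("v2", "v3"): [("C", "D")],
}

def uses_physical_link(vlink, plink):
    """True if the given virtual link is carried over the given physical link."""
    return plink in virtual_links[vlink]

# Two different slices may map their virtual links onto the same physical link;
# the substrate then statistically shares that link's capacity between them.
print(uses_physical_link(("v1", "v2"), ("B", "C")))   # True
```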

Slide 21: Configuration of a Virtual Node (or VNode)

This slide shows the configuration of the aforementioned “virtual node” (or VNode) designed by Prof. Akihiro Nakao’s group (the University of Tokyo and NICT) and implemented on JGN-X. The virtual node consists of two parts: one is called the “Redirector,” which handles the conventional routing function, and the other is the “Programmer,” which runs a program that implements the virtual node functions. Here, each “slice” corresponds to a “virtual network.” Thus, by replacing conventional routers/switches with Virtual Nodes, we will have a platform that allows experimental work on network architectures and protocols, whether E2E-based designs or non-E2E designs.

Slide 22: VNode project and participating companies

The VNode project has industrial partners, who are contributing greatly to turning the theory into practice. NTT is working on the domain controller, and Fujitsu on an access gateway which controls access to other networks (e.g., a cloud). Hitachi is responsible for a router with a custom hardware board for constructing virtual links, and NEC is developing a programmable environment at a node for flexible creation of a network service. For details, see [12, 13].

 

Slide 23: Optical packet and optical path

In the future network environment, a majority of end devices will be mobile devices and sensors, which are connected by wireless access networks. But for a core network that requires broad bandwidth, an optical network will be very important.

When we talk about a network architecture, we often say that the architecture should be independent of technologies, while its implementation may depend on available technologies. But this simplistic argument will not hold for an optical network architecture, which is quite different from that of wired or wireless networks. The main reason is that, unlike the electric signals that wired and wireless networks deal with, optical signals do not yet have inexpensive random-access memory or the logic circuits needed to build an arithmetic logic unit (ALU).

Packet switching is based on asynchronous time division multiplexing (ATDM, or statistical time division multiplexing), and with today’s optical technology it is not possible to switch or route multiplexed optical signals as they are. While the “payload” portion of the signal may remain in the optical domain, the packet header must be converted into an electric signal. We often use optical delay circuits or lines as buffers to try to maintain the high speed of optical signals. In order to make the best use of the speed of optical signals, wavelength division multiplexing (WDM) must be adopted. But WDM provides circuit switching, like frequency division multiplexing (FDM) and synchronous time division multiplexing. An end-to-end circuit that involves wavelength routers at intermediate nodes is referred to as an optical path.

In the NwGN architecture, we take advantage of our strength in optical technology and propose an architecture that integrates an optical packet switching system and an optical path circuit switching system.

Slide 24: Integrated optical packet and optical path system

As shown in this slide, telemedicine, which requires real-time transmission of high-definition video, is an ideal application example of an optical path system. DCN (Dynamic Circuit Network), which is also supported by the JGN-X testbed, is another network that integrates the Internet with packet switching and optical circuit switching.

Slide 25: JGN-X Network Overview

NICT’s testbed effort for NwGN is called JGN-X, which is an evolutionary outgrowth of JGN (Japan Gigabit Network), started in the year 2000 as a testbed for large-capacity networking. As its speed and capacity increased, the name changed to JGN2 (which supported a multicast environment and IPv6), then JGN2plus, and finally the JGN-X project started in fiscal year 2011, where X stands for “eXtreme.”

The JGN-X testbed of NICT implements network control by OpenFlow and DCN (dynamic circuit network), as well as the network controlled by the virtual nodes (which is also called the “VNode plane”). Here the term “plane” is used as an abbreviation of a “control plane architecture.”

In other words, the JGN-X allows us to pursue an architectural study of the above three types of virtual networks.

DCN integrates the packet-switching-based Internet and an all-optical network that performs on-demand circuit switching using the aforementioned wavelength division multiplexing (WDM). It is used in such applications as remote medical systems (i.e., telemedicine), the Large Hadron Collider (LHC) project at CERN in Switzerland, and other advanced science fields.

Slide 26: JGN-X International Circuits

As this slide indicates, JGN-X is connected not only with various groups within Japan but also with the networking communities of the world.

Slide 27: Research around JGN-X

The JGN-X group also collaborates with the communities of advanced networking and cloud computing. It also provides an emulation environment for HPC (high performance computing). The objective of JGN-X is to provide an environment not only for research and development of the NwGN technologies, but also for the development of network applications for the future.

 

III. Modeling and Analysis Issues in Future Internet Research

Slide 28: Now I change gears and present my personal observations and suggestions to this audience, concerning opportunities and challenges of the future Internet research.

Although I talked exclusively about the NwGN project of NICT, there are a number of significant, perhaps more significant, research efforts taking place in the U.S., Europe and elsewhere, but in the interest of time, I will have to skip them in my presentation. But I do provide a brief summary and a reference in the text I will post in my blog.

The NSF’s FIA (Future Internet Architecture) program supports MobilityFirst (Rutgers and 7 other universities), Named Data Networking (NDN; UCLA and 10 other universities), eXpressive Internet Architecture (XIA; CMU and 2 other universities), and NEBULA (U. of Penn and 11 universities). Each FIA program has its own comprehensive website where you can find more information than you could possibly digest. A recent survey paper in the July 2011 issue of IEEE Communications Magazine provides a good introduction to the FIA, GENI and EU programs. The article also allocates about half a page to AKARI and JGN-X. See J. Pan, S. Paul and R. Jain, “A Survey of the Research on Future Internet Architectures,” IEEE Communications Magazine, July 2011, pp. 26-35.

NSF also funds a testbed program called GENI (Global Environment for Network Innovations) (2005-present), which is managed by Mr. Chip Elliott of BBN Technologies and holds quarterly meetings/workshops called GECs (GENI Engineering Conferences). I have attended several GEC meetings in the past three years, and I have been impressed by how fast each of the four testbed groups (called “GENI Control Frameworks” or simply “clusters”) has been making progress. The following four clusters (lead institutions) are currently supported: PlanetLab (Princeton University), ProtoGENI (Univ. of Utah), ORCA (Duke University and RENCI, the Renaissance Computing Institute) and ORBIT (Rutgers University).

In Europe, a collaboration under FP7 (the Seventh Framework Programme) on Future Internet research is referred to, somewhat confusingly, as the Future Internet Assembly (FIA). The EIFFEL (European Internet Future for European Leadership) program and the Future Internet Public-Private Partnership (FI-PPP) were launched in 2006 and 2011, respectively. As we assemble here, Germany has been sponsoring G-Lab (German Laboratory) through the BMBF (Bundesministerium für Bildung und Forschung; Federal Ministry of Education and Research), in addition to its participation in the aforementioned EU efforts.

To come up with a quantitative comparison of one network architecture against another is a rather difficult proposition. Will the complexity of any of the candidate future networks be too great for us to comprehend? Our inability to quantitatively characterize the present Internet seems to come not only from the limited state of the art in mathematical modeling techniques, but also from the character, culture and history of the Internet community, where many researchers do not seem interested in the modeling and analysis aspect.

Slide 29: Modeling and Analysis Issues-Cont’d

The original TCP/IP network provides merely “best effort” services, and its performance guarantees were not an issue of much concern; this historical aspect seems to dictate the culture and mentality of the Internet community even today. There have been very few textbooks and papers that present modeling and analysis of the Internet. Most books and papers are primarily concerned with describing what the network or its subsystems do, and not so much with discussing how well or poorly the network performs compared with analytical results or “theoretical” bounds. Quantitative results are usually limited to simple plots of measurement data or simulation results. There are, of course, a few exceptions. The book by Profs. Kumar, Manjunath and Kuri [21] provides a fair amount of mathematical models of the Internet and its protocols, and a forthcoming book by Prof. Mung Chiang of Princeton [6] will be an excellent textbook relating quantitative techniques to practical issues. Its selected annotated bibliography will also be useful.

Slide 30: Testbed and Overdimensioning:

Although research on the future Internet still seems dominated by this traditional Internet culture, my own conviction is that prototyping and testbeds alone will never lead us to a satisfactory understanding of system performance, reliability and security. Up to now, our limited capability to analyze and improve network performance has been compensated for by over-dimensioning, which has been possible because the technological improvements and cost reductions in network components such as processors, memory and communication bandwidth have been able to match the phenomenal growth in Internet users and the insatiable appetite for resources of new applications. But there is no guarantee that the cost/performance of network components will continue to improve in a geometric fashion as it has in the past. We should also note that the energy consumption of IT systems is now a serious concern, as listed in an earlier slide.

Slide 31: Virtual Network as a network of processor sharing servers

Network virtualization is certainly a very powerful tool that allows us to test multiple candidates for new network architectures and protocols in parallel. This technology should ultimately help us migrate from the existing Internet to a new one (or ones). But as it stands now, very little attention and effort seem to be paid to the performance aspect of each “slice” network, as well as to the performance limits and constraints of virtual networks. After all, network virtualization is nothing more than a form of (statistical) sharing of physical resources. A virtual network can be viewed as a network of processor sharing (PS) servers.

Slide 32: Processor sharing (PS), a mathematical concept introduced by Prof. Len Kleinrock more than 40 years ago [22] (see also [23, 24]), has been proven to be a very powerful mathematical abstraction of “virtual processors” in a time-shared system. Similarly, it can represent “virtual links or circuits,” i.e., multiplexed streams of packets or data over a communication channel. A link congested by TCP flows can also be modeled as a PS server.

Slide 33: Processor sharing (PS) – cont’d:

The so-called “fair scheduling” can be viewed as a discipline that emulates processor sharing. A processor sharing model often leads to a very simple performance analysis, because of its insensitivity to the statistical properties of the traffic load. A network of processor-sharing nodes lends itself to a closed-form expression for the steady-state distribution.

N. Dukkipati et al. [5], mentioned earlier, compare the performance of TCP/IP algorithms against the theoretical limit implied by a processor-sharing model. More of this kind of analysis should be practiced by networking researchers, and I believe the group here is the right audience whom I should encourage to work on it.
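As a supplementary note (not part of the speech), the PS benchmark used in such comparisons is very simple: on a link of capacity C with utilization ρ < 1, the mean completion time of a flow of size x under egalitarian processor sharing is x / (C (1 − ρ)), regardless of the flow-size distribution. A minimal sketch of this benchmark (my own illustration, not the authors’ code):

```python
def ps_mean_completion_time(flow_size_bits, capacity_bps, load):
    """Mean completion time of a flow of given size under M/G/1 processor sharing.

    flow_size_bits : flow size x (bits)
    capacity_bps   : link capacity C (bits per second)
    load           : utilization rho = arrival_rate * mean_size / capacity, must be < 1

    Insensitivity: only the mean of the flow-size distribution (through rho)
    matters, not its shape.
    """
    if not 0 <= load < 1:
        raise ValueError("processor sharing requires rho < 1")
    return flow_size_bits / (capacity_bps * (1.0 - load))

# Example: a 1 MB flow on a 100 Mb/s link at 60% load finishes in 0.2 s on average.
print(ps_mean_completion_time(8e6, 100e6, 0.6))
```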

Slide 34: Loss Network Model

The loss network theory pioneered by Prof. Frank Kelly [25] (see also [23, 24]) is a rather recent development, and it is a very general tool that can characterize a network with resource constraints which supports multiple end-to-end circuits with different resource requirements. It can be interpreted as a generalization of the classical Erlang or Engset loss models, and its insensitivity to network traffic or load, similar to the corresponding property of processor sharing, makes this characterization very powerful.

Slide 35: Performance Analysis

The performance measures such as “blocking probability” and “call loss rate” can be represented in terms of the normalization constant of a loss network model, just like the performance measures such as server utilization, throughput and average queueing delay in a queueing network model are represented in terms of its normalization constant.
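As a supplementary note (not part of the speech), the simplest instance of this statement is a single link with m circuits and Poisson offered load a erlangs: the blocking probability is exactly such a ratio involving the normalization constant, and the classical Erlang B recursion computes it stably. A minimal sketch:

```python
def erlang_b(offered_load, servers):
    """Erlang B blocking probability for a single link.

    offered_load : a = lambda / mu, in erlangs
    servers      : m, the number of circuits on the link

    Equivalent to (a^m / m!) / sum_{k=0}^{m} a^k / k!, i.e., a ratio involving
    the normalization constant, but computed by the stable recursion
    B(0) = 1,  B(k) = a B(k-1) / (k + a B(k-1)).
    """
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

print(erlang_b(offered_load=8.0, servers=10))   # roughly 0.12
```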

The computational complexity of an exact evaluation of the normalization constant may grow exponentially as the network size (in terms of the number of nodes and links, the bandwidths, buffer sizes, and router/switch speeds) and/or the number of users (i.e., end-to-end connections to be supported by the network) increase. Fortunately, however, in such a regime as that of a future Internet of large size with a large number of users, an asymptotic analysis (see, e.g., [26, 23]) becomes more accurate and often lends itself to a closed-form expression.

Slide 36. Open Loss Network (OLN)

Here we show what we call an open loss network (OLN), where the path (or routing chain) of a call is open.

The number of links L in this example network is five, i.e., L = 5.

We define a call class as r=(c, τ), where c is a routing chain or path, and τ is a call type.

Slide 37. An OLN is equivalent to a Generalized Erlang Loss Station!

I will show you a very important observation. This observation should be rather simple, but is probably new to you, unless you have read my papers or books with Prof. Brian Mark (see [23, 24] and references therein). For any given OLN, we can represent it by a single loss station as shown in this slide, where L, which was the number of links in the OLN, is now the number of server types.

A call of class r holds A_{l,r} lines simultaneously at link l, i.e., A_{l,r} servers of type l.

m_l = the number of lines available at link l, i.e., the number of servers of type l.

If n_r denotes the number of class-r calls in progress, the capacity constraint at each link l is Σ_r A_{l,r} n_r ≤ m_l.

We have a simple product-form expression for the joint distribution of the numbers of different classes of calls processed in the network in terms of the normalization constant G. The blocking and call loss probabilities can also be expressed in terms of G.
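As a supplementary note (not part of the speech), in this notation the product form takes the familiar truncated-Poisson shape of loss networks (a sketch consistent with [23, 25], with ρ_r denoting the offered load of class r):

```latex
\pi(\mathbf{n}) = \frac{1}{G}\prod_{r}\frac{\rho_r^{\,n_r}}{n_r!},
\qquad
\mathbf{n}\in\mathcal{S}=\Bigl\{\mathbf{n}\ge\mathbf{0}:\ \textstyle\sum_{r} A_{l,r}\,n_r\le m_l \ \text{for all } l\Bigr\},
\qquad
G=\sum_{\mathbf{n}\in\mathcal{S}}\;\prod_{r}\frac{\rho_r^{\,n_r}}{n_r!}.
```

The probability that a class-r call is blocked is one minus the ratio of two such sums, the numerator being taken over the states that can still admit one more class-r call; this is the normalization-constant representation mentioned above.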

Slide 38. Mixed Loss Network (MLN)

Similarly, a mixed loss network (MLN), which contains both open and closed networks as depicted here, can be also mathematically represented by a single loss station that generalizes the Erlang and Engset loss models. The network state distribution and the performance measures are again representable in terms of the normalization constants, and computational algorithms have been developed.

For large network parameters, recursive computation of the normalization constants may be prohibitive. However, the generating function of the normalization constant sequence can be obtained in a closed form and its inversion integral can be numerically evaluated. For very large network parameters, which will surely be the case for the future network, asymptotic approximation of the inversion integral will be applicable with high accuracy [26, 23].

Slide 39. Queueing and Loss Network

Finally, a diagram shown here is the concept of a queueing-loss network (QLN), which contains both queuing subnetwork(s) and loss subnetwork(s). The network state distribution and network performance measures can be again expressed in terms of the normalization constants.

The integrated optical-packet switching and optical paths system we discussed in Slide 24 can be formulated as a QLN. Please refer to [23] for detailed discussion.

Slide 40: Acknowledgments

I thank Prof. Brian L. Mark (George Mason University), Dr. Hiroaki Harai, Dr. Ved Kafle and Dr. Eiji Kawai (all of NICT, Japan), and Prof. Akihiro Nakao (University of Tokyo and NICT) for their help in preparing this speech and the slides. I also thank Prof. Mung Chiang (Princeton University) for sharing the manuscript of his forthcoming textbook [6].

 

References

[1] H. Kobayashi, “An End to the End-to-End Arguments,” Euroview 2009, Würzburg, Germany, July 28, 2009. https://hp.hisashikobayashi.com/?p=122

[2] H. Kobayashi, “The New Generation Network (NwGN) Project: Its Promises and Challenges,” Euroview 2012, Würzburg, Germany, July 23, 2012. https://hp.hisashikobayashi.com/?p=228

[3] J. H. Saltzer, D. P. Reed and D. D. Clark, “End-to-End Arguments in System Design,” ACM Trans. Comp. Sys., 2 (4), pp. 277-288, Nov. 1984.

[4] V. G. Cerf and R. E. Kahn, “A Protocol for Packet Network Intercommunications,” IEEE Trans. on Comms. 22(5), pp. 637-648, May 1974.

[5] N. Dukkipati, M. Kobayashi, R. Zhang-Shen and N. McKeown, “Processor Sharing Flows in the Internet,” in H. de Meer and N. Bhatti (Eds.) IWQoS 2005, pp. 267-281, 2005. http://pdf.aminer.org/000/465/981/processor_sharing_flows_in_the_internet.pdf

[6] M. Chiang, Networked Life: 20 Questions and Answers, Cambridge University Press, 2012 (to appear). ISBN 978-1-107-02494-6. http://www.cambridge.org/aus/catalogue/catalogue.asp?isbn=9781107024946

[7] L. G. Roberts, “The Next Generation of IP-Flow Routing,” SSGRR 2003 International Conference, L’Aquila, Italy, July 29, 2003, http://www.packet.cc/files/FlowPaper/NextGenerationofIP-FlowRouting.htm

[8] A. Bavier et al., “Increasing TCP Throughput with an Enhanced Internet Control Plane,” Proceedings of MILCOM, October 2006.

[9] J. Brassil et al., “The Chart System: A High-Performance, Fair Transport Architecture Based on Explicit Rate Signaling,” Operating Systems Review, Vol. 43, No.1, pp. 26-35, January 2009. http://napl.gmu.edu/pubs/JPapers/Brassil-SIGOPS09.pdf

[10] OpenFlow website; http://www.openflow.org/wp/learnmore/

[11] OpenFlow White Paper: N. McKeown et al., “OpenFlow: Enabling Innovation in Campus Networks,” ACM SIGCOMM Computer Communication Review, Vol. 38, No. 2, April 2008, pp. 69-74. Also available at http://www.openflow.org/documents/openflow-wp-latest.pdf

[12] A. Nakao, “Virtual Node Project: Virtualization Technology for Building New-Generation Networks,” NICT News, June 2010, No. 393, June 2010, pp. 1-6. http://www.nict.go.jp/en/data/pdf/NICT_NEWS_1006_E.pdf

[13] A. Nakao, A. Takahara, N. Takahashi, A. Motoki, Y. Kanada and K. Matoba, “VNode: A Deeply Programmable Network Testbed Through Network Virtualization,” submitted for publication, July 2012.

[14] NICT, New Generation Network Architecture AKARI: Its Concept and Design (ver2.0), NICT, Koganei, Tokyo, Japan, September, 2009. http://akari-project.nict.go.jp/eng/concept-design/AKARI_fulltext_e_preliminary_ver2.pdf

 

[15] T. Aoyama, “A New Generation Network: Beyond the Internet and NGN,” IEEE Commun. Mag., Vol. 47, No. 5, pp. 82-87, May 2008.

 

[16] N. Nishinaga, “NICT New-Generation Network Vision and Five Network Targets,” IEICE Trans. Commun., Vol. E93-B, No. 3, pp. 446-449, March 2010.

[17] J. P. Torregoza, P. Thai, W. Hwang, Y. Han, F. Teraoka, M. Andre, and H. Harai, “COLA: COmmon Layer Architecture for Adaptive Power Control and Access Technology Assignment in New Generation Networks,” IEICE Transactions on Communications, Vol. E94-B, No. 6, pp. 1526–1535, June 2011.

 

[18] V. P. Kafle, H. Otsuki, and M. Inoue, “An ID/Locator Split Architecture for Future Networks,” IEEE Communications Magazine, Vol. 48, No. 2, pp. 138–144, February 2010.

 

[19] ITU-T SG13, “Future Networks Including Mobile and NGN,” http://itu.int/ITU-T/go/sg13

 

[20] H. Furukawa, H. Harai, T. Miyazawa, S. Shinada, W. Kawasaki, and N. Wada, “Development of Optical Packet and Circuit Integrated Ring Network Testbed,” Optics Express, Vol. 19, No. 26, pp. B242–B250, December 2011.

 

[21] A. Kumar, D. Manjunath and J. Kuri, Communication Networking: An Analytical Approach, Elsevier 2004.

[22] L. Kleinrock and R. R. Muntz, “Processor-sharing queueing models of mixed scheduling disciplines for time-sharing queueing systems,” J. ACM, Vol. 19, No. 3 (1972), pp. 464-482.

[23] H. Kobayashi and B. L. Mark, System Modeling and Analysis: Foundations of System Performance Evaluation, Pearson Prentice Hall, 2009.

[24] H. Kobayashi, B. L. Mark and W. L. Turin, Probability, Random Processes and Statistical Analysis, Cambridge University Press, 2012.

[25] F. P. Kelly, “Loss Networks (invited paper),” Ann. Appl. Probab., Vol. 1, No. 3, pp. 319-378, 1991.

 

[26] Y. Kogan, “Asymptotic expansions for large closed and loss queueing networks,” Math. Prob. Eng. Vol. 8, No. 4-5, pp. 323-348, 2003.

 

 

 
