Hisashi Kobayashi's Blog
Sherman Fairchild University Professor Emeritus of Electrical Engineering and Computer Science, Princeton University

Keynote speech at Euroview 2012

I delivered a keynote address, “The New Generation Network (NwGN) Project: Its Promises and Challenges,” at the Euroview 2012 Conference, whose general theme was “Visions of Future Generation Networks,” held at the University of Würzburg, Germany on July 23 & 24, 2012.  http://www.euroview2012.org/

Shown below are the full text (augmented with some background information) of the keynote and the slides. A similar lecture, with more technical details on the latter half of this talk, will be given at the 24th International Teletraffic Congress (ITC 24) in Krakow, Poland on September 4th-7th, 2012, and I will post that speech when it is done.

For the slides in PDF form, please click here.

 

The New Generation Network (NwGN) Project:

Its Promises and Challenges

Keynote Speech presented at Euroview 2012

July 23, 2012, University of Würzburg

Hisashi Kobayashi

The Sherman Fairchild University Professor Emeritus of

Electrical Engineering and Computer Science,

Princeton University, Princeton, New Jersey, USA

and

Executive Advisor

National Institute of Information and Communications Technology (NICT)

Koganei, Tokyo, Japan

 

Abstract: This presentation consists of two parts. The first part is an overview of the New Generation Network (NwGN) project, a future Internet research project at NICT (National Institute of Information and Communications Technology), Japan. Its architecture, named AKARI, has four main features: cross-layer optimization, ID/Locator split, virtual nodes, and integrated optical packet switching and optical paths. JGN-X is a testbed that provides an environment to implement AKARI and other future Internet architectures and to develop applications that run on these virtual networks.

The second part of this talk presents my personal observations on future Internet research in general, including the efforts made in the U.S., Europe, Japan and elsewhere. I question how several candidate architectures for the future Internet will converge to one good network architecture that is acceptable to all members of the research community and the various stakeholders. The anticipated difficulty will be exacerbated because the research community is not well equipped with quantitative characterizations of network performance. I propose some ideas and approaches that may remedy the current state of affairs.

About the Speaker: Hisashi Kobayashi is the Sherman Fairchild University Professor Emeritus of Princeton University, where he was previously Dean of the School of Engineering and Applied Science (1986-91). Currently he is Executive Advisor of NICT, Japan, for their New Generation Network. Prior to joining the Princeton faculty, he spent 15 years at the IBM Research Center, Yorktown Heights, NY (1967-82), and was the Founding Director of IBM Research-Tokyo (1982-86).

He is an IEEE Life Fellow, an IEICE Fellow, was elected to the Engineering Academy of Japan (1992), and received the 2005 Eduard Rhein Technology Award.

He is the author or coauthor of three books, “Modeling and Analysis: An Introduction to System Performance Evaluation Methodology” (Addison-Wesley, 1978), “System Modeling and Analysis: Foundations of System Performance Evaluation” (Pearson/Prentice Hall, 2009), and “Probability, Random Processes and Statistical Analysis” (Cambridge University Press, 2012). He was the founding editor-in-chief of “An International Journal: Performance Evaluation” (North Holland/Elsevier).

Text of the Speech

Good morning, President Forchel, conference participants and other guests. It is a great honor to be invited to Euroview as a keynote speaker. I thank Prof. Phuoc Tran-Gia, Dr. Tobias Hossfeld, Dr. Rastin Pries and the organizing committee for providing me with this opportunity. Tobias suggested that I give an English version of the keynote I gave at a NICT conference held in Tokyo last November. So I will speak about NwGN, Japan’s future Internet project, in the first half of this talk. Then I would like to present some views that might be considered somewhat provocative: I will raise some questions, offer some speculations, and make some suggestions concerning the challenges in future networking research. Since the allocated time is rather short to cover details, I will post the full text on my blog, www.hisashikobayashi.com, where some background information and technical details will be given in an italic and smaller font.

Slide 2: Outline of the presentation

Here is the outline of my talk:

  1. What is NwGN and Why?
  2. AKARI Architecture

– Cross-layer optimization

– ID/Locator split architecture

– Network virtualization

– Integration of optical packets and optical paths

  3. JGN-X Testbed
  4. Challenges in Future Network Research

 

I. What is NwGN and Why?

 

Slide 3: What is NwGN ?

The NwGN project is a flagship project, so to speak, of the networking research in Japan. The NwGN intends to make a revolutionary jump from the current Internet. Its purpose is to design a new architecture and protocols, and implement and verify them on a testbed called JGN-X.

Slide 4: Why NwGN?

Consider the explosively growing network traffic, mounting cyber attacks, and the mobile devices and sensors connected to the Internet. Then it should be rather obvious that the NGN (Next Generation Network), which is merely an extension of today’s IP-based Internet, will hit its performance limit sooner or later.

The NwGN project aims at a revolutionary change so as to meet societal needs of the future [1-3]. AKARI is the architecture of such a network and JGN-X is a testbed, on which we will implement and verify the new architecture and its protocols.

Slides 5 & 6: Requirements of NwGN

There are numerous requirements that we need to take into account concerning network services of the future. Here is a list of what I consider as requirements for the NwGN:

  1. Scalability (users, things, “big data”)
  2. Heterogeneity and diversity (in “clouds”)
  3. Reliability and resilience (against natural disasters)
  4. Security (against cyber attacks)
  5. Mobility management
  6. Performance
  7. Energy and environment
  8. Societal needs
  9. Compatibility (with today’s Internet)
  10. Extensibility (for the unforeseen and unexpected)

 

II. AKARI Network Architecture

Slide 7: The AKARI network architecture takes a layered structure like all network architectures we know of, but instead of adhering to static and strict boundaries between the layers, it takes an adaptive approach, adjusting layer boundaries depending on the load placed on the network and on resource usage. Such a design philosophy is referred to as “cross-layer optimization,” and is intended to improve quality of service under varying operational conditions. Such adaptive quality-of-service management is a subject pursued actively in the networking community at large.

Slide 8: ID and Locator in the Internet

In the current Internet, devices on the network are identified in terms of their “IP addresses,” which are their identification numbers at the network layer. In the original Internet, i.e., the ARPANET, all end devices were host machines whose locations were fixed. Thus, there was no problem in interpreting the IP addresses as “locators,” namely, the devices’ location information. In designing a future Internet, however, we must take into account that a majority of end devices will be mobile, with devices at fixed locations being the exception.

Slide 9: ID/Locator Split Architecture

An end device or an enterprise network may be connected to the Internet via multiple links, and such a technique is referred to as “multihoming.” Its primary purposes are to increase the reliability and resilience and to mitigate a possible overload on one link or circuit.

In order to deal efficiently with mobile devices and/or multihoming requirements, we should distinguish IDs from locators and assign two different sets of numbers to them. Then, even if a mobile or multihomed device’s locator changes in the network layer, its ID, associated with communications in the upper layers, will remain unchanged.

The set of mappings from IDs to locators is referred to as IDR (ID registry).
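
To make the idea concrete, here is a minimal sketch of such a registry in Python. The class and method names are hypothetical and chosen for illustration only; this is not NICT’s actual IDR design or protocol. Upper layers keep addressing a peer by its ID, while the registry tracks whichever locators are currently valid:

```python
# Minimal sketch of an ID-to-locator registry (illustrative only; the names
# and data structures are hypothetical, not NICT's actual IDR design).

class IDRegistry:
    """Maps a persistent host ID to its current set of locators."""

    def __init__(self):
        self._table = {}  # host_id -> set of locators (e.g., IP addresses)

    def register(self, host_id, locator):
        """Add a locator for a host (multihoming: several locators per ID)."""
        self._table.setdefault(host_id, set()).add(locator)

    def update(self, host_id, old_locator, new_locator):
        """Replace a locator when the host moves; its ID stays unchanged."""
        locators = self._table.get(host_id, set())
        locators.discard(old_locator)
        locators.add(new_locator)
        self._table[host_id] = locators

    def resolve(self, host_id):
        """Return the current locators for a host ID (empty set if unknown)."""
        return self._table.get(host_id, set())


# Usage: upper-layer sessions keep talking to "hostA" while its locator changes.
registry = IDRegistry()
registry.register("hostA", "192.0.2.10")                 # IPv4 locator
registry.register("hostA", "2001:db8::10")               # IPv6 locator (multihoming)
registry.update("hostA", "192.0.2.10", "198.51.100.7")   # the host has moved
print(registry.resolve("hostA"))
```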

The development of mapping algorithms and a scheme for determining where and how to store the ID Registry are both important issues in the split architecture. The split architecture is also useful for addressing security issues.

In the split architecture, not only locators, but also IDs are present in packet headers. So using IDs to enforce security or packet filtering is possible, and remains applicable even when the locators are changed due to mobility/multihoming. In the current Internet, the IP address in each packet is used as a key to enforce security or packet filtering. IPsec is an example of this location-based security. See RFC 2401: http://www.ietf.org/rfc/rfc2401.txt .

The split architecture is also effective against denial-of-service (DoS) attacks and man-in-the-middle (MitM) attacks, by relating IDs to some security credentials such as public keys and certificates. When an unknown device wants to communicate with a server, the server may ask the device to prove that the ID is associated with a public key and that the association has been certified by a reliable third party, before the server sets aside any resource (e.g., memory) for the session. The server may also ask the device to solve a puzzle of middle-level complexity before setting up the session.
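
As a rough illustration of the puzzle idea, here is a generic hash-based client puzzle sketch. This is an assumption for illustration, not necessarily the mechanism adopted in HIP or AKARI: the server hands out a random challenge and only commits memory to the session after the client has demonstrably spent some computation.

```python
# Sketch of a hash-based client puzzle (illustrative; not the exact HIP/AKARI mechanism).
import hashlib
import os

DIFFICULTY_BITS = 16  # "middle-level complexity": roughly 2**16 hash trials expected

def new_challenge():
    """Server side: issue a random challenge; no per-client state is kept yet."""
    return os.urandom(16)

def solve(challenge, difficulty_bits=DIFFICULTY_BITS):
    """Client side: find a nonce such that SHA-256(challenge || nonce)
    has 'difficulty_bits' leading zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce
        nonce += 1

def verify(challenge, nonce, difficulty_bits=DIFFICULTY_BITS):
    """Server side: a single hash suffices to check the client's work."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

challenge = new_challenge()
nonce = solve(challenge)          # costly for the client
assert verify(challenge, nonce)   # cheap for the server, which only now allocates state
```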

There are two approaches to the ID/Locator Split Architecture. One is a host-based approach, in which the ID/Locator split protocols are implemented in the end hosts only. Its objectives are to achieve secure communications over the unsecured Internet and also to support mobility. As an example, consider the Host Identity Protocol (HIP) described in RFC 5201 (http://www.ietf.org/rfc/rfc5201.txt) and in P. Nikander, A. Gurtov and T. R. Henderson, “Host Identity Protocol (HIP): Connectivity, Mobility, Multihoming, Security, and Privacy over IPv4 and IPv6 Networks,” IEEE Communications Surveys & Tutorials, Vol. 12, No. 2, pp. 186-204, Second Quarter 2010.

 

The other approach is a router-based approach in which the ID/Locator split protocols are implemented in routers, not in end hosts. Its primary objective is to make the BGP (Border Gateway Protocol) routing table size smaller by using two different addressing spaces in edge and core networks. It is known as LISP (Locator/ID Separation Protocol). LISP is about to become an RFC. See

http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=05451761 .

We can get information about its implementation/standardization status as well as tutorial documents from this site: http://www.lisp4.net/ . Both the HIP and LISP ideas were generated by the IETF (Internet Engineering Task Force).

 

In the NwGN project we are implementing the ID/Locator split in both hosts and edge routers, so that we can get the benefits of both the Host Identity Protocol (HIP, for security and mobility) and the Locator/ID Separation Protocol (LISP, for core routing scalability) [5, 6]. Additionally, our approach supports heterogeneous protocols in the edge networks (e.g., a host in an IPv4 network can communicate with another host located in an IPv6 network, and a host can move across heterogeneous networks). We are making the application and transport layer protocols independent of the network layer protocols, so that the same application can be transported over various network protocols.

 

The way in which the Internet is used is shifting from “communications from a device to another device” to “communications from data to humans.” When we wish to retrieve data or information, using a web browser and a web server, the data or information itself is an object of our interest, and it is immaterial from which device the data or information is fetched. A network architecture based on such a philosophy is called a “data centric” architecture.

In the ID/Locator Split Architecture, data and information can be treated as “things,” and we can assign IDs to them. Thus, the split architecture has an advantage of being applicable to a data-centric architecture as well.

Network Virtualization

I suppose that a majority of the audience is familiar with the notion of network virtualization, so I will skip a detailed definition of this term.

The notion of “virtualization” in computer technologies goes back to circa 1960, when virtual memory was introduced in the Atlas machine of the University of Manchester, UK. In 1972, IBM introduced VM/370, a virtual machine (VM) operating system that ran on System/370.

In the last decade, IT (information technology) departments of enterprises have begun to adopt a variety of virtualization technologies available as commercial products, ranging from server virtualization, storage virtualization, client (or desktop) virtualization to software virtualization, e.g., allowing Linux to run as a guest on top of a PC that is natively running a Microsoft Windows operating system. Such virtualization techniques allow multiple users and applications to dynamically share physical resources. Thus, they increase resource utilization and/or reduce electric energy consumption, as well as simplify complex administrative operations of IT.

Slide 10: Simply put, network virtualization chooses a subset of a collection of real (or physical) resources (routers, links, etc.) and functionalities (routing, switching, transport) of a real network (or multiple real networks) and combines them to form a logical network called a virtual network.
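
A minimal sketch of this selection-and-combination step might look as follows. It is illustrative only: the topology and function names are made up, and real virtualization platforms also handle bandwidth allocation, isolation and programmability, which are omitted here.

```python
# Minimal sketch of mapping a virtual network onto a physical topology
# (illustrative only; resource reservation and isolation are omitted).
from collections import deque

# Physical topology as an adjacency list of routers and links.
physical = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def shortest_path(graph, src, dst):
    """Breadth-first search over physical links; returns a node list or None."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return list(reversed(path))
        for nbr in graph[node]:
            if nbr not in prev:
                prev[nbr] = node
                queue.append(nbr)
    return None

# A virtual network ("slice") is a set of virtual links between chosen nodes;
# each virtual link is realized by a path through the underlying physical network.
virtual_links = [("A", "D"), ("B", "C")]
embedding = {vl: shortest_path(physical, *vl) for vl in virtual_links}
print(embedding)  # e.g. {('A', 'D'): ['A', 'B', 'D'], ('B', 'C'): ['B', 'A', 'C']}
```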

Slide 11: Virtual networks take different forms, depending on specific layers to which virtualization is applied. Here we illustrate what is termed “overlaid networks” (also known as “overlay networks”). Nodes in an overlaid network are connected by virtual links (or logical links) which are comprised of paths that are formed by combining multiple links in the network underneath. Distributed systems such as cloud computing, peer-to-peer (P2P) networks, and client-server applications (e.g., web browser and web server), can be viewed as overlaid networks running on the Internet. And the Internet itself is an overlaid network built on top of the telephone network.

Slide 12: Configuration of a Virtual Node

This slide shows the configuration of a “virtual node” designed by Prof. Akihiro Nakao’s group (The University of Tokyo and NICT), and implemented on JGN-X. The virtual node consists of two parts: one is called “Redirector,” which handles the conventional routing function, and the other is “Programmer,” which runs a program that implements the virtual node functions. Here, each “slice” corresponds to each “virtual network.”

Slide 13: Virtual node project and participating companies

The virtual node project has industrial partners, who are contributing greatly to turning the theory into practice. NTT is working on the domain controller, and Fujitsu on an access gateway which controls access to other networks (e.g., a cloud). Hitachi is responsible for a router with a custom hardware board for constructing virtual links, and NEC is developing a programmable environment at a node for flexible creation of network services. For details, see [7, 8].

Slide 14: Optical packet and optical path

As already remarked, in the future network environment a majority of end devices will be mobile devices and sensors, which are connected by wireless access networks. But for a core network that requires broad bandwidth, optical links and optical networks will be very important. When we talk about a network architecture, we often say that the architecture should be independent of technologies, while its implementation may depend on available technologies. But this simplistic argument does not hold for an optical network architecture, which is quite different from that of wired or wireless networks. The main reason is that, unlike the electric signals that wired and wireless networks deal with, optical signals do not yet have inexpensive random access memory or the logic circuits needed to build an arithmetic logic unit (ALU).

Packet switching is based on asynchronous time division multiplexing (ATDM, or statistical time division multiplexing), and with today’s optical technology it is not possible to switch or route multiplexed optical signals as they are. While the “payload” portion of the signal may remain in the optical domain, the packet header must be translated into an electric signal. We often use optical delay circuits or lines as buffers and try to maintain the high speed of optical signals. In order to make the best use of the speed of optical signals, wavelength division multiplexing (WDM) must be adopted. But WDM provides circuit switching, like frequency division multiplexing (FDM) and synchronous time division multiplexing. An end-to-end circuit that involves wavelength routers at intermediate nodes is referred to as an optical path.

Slide 15: Integrated optical packet and optical path system

In the NwGN architecture, we take advantage of our strength in optical technology and propose an architecture that integrates an optical packet switching system and an optical path circuit switching system. As shown in this slide, telemedicine, which requires real-time transmission of high-definition video, is an ideal application example of an optical path system. DCN (Dynamic Circuit Network), which will be mentioned in the discussion of the JGN-X testbed, is also a network that integrates the Internet with packet switching and optical circuit switching.

III. JGN-X: Testbed for NwGN

Slide 16: JGN-X network overview

NICT’s testbed effort for NwGN is called JGN-X, an evolutionary outgrowth of JGN (Japan Gigabit Network), which started in the year 2000 as a testbed for large-capacity networking. As its speed and capacity increased, the name changed to JGN2 (which supported a multicast environment and IPv6), then JGN2plus, and finally the JGN-X project started in fiscal year 2011, where X stands for “eXtreme.”

The JGN-X testbed of NICT implements network control by OpenFlow and DCN (Dynamic Circuit Network), as well as the network controlled by the virtual nodes (also called the “virtual node plane”).

Here the term “plane” is used as an abbreviation of a “control plane architecture.”

In other words, the JGN-X allows us to pursue an architectural study of the above three types of virtual networks.

The control scheme in the conventional Internet is primarily based on routing using IP addresses, whereas OpenFlow aims to improve the quality of service and increase the efficiency of the network by performing routing control at the level of individual flows, where a “flow” is defined as a communication determined by the combination of the MAC addresses, IP addresses and port numbers involved. NEC, a founding member of the OpenFlow Consortium, is developing a “programmable flow switch.” DCN integrates the packet-switching-based Internet and an all-optical network that performs on-demand circuit switching using the aforementioned wavelength division multiplexing (WDM). It is used in such applications as remote medical systems (i.e., telemedicine), the Large Hadron Collider (LHC) project at CERN in Switzerland, and other advanced science fields.
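
Returning to the notion of a flow defined above, here is a toy illustration of flow-level control. The data structure below is hypothetical and far simpler than a real OpenFlow switch, which matches many more header fields and supports wildcards, priorities and counters; it merely shows a forwarding table keyed by the MAC/IP/port combination.

```python
# Toy flow table keyed by the (MAC, IP, port) combination that defines a "flow"
# (illustrative only; not NEC's product nor the full OpenFlow match structure).
from collections import namedtuple

FlowKey = namedtuple(
    "FlowKey",
    ["src_mac", "dst_mac", "src_ip", "dst_ip", "src_port", "dst_port"],
)

flow_table = {}  # FlowKey -> forwarding action (e.g., an output port)

def install_rule(key, action):
    """Controller installs a per-flow forwarding rule in the switch."""
    flow_table[key] = action

def forward(packet_key):
    """Switch looks up the flow; unknown flows would be sent to the controller."""
    return flow_table.get(packet_key, "send-to-controller")

key = FlowKey("00:11:22:33:44:55", "66:77:88:99:aa:bb",
              "10.0.0.1", "10.0.0.2", 49152, 80)
install_rule(key, "output:port-3")
print(forward(key))                          # output:port-3
print(forward(key._replace(dst_port=443)))   # send-to-controller (no matching rule)
```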

Slide 17: JGN-X International Circuits

As this slide indicates, JGN-X is connected not only with various groups within Japan but also with the networking communities of the world.

Slide 18: Research around JGN-X

The JGN-X group also collaborates with the communities of advanced networking and cloud computing. It also provides an emulation environment for HPC (high performance computing). The objective of JGN-X is not only to provide an environment for research and development of the NwGN technologies, but also to provide one for the development of network applications of the future.

 

Slide 19:

IV. Challenges in the Future Internet Research

The History and Culture of the Internet Research

Now I change gears and present my personal questions, speculations and suggestions concerning the challenges in the future Internet research.

Although I talked exclusively about the NwGN project of NICT, there are a number of significant, perhaps more significant, research efforts taking place in the U.S., Europe and elsewhere, and I will defer to Mr. Chip Elliot, Dr. Peter Freeman, Prof. Raychaudhuri and other speakers in this conference and workshop for discussion of some of these efforts.

The NSF’s FIA (Future Internet Architecture) program supports MobilityFirst (Rutgers and 7 other universities), Named Data Networking (NDN; UCLA and 10 other universities), eXpressive Internet Architecture (XIA; CMU and 2 other universities), and NEBULA (U. of Penn and 11 universities). Each FIA project has its own comprehensive website where you can find more information than you could possibly digest. A recent survey paper in the July 2011 issue of IEEE Communications Magazine provides a good introduction to the FIA, GENI and EU programs. The article also allocates about half a page to AKARI and JGN-X. See J. Pan, S. Paul and R. Jain, “A Survey of the Research on Future Internet Architectures,” IEEE Communications Magazine, July 2011, pp. 26-35.

NSF also funds a testbed program called the GENI (Global Environment for Network Innovations) program (2005-present) , which is managed by Mr. Chip Elliot of BBN Technologies, who holds quarterly meetings/workshops, called GEC (GENI Engineering Conference). I have attended several GEC meetings in the past three years, and I have been impressed by how fast each of the four testbed groups (called “GENI Control Framework” or simply “clusters”) has been making progress. The following four clusters (lead institutions) are currently supported: PlanetLab (Princeton University), ProtoGENI (Univ. of Utah), ORCA (Duke University and RENCI-Renaissance Computing Institute) and ORBIT (Rutgers University).

In Europe, a collaboration under FP7 (the Seventh Framework Programme) on Future Internet research is referred to, somewhat confusingly, as the Future Internet Assembly (FIA). The EIFFEL (European Internet Future for European Leadership) program and the Future Internet Public-Private Partnership (FI-PPP) were launched in 2006 and 2011, respectively. As we assemble here, Germany has been sponsoring G-Lab (German Laboratory) through the BMBF (Bundesministerium für Bildung und Forschung; Federal Ministry of Education and Research), in addition to its participation in the aforementioned EU efforts.

As the slide on the “JGN-X international circuits” (Slide 17) implies, some of the testbeds of GENI and G-Lab are already connected to JGN-X, and many more will be connected; I am sure the same can be said of GENI, G-Lab, and the others. Valuable exchanges of information on novel architectures and protocols are regularly held, like the one we are having at this Euroview.

Nevertheless, I fear that it will be extremely difficult, if not impossible, for all key players in the future Internet research community to come up with a universally agreeable architecture. First, there is the issue of the so-called NIH syndrome, where NIH stands for “Not Invented Here.” In other words, our egos, as well as economic and political considerations, tend to make us reluctant to admit that other people’s ideas may be better than our own.

Another question: Will the backward compatibility with the existing Internet and applications be a decisive factor? Or can we agree on an “optimal” clean-slate architecture first, and then try to figure out the best feasible migration strategy? Or do we continue letting the existing Internet run, at least for a while, as one of the virtual networks to be supported together with the new Internet(s)?

Slide 20: How to Evaluate Architectures?

Coming up with a quantitative comparison of one network architecture against another is a rather difficult proposition. Will the complexity of any of the candidate future networks be too great for us to comprehend? Our inability to quantitatively characterize a network architecture seems to stem not only from the limited state of affairs in mathematical modeling techniques, but also from the character, culture and history of the Internet community.

Slide 21: TCP/IP Networks

As you well know, the original TCP/IP network provided merely “best effort” service, and performance guarantees were not an important issue; this historical aspect seems to dictate the culture and mentality of the Internet community even today. In the Internet literature, there have been very few quantitative characterizations and discussions. The researchers and implementers are primarily concerned with what the system and applications deliver, but not so much with evaluating how well or poorly the system works compared with alternatives or against some “theoretical” limit. There are, of course, some exceptions, such as [14], which I will refer to in a minute.

The fields of performance analysis and optimal control of resource allocations were active and thriving in the 1970s through 1990s. There was a strong need and a big payoff in designing and operating an optimal multiprogramming time-sharing computer, which had to serve many users under the constraints of physical resources.

As the computing paradigm shifted from the client-server model to the peer-to-peer model, and as powerful workstations and PCs with fast processors, abundant storage, and much broader communication bandwidth became available at a fraction of the cost of a generation ago, there has been little need or incentive to attain optimal performance through insightful analysis and clever control algorithms. Quantitative analysis of a system (even a back-of-the-envelope calculation) or simulation of a network system in a controlled environment has been replaced by quick prototyping of a target system.

Slide 22: B-ISDN vs. the Internet

In the research and development of B-ISDN, centered around ATM fast packet switching and hailed in the 1980s and 1990s as the vision of multimedia services for the 21st century, performance modeling and analysis of networks was very active. Unfortunately, the B-ISDN camp lost the race against the Internet camp, not because they were interested in performance modeling and analysis, but because they were slow in coming up with what were called “killer applications,” i.e., new and attractive applications. The closed and centrally controlled architecture of B-ISDN lost the game to the open architecture of the IP network, whose so-called “end-to-end design principle” allowed many applications built around the WWW to be rolled out to the consumer market [10].

It would be an interesting “Gedankenexperiment” to ponder what the world would be like today if B-ISDN had taken control of the telecommunications market of the 21st century, as the telecommunication carriers of the world once envisioned. Social networks such as Facebook and Twitter might not exist yet, and hence the revolutions in Egypt and other countries with dictatorial regimes might not have occurred.

The radical “computer trading” that runs on computers connected to the Internet might not have developed without safeguards, and might not have triggered the market crash of September 15, 2008 (the so-called “Lehman Shock”) or the “flash crash” of May 6, 2010. The world would not be threatened by the kind of cyber attacks we witness today. There would be no need to work on the future Internet, and most of us assembled here today would be working on research papers on better algorithms and performance improvements for ATM switches. I would be rich, with a lot of royalties flowing in and my books selling like hotcakes. It is too bad.

Slide 23: Over-reliance on Testbeds?

Although we are now dominated by the Internet and its culture, my own conviction is that prototyping and testbeds alone will never lead us to a quantitative understanding of system performance, reliability and security. Up to now, our inability to analyze and tune network performance has been compensated for by over-dimensioning of the network, which was possible because technological improvements and cost reductions in network components such as processors, memory and communication bandwidth have been able to match the phenomenal growth in Internet users and the insatiable appetite of new applications for resources. But there is no guarantee that the cost/performance figures of network components will continue to improve geometrically as they have in the past, and the energy consumption of IT systems is now a serious concern, as listed in Slide 5.

Modeling and Analysis Issues of the Future Internet

Slide 24: Modeling and Analysis Issue

Network virtualization is certainly a very powerful tool that allows us to test multiple candidates for new network architectures and protocols in parallel. This technology should ultimately help us migrate from the existing Internet to new one(s). But as it stands now, very little attention and effort seem to be paid to the performance aspects of each “slice” network, or to the performance limits and constraints of virtual networks. After all, network virtualization is nothing more than a form of (statistical) sharing of physical resources. A virtual network can be viewed as a network of processor sharing (PS) servers.

Slide 25: Processor sharing (PS) (see e.g., [11, 12, 13]) has been proven to be a very powerful mathematical abstraction of “virtual processors” in a time-shared system. Similarly, it can represent “virtual links or circuits,” i.e., multiplexed streams of packets or data over a communication channel. A link congested by TCP flows can be modeled as a PS server.

Slide 26: Processor sharing (PS), continued:

The so-called “fair scheduling” can be viewed as a discipline that emulates processor sharing.

Processor sharing often leads to a very simple performance analysis, because of its robustness or insensitivity to statistical properties of traffic load.

N. Dukkipati et al. [14] compare the performance of TCP/IP algorithms against the theoretical limit implied by a processor-sharing model. I believe that more of this kind of analysis should be practiced by networking researchers.
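
For example, the kind of benchmark such a comparison needs is easy to state: under the standard M/G/1 processor-sharing model, the mean completion time of a flow of size x on a link of capacity C and utilization ρ is x / (C(1 − ρ)), regardless of the flow-size distribution. The small sketch below simply evaluates that textbook formula as a reference target; it is an illustration, not the exact model used in [14].

```python
# Processor-sharing benchmark for flow completion time (a minimal sketch of the
# standard M/G/1-PS conditional mean sojourn time; not the exact model of [14]).

def ps_mean_completion_time(flow_size_bits, link_capacity_bps, utilization):
    """E[T | flow size x] = x / (C * (1 - rho)) under M/G/1-PS.
    Note the insensitivity: only the mean load matters,
    not the full flow-size distribution."""
    assert 0 <= utilization < 1, "the link must not be overloaded"
    return flow_size_bits / (link_capacity_bps * (1 - utilization))

# A 1 MB flow on a 100 Mb/s link at 60% load: the PS "theoretical" target
# against which a TCP variant's measured completion times can be compared.
print(ps_mean_completion_time(8e6, 100e6, 0.6))   # ~0.2 seconds
```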

Slide 27: Loss Network Model

The loss network theory (see e.g. [15, 12, 13]) is a rather recent development, and it is a very general tool that can characterize a network with resource constraints which supports multiple end-to-end circuits with different resource requirements. It can be interpreted as a generalization of the classical Erlang or Engset loss models, and its insensitivity and robustness against the network traffic or load characteristics make this characterization very powerful.
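
As a reminder of the classical single-link special case that loss network theory generalizes, the Erlang B blocking probability can be evaluated with the standard textbook recursion. The snippet below is shown only to illustrate how such performance figures are computed; it is not the loss-network generalization itself.

```python
# Classical Erlang B blocking probability via the standard recursion
# B(0) = 1,  B(k) = a*B(k-1) / (k + a*B(k-1))   (textbook formula).

def erlang_b(offered_load_erlangs, num_circuits):
    """Blocking probability for Poisson traffic of 'a' Erlangs on m circuits."""
    b = 1.0
    for k in range(1, num_circuits + 1):
        b = offered_load_erlangs * b / (k + offered_load_erlangs * b)
    return b

# Blocking probability for 10 Erlangs offered to 15 circuits (roughly 3.6%).
print(erlang_b(10.0, 15))
```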

Slide 28: Performance Analysis

Performance measures such as “blocking probability” or “call loss rate” are represented in terms of the normalization constant (or the “partition function” of thermodynamics or statistical mechanics), just as performance measures such as server utilization, throughput and average queueing delay in a queueing network model are represented in terms of its normalization constant.

The complexity of an exact computation of the normalization constant grows exponentially as the network size (i.e., the number of nodes and links, the bandwidths, the buffer sizes, and the router or switch speeds) and/or the number of users (i.e., end-to-end connections to be supported by the network) grow. Fortunately, however, in such a regime an asymptotic analysis (see e.g. [16, 12]) becomes more accurate and often lends itself to a closed-form expression.

Slide 29: Open Loss Network (OLN)

Here we show what we call an open loss network (OLN), where the path of a call is open.

The number of links L in this example network is five, i.e., L = 5.

We define a call class as r=(c, τ), where c is the routing chain, and τ is a call type.

Slide 30: Generalized Erlang Loss Model

Then any given OLN can be represented by the loss station shown in this slide, where L, which was the number of links in the OLN, is now the number of server types.

A call of class r holds A_{l,r} lines simultaneously at link l, i.e., A_{l,r} servers of type l.

m_l = number of lines available at link l, i.e., the number of servers of type l.

The link capacities impose the constraints Σ_r A_{l,r} n_r ≤ m_l, for l = 1, 2, …, L, where n_r denotes the number of class-r calls in progress.

We have a simple closed-form expression for the joint distribution of the numbers of calls of the different classes in progress in the network, in terms of the normalization constants G. The blocking and call loss probabilities can also be found in terms of the G’s.
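
For the special case of a single link shared by several call classes, the occupancy distribution and per-class blocking probabilities can be computed with the well-known Kaufman-Roberts recursion. The sketch below illustrates that single-link case only; the full OLN analysis in terms of the normalization constants G, as in [12], is more general.

```python
# Kaufman-Roberts recursion for a single link of m lines shared by several call
# classes (a sketch of the single-link special case only; the general OLN
# computation via the normalization constants G in [12] goes beyond this).

def kaufman_roberts(m, classes):
    """classes: list of (offered_load_erlangs, lines_per_call) pairs.
    Returns the link occupancy distribution q[0..m] and the per-class blocking
    probabilities (a class needing b lines is blocked whenever fewer than b
    lines are free)."""
    q = [0.0] * (m + 1)
    q[0] = 1.0
    for n in range(1, m + 1):
        q[n] = sum(a * b * q[n - b] for (a, b) in classes if n >= b) / n
    total = sum(q)                       # plays the role of a normalization constant
    q = [x / total for x in q]
    blocking = [sum(q[m - b + 1:]) for (_, b) in classes]
    return q, blocking

# Two classes on a 20-line link: narrowband calls (1 line each, 8 Erlangs)
# and wideband calls (5 lines each, 1 Erlang).
q, blocking = kaufman_roberts(20, [(8.0, 1), (1.0, 5)])
print(blocking)
```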

Slide 31: Mixed Loss Network (MLN)

A mixed loss network (MLN) depicted here can be viewed as a generalized Engset loss model. The network state distribution and the performance measures are again representable in terms of the normalization constants, and computational algorithms have been developed.

For large network parameters, recursive computation of the normalization constants may be prohibitive. However, the generating function of the normalization constant sequence can be obtained in a closed form and its inversion integral can be numerically evaluated. For very large network parameters, which will surely be the case for the future network, asymptotic approximation of the inversion integral will be applicable with high accuracy [16, 12].

Slide 32: Queueing and Loss Network

Finally, the diagram shown here illustrates the concept of a queueing-loss network (QLN), which contains both queueing subnetwork(s) and loss subnetwork(s). The network state distribution and the network performance measures can again be expressed in terms of the normalization constants.

The integrated optical packet switching and optical path system we discussed in Slide 15 can be formulated as a QLN. Please refer to [12] for a detailed discussion.

Slide 33: For Further Information

For my further discussion on the modeling and analysis aspects, please refer to a forthcoming presentation at the ITC 24 to be held in Krakow, Poland in September 2012 [17].

Acknowledgments:

I thank Dr. Hiroaki Harai, Dr. Ved Kafle and Dr. Eiji Kawai of NICT, and Prof. Akihiro Nakao of the University of Tokyo and NICT, for their great help in preparing and improving this speech.

References

[1] NICT, “New Generation Network Architecture AKARI: Its Concept and Design (ver2.0),” NICT, Koganei, Tokyo, Japan, September 2009. http://akari-project.nict.go.jp/eng/concept-design/AKARI_fulltext_e_preliminary_ver2.pdf

 

[2] T. Aoyama, “A New Generation Network: Beyond the Internet and NGN,” IEEE Commun. Mag., Vol. 47, No. 5, pp. 82-87, May 2008.

 

[3] N. Nishinaga, “NICT New-Generation Network Vision and Five Network Targets,” IEICE Trans. Commun., Vol. E93-B, No. 3, pp. 446-449, March 2010.

[4] J. P. Torregoza, P. Thai, W. Hwang, Y. Han, F. Teraoka, M. Andre, and H. Harai, “COLA: COmmon Layer Architecture for Adaptive Power Control and Access Technology Assignment in New Generation Networks,” IEICE Transactions on Communications, Vol. E94-B, No. 6, pp. 1526–1535, June 2011.

 

[5] V. P. Kafle, H. Otsuki, and M. Inoue, “An ID/Locator Split Architecture for Future Networks,” IEEE Communications Magazine, Vol. 48, No. 2, pp. 138–144, February 2010.

 

[6] ITU-T SG13, “Future Networks Including Mobile and NGN,” http://itu.int/ITU-T/go/sg13

 

[7] A. Nakao, “Virtual Node Project: Virtualization Technology for Building New-Generation Networks,” NICT News, No. 393, June 2010, pp. 1-6. http://www.nict.go.jp/en/data/pdf/NICT_NEWS_1006_E.pdf

[8] A. Nakao, A. Takahara, N. Takahashi, A. Motoki, Y. Kanada and K. Matoba, “VNode: A Deeply Programmable Network Testbed Through Network Virtualization,” submitted for publication, July 2012.

[9] H. Furukawa, H. Harai, T. Miyazawa, S. Shinada, W. Kawasaki, and N. Wada, “Development of Optical Packet and Circuit Integrated Ring Network Testbed,” Optics Express, Vol. 19, No. 26, pp. B242–B250, December 2011.

[10] H. Kobayashi, “An End to the End-to-End Arguments,” Euroview 2009, Würzburg, Germany, July 2009.

[11] L. Kleinrock and R. R. Muntz, “Processor-sharing queueing models of mixed scheduling disciplines for time-shared systems,” J. ACM, Vol. 19, No. 3, pp. 464-482, 1972.

[12] H. Kobayashi and B. L. Mark, System Modeling and Analysis: Foundations of System Performance Evaluation, Pearson/Prentice Hall, 2009.

[13] H. Kobayashi, B. L. Mark and W. L. Turin, Probability, Random Processes and Statistical Analysis, Cambridge University Press, 2012.

[14] N. Dukkipati, M. Kobayashi, R. Zhang-Shen and N. McKeown, “Processor Sharing Flows in the Internet,” in H. de Meer and N. Bhatti (Eds.) IWQoS 2005, pp. 267-281, 2005.

 

[15] F. P. Kelly, “Loss Networks (invited paper),” Ann. Appl. Probab., Vol. 1, No. 3, pp. 319-378, 1991.

 

[16] Y. Kogan, “Asymptotic expansions for large closed and loss queueing networks,” Math. Prob. Eng. Vol. 8, No. 4-5, pp. 323-348, 2003.

 

[17] H. Kobayashi, “Modeling and Analysis Issues in the Future Internet,” Plenary Lecture at the 24th International Teletraffic Congress (ITC24), Krakow, Poland, September 4th-7th, 2012

 

 
