Hisashi Kobayashi's Blog

Sherman Fairchild University Professor Emeritus of Electrical Engineering and Computer Science, Princeton University

Remembrances of Shoshichi Kobayashi (小林昭七)

September 14th, 2012

Remembrances of

Shoshichi Kobayashi (小林昭七)

January 4, 1932 – August 29, 2012

 

On August 29, 2012 my brother Shoshichi Kobayashi, Emeritus Professor of Mathematics at the University of California at Berkeley, died peacefully in his sleep. The memorial service was held on September 8, 2012 at Sunset View, El Cerrito, California, presided by Reverend Philip Brochard and Reverend Kristin Krantz.

The service was attended by a number of Shoshichi’s colleagues, former students, family friends as well as his wife Yukiko Grace Kobayashi, his daughters, Sumire and Mei, Sumire’s husband, Philip Chou and their children, Andrew and Brendan, my wife Masae Kobayashi and myself.

The readers were Andrew Chou, grandson, and Yukiko Kobayashi, wife. The musicians were Ms. Miwako Tomizawa, violin, and Ms. Kuniko Weltin-Wu, piano.

Here is a table of contents that includes the four remembrance speeches given at the service, and other materials concerning Shoshichi. Clicking any item will let you jump to that section.

I. Remembrances

Hisashi Kobayashi, Brother

Mei Kobayashi, Younger Daughter

Prof. Alan D. Weinstein, Colleague

Prof. Arthur E. Ogus, Chair of Mathematics Department

II. Condolence Letters

Prof. Heisuke Hironaka and Mrs. Wakako Hironaka

III. Shoshichi Kobayashi, Mathematician: 1932-2012

Biography

List of Books

List of Publications

IV. The Shoshichi Kobayashi Memorial Fund (小林昭七記念基金)

Online Donation (オンラインによるご寄付)

Donation by Check (小切手郵送によるご寄付)

I. Remembrances

Hisashi Kobayashi (小林久志), Shoshichi’s Younger Brother

Reverend Brochard and Reverend Krantz, Ladies and Gentlemen.

On behalf of the Kobayashi family, I would like to express our sincere thanks for kindly attending the funeral service of my brother, Shoshichi Kobayashi.

Shoshichi was born on January 4th, 1932 as the first child of our parents, Kyuzo and Yoshie Kobayashi in Kofu City, Yamanashi Prefecture. Soon after his birth the family moved to Tokyo to start a business because they found such an opportunity was limited in Kofu at that time, when Japan was still in the midst of the Great Depression. The second son, Toshinori, the third son, Hisashi, that is me, and the fourth son, Hisao, were born three years apart. I am not sure whether our parents planned to produce children every three years, but this regular periodic sequence was interrupted during the war, so their fifth son, Kazuo, was born six years after Hisao. Unfortunately, Hisao died when he was only two years old, and Kazuo died soon after graduating from college. My second brother, Toshinori in Japan, is regrettably unable to join us here today because of his poor health.

Since Shoshichi and I were six years apart, I don’t recall that we played together as children. He was always my mentor and role model, and I am really fortunate to have had such a great brother. He was extraordinarily generous with his time in encouraging Toshinori and me to excel academically.

As B-29 bombers began to threaten Tokyo in 1944, we frequently had to run into a “Bokugo,” or an underground shelter. Shoshichi was in the sixth grade at elementary school, and always carried mathematics books and candles with him. In the spring of 1945, our whole family decided to evacuate from Tokyo, and moved to Minami-Saku, Nagano Prefecture. Shoshichi attended Nozawa “Chugakko” (or Middle School) there. In the Japanese education system at that time, entering one of the eight so-called “Number Higher Schools” was most competitive. Advancing from one of these Number Schools to one of the Imperial Universities was less difficult.

No. 1 Higher School (called “Daiichi Koto Gakko” or “Ichiko” for short) in Tokyo was the most difficult Higher School to get into. The middle school at that time required five years of schooling, but students were allowed to take an entrance exam in their fourth year. But only a handful of brilliant students could pass the competitive exam. Shoshichi was successfully admitted to Ichiko in his fourth year at Nozawa Middle School. This was an unprecedented achievement by any student at the Nozawa Middle School, so Shoshichi became a legendary figure of the School. At that time I was a fourth grader at Elementary School. Our family was congratulated by everyone in the village.

In the fall of 1948, six months after Shoshichi entered Ichiko, our family finally moved back to Tokyo. When he came home from his dormitory on weekends, he often took me to a “Furuhonya” (used book store) where he found appropriate math books for me to study. I was ten years old, a fourth grader.

Around this period he also taught me Franz Schubert’s Heidenröslein (Wild Rose). I just memorized the song like a parrot without knowing anything about the German words. I can still recite the song from memory. In fact this is one of the few songs for which I know the lyrics as well as the melody. (“Sah ein Knab’ ein Röslein stehn, Röslein auf der Heiden, War so jung und morgenschön, Lief er schnell, es nah zu sehn, Sah’s mit vielen Freuden. Röslein, Röslein, Röslein rot, Röslein auf der Heiden.”)

In the spring of 1951, when Shoshichi started his junior year at Tokyo University (Ichiko had become Tokyo University’s Junior College), our parents finally bought a house in Setagaya-ku, Tokyo. The house was bigger than the one we rented in Kichijoji, so Shoshichi moved out of the dormitory and lived with us. I was a good student and my parents were completely happy with my performance, but Shoshichi was very demanding. He ordered me to attend an evening English class at Aoyama Gakuin in Shibuya, three times a week. On days when I had no evening class, I sometimes went to a local movie theater with my friends to see John Wayne cowboy movies, and Shoshichi reprimanded me, saying “Hisashi, you are wasting your time. You should study.”

Shoshichi graduated from Tokyo University at age 21. During his senior year, he won a French Government scholarship for graduate study in France. So in the summer of 1953, he left Yokohama by ship for France. But his role as my mentor did not stop there. Before he left for France, he bought me a Japanese translation of “A Survey of Modern Algebra” by the Harvard professors Birkhoff and MacLane, and instructed me to study one chapter per week and send him my solutions to the exercise problems by airmail. He corrected the errors in my solutions and sent them back by airmail. So he continued to be my teacher even after he left Japan. He must have been very busy with his own studies in France, but he was very generous about spending his time to educate me.

After a year’s study at mathematical institutes in Strasbourg and Paris, he moved to the U.S. in 1954, having been admitted to the Ph.D. program of the University of Washington in Seattle, where he received his Ph.D. in less than two years, at age 24. During this period he told me that mastering foreign languages was important and that I should start studying German. So I was enrolled at Takada Gaigo, a foreign language institute near my high school.

In the spring of 1956 our parents received a letter from Shoshichi, announcing that he was going to marry Ms. Yukiko Grace Ashizawa. I was surprised to find that he was interested in marrying a woman, because until then I thought all that he cared about was studying mathematics, and teaching mathematics to me. I don’t think he had any girlfriend when he was a student at Todai. His letter included a beautiful portrait of Yukiko. I wrote him back, saying “I am happy for you, and I am impressed that you have found such a beautiful woman as your future wife. I will support your decision, regardless of our parents’ reaction.” Our parents seemed caught by surprise too. Our father visited the temple of the Ashizawa family and was satisfied to find that they had a distinguished “Haka,” or grave. So he was convinced that Yukiko-san must be a daughter of a respectable family. Our father was very proud of the Kobayashi family’s ancestors and impressive “Haka.”

I think Shoshichi’s character changed significantly after his marriage to Yukiko-san. In almost all photos taken after the marriage, he is smiling or laughing. I don’t recall seeing his smile often when he was in Japan. He always looked serious. After he married Yukiko-san, he never said anything critical to me such as “Hisashi, you are wasting your time.” I am thankful to Yukiko-san for transforming Shoshichi into a well-rounded and tolerant person.

I think that he led a very happy and gratifying life, surrounded by his cheerful wife, two loving daughters, Sumire and Mei, a very thoughtful son-in-law, Phil Chou, and two promising grandsons, Andrew and Brendan. He would have written a few more books had he been able to live several more years, as we had expected, but ending one’s life during sleep, as he did, is the most peaceful way to depart from this world. In this sense I am happy for him. We all miss him dearly, but Kobayashi’s theorem, Kobayashi’s metric, his fifteen books and numerous research papers are here to stay forever. He had a great life, and we are proud of being part of it, and will cherish our fond memories of him for many years to come.

Mei Kobayashi (小林メイ), Shoshichi’s Younger Daughter

My first memory of my father was our annual fall event – getting dressed up with my sister to go to the UC campus to be photographed for my parents’ upcoming Christmas card. Weeks before the event, my mother spent hours sewing us matching dresses then finding lace bobby socks and patent leather shoes. Sumire, being the A+ student that she was, always cooperated. Me? Well, … My parents found it a challenge to get me dressed and an even greater challenge to get me to sit still for 2 to 3 rolls of film (that is, 2 to 3 dozen photos). Ancient cameras of yesteryear consumed 3 square inches of film per photo so only a dozen could fit on each tall roll.

My second memory of my father is on my first day of nursery school up in the Berkeley Hills. As he dropped me off, I begged him not to leave. He was scheduled to rush off to the University to work, but he parked the car and stayed an hour or so until I met and started playing with other children. A few days later he bought me a beautiful square lunch box with a matching thermos bottle and cup. It was white and adorned with pink flowers in a lace pattern. I could now walk in every morning as a fashionable young lady!

Around elementary school, we started having dinner guests on a regular basis. To make sure I would learn table manners, I had to sit next to my father for breakfast, lunch and dinner. “Sit still. And don’t let your pigtails dangle onto my dinner plate”, he would say whenever I leaned over to whisper a secret to Sumire. Sitting next to my father ended up becoming an educational experience in a completely unrelated matter – mathematics. I am not sure how or when the practice started, but he taught my sister and me mathematics at the breakfast table every summer morning. When we became too uncooperative, he instituted a policy. We would receive 5 cents per page each time we completed a chapter and finished all of the exercises at the end. When we got a little older and more rebellious, he revised his policy to a whopping 10 cents per page, but we were required to deposit 50% of our earnings in the bank to save up for college. We were quite naïve at the time, and were quite pleased with ourselves for having negotiated what we thought was a fantastic deal that doubled our earnings.

(We fast forward several years, bypassing acne and other adolescent perils.)

Before we went off to college, we were surprised when our father told us that we were now adults and responsible for ourselves. The temptation to keep clinging on as a parent would be too great if we stayed in Berkeley. “Just as children must outgrow childhood, parents must outgrow parenthood”, he said. My last memory from my childhood is at Oakland Airport. My father is standing with my mother by his side. Both are desperately trying to look happy, confident and reassuring. They are smiling and waving good-bye as I board a plane to Newark to go off to study at Princeton.

Prof. Alan D. Weinstein, Colleague, Mathematics Department

I have known Shoshichi Kobayashi since the 1960’s, when I started here as a graduate student. I have been a faculty member since 1969, and it is partly thanks to Sho that I am still here. He was chair when I had an offer from Caltech in the late 1970’s. He very effectively convinced me to forsake sunny Southern California and return to Berkeley, on attractive terms which he negotiated on my behalf. Part of the arrangement was for me to serve as his Vice-Chair for Faculty Appointments for a year upon my return. This may not sound like much of a prize to many of you who have done that kind of administrative job recently. But, in fact, Sho himself did much of the work which other chairs delegated to their vice-chairs, so I was very lucky.

I’m very glad that things worked out as they did; among other things, it gave Margo and me many opportunities to enjoy the company of Sho and his wife, whom we always knew by her very appropriate English name of Grace.

Sho has left a most impressive mathematical legacy in the form of a roster of 35 Ph.D. students, a long list of contributions to differential geometry, and many influential monographs.

Perhaps the most well-known mathematical object bearing his name is the “Kobayashi pseudometric,” which he introduced in 1967. Despite a name which makes it sound like something fake, this is a real measure of distance which quickly became in Sho’s hands, and remains throughout the mathematical world, an essential tool for the study of mappings between and within complex manifolds.

These are spaces, some of whose directions are parameterized by “imaginary numbers”, but that is not where the “pseudo” comes from. The “pseudo” refers to the fact that, in some spaces, two different points could have zero distance between them. Sho identified the absence of this undesirable property as one which characterized certain “good” spaces which he called “hyperbolic” and which are known as “Kobayashi hyperbolic.”

Sho’s work remained concentrated in the area of complex geometry, where he made a string of fundamental contributions throughout a career of over fifty years, but he worked in other areas of differential geometry as well. One of my own papers was a variation on a theme he created in a paper on positively curved manifolds.

Sho was a master of mathematical communication. He even wrote a paper called “How to write a mathematical paper (in English).” (It was written in Japanese.) More important, his books, especially the two-volume “Foundations of Differential Geometry” with Katsumi Nomizu, have taught differential geometry and complex geometry to generations of students and other researchers.

Sho was my personal agent for “opening Japan to the West.” Through his collaborator Takushiro Ochiai, I was invited to visit the University of Tokyo in the Spring of 1987, and Japan has become for me and Margo one of our two favorite destinations (along with France, where Sho himself made his first foreign mathematical visit). We have gone back many many times and even, a couple of times, benefited from the collection of equipment which he and Grace accumulated for the guest apartments of Keio University.

We share the grief of the Kobayashi family, especially Grace, Mei, and Sumi, whom we have long known, as well as other members whom we met just today. We are glad that Sho’s passing was a peaceful one of the kind we all hope for, after a long and fulfilling life, but we will also miss very much his generous friendship, his sense of humor, and the wonderful smile to which Hisashi referred earlier this morning. Fortunately, Sho lives on in the form of his magnificent mathematical legacy and our memories of a wonderful man.

Prof. Arthur E. Ogus, Chair, Mathematics Department

It is a sad but very great honor to attempt to express our Department’s enormous admiration of and appreciation for Shoshichi Kobayashi, a task which I am finding as momentous as any I have yet faced. Kobayashi was a major figure in the history of mathematics and of our department: a stellar colleague and mathematician and a heroic chairman. He had a brilliant career, having been appointed Assistant Professor in 1962 and rising rapidly to the rank of Full Professor by 1966. He was also a very kind man with a quiet strength and a disarming smile, whose company was simultaneously comforting and awe-inspiring. Of course I had heard of him long before I came to Berkeley, and when I arrived I was thrilled to meet him and attend some of his seminars. Sho was chairman of the department from 1978 to 1981, and was very kind to me and others. This was also at the time of the famous “space wars,” when the central campus administration was attempting, by means of obscurantist proclamations, formulas, and calculations, to take a large amount of space away from the math department. Calvin Moore, in his book on the history of our department, says “…through subtle and clever diplomacy, Sho succeeded in holding the loss to about ten percent of the total space….a victory.” I remember it somewhat differently: each time our department received a memo from the administration, Sho would post it in public on the bulletin board, along with a polite but thoroughly devastating rebuttal. This made for enormously amusing reading for members of the department, but was not so amusing for the administration. Sho’s meticulous work revealed to me then the difficulty and complexity of the role of chairman. I deeply wish he were still here to help me with his profound and kind wisdom. When I became chair, I asked him for general advice, based on his time as chair. He warned me not to try to do big things to make a name for myself. If I can do half of what he did for our department, I will be very proud.

 


II. Condolence Letters

Prof. Heisuke Hironaka (広中平祐教授) & Mrs. Wakako Hironaka (広中和歌子氏)

To my true friend and lifelong mentor, Shoshichi Kobayashi sensei.

I feel deeply sad to hear of Kobayashi-san’s passing. He was one year younger than me in age, but he was always ahead of me academically and intellectually. He became a university student one year earlier than I did, so that was a two-year jump ahead of me. When I was still working on my Ph.D. thesis, he was already a professional researcher and teacher. When I began thinking of my own marriage, he was already in a position to give me advice on the married life of a mathematician.

Somehow, simply and naturally, he was my advisor and mentor. If there exists anything called native wisdom or inborn maturity, he was the one who had it. His success as a mathematician and his precious friendship to me and to my wife appeared all so naturally, just as matter-of-factly as if we had a big brother who had been looking after us all the way.

Our deep sadness is that we don’t understand why he was destined to go ahead of us into the hereafter.

Heisuke and Wakako Hironaka
September 7, 2012

 


III. Shoshichi Kobayashi (小林昭七), Mathematician: 1932-2012

Biography of Shoshichi Kobayashi

Shoshichi Kobayashi, 80, Emeritus Professor of Mathematics at the University of California at Berkeley, died peacefully in his sleep on August 29, 2012. He was on the faculty at Berkeley for 50 years, and authored over 15 books in the areas of differential geometry and the history of mathematics.

Shoshichi studied at the University of Tokyo, receiving his B.S. degree in 1953. He spent one year of graduate study in Paris and Strasbourg (1953-54), and completed his Ph.D. at the University of Washington, Seattle in 1956. He was appointed Member of the Institute for Advanced Study at Princeton (1956-58), Postdoctoral Research Associate at MIT (1958-60), and Assistant Professor at the University of British Columbia (1960-62). In 1962 he joined the faculty at Berkeley and became Full Professor in 1966.

He was a visiting professor at numerous departments of mathematics around the world, including the University of Tokyo, the University of Mainz, the University of Bonn, MIT and the University of Maryland. Most recently he had been visiting Keio University in Tokyo. He was a Sloan Fellow (1964-66), a Guggenheim Fellow (1977-78) and Chairman of his Department (1978-81).

Shoshichi Kobayashi was one of the foremost contributors to the field of differential geometry in the last half of the twentieth century.

His early work, beginning in 1954, concerned the theory of connections, a notion basic to all aspects of differential geometry and its applications. Prof. Kobayashi’s early work was essentially in clarifying and extending many of Élie Cartan’s ideas, particularly those involving projective and conformal geometry, and making them available to modern differential geometers. A second major interest of his was the relation of curvature to topology, in particular on Kähler manifolds.

Throughout his career, Prof. Kobayashi continued to focus his attention on Kähler and more general complex manifolds. One of his most enduring contributions was the introduction in 1967 of what soon became known as the “Kobayashi pseudodistance,” along with the related notion of “Kobayashi hyperbolicity.” Since that time, these notions have become indispensable tools for the study of mappings of complex manifolds.
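In precise terms: for two points p and q on a complex manifold M, one considers finite chains of holomorphic disks joining them, and sets

\[
d_M(p,q) \;=\; \inf \sum_{i=1}^{k} \rho(a_i, b_i),
\]

where the infimum is taken over all chains of holomorphic maps \(f_i \colon \mathbb{D} \to M\) from the unit disk \(\mathbb{D}\) satisfying \(f_1(a_1)=p\), \(f_i(b_i)=f_{i+1}(a_{i+1})\) and \(f_k(b_k)=q\), and \(\rho\) denotes the Poincaré distance on \(\mathbb{D}\). The manifold \(M\) is Kobayashi hyperbolic precisely when \(d_M(p,q)>0\) for every pair of distinct points.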

Other areas in which Kobayashi made fundamental advances, into the twenty-first century, include the theory of complex vector bundles, intrinsic distances in affine and projective differential geometry, and the study of the symmetries of geometric structures using filtered Lie algebras.

Several of Shoshichi Kobayashi’s books are standard references in differential and complex geometry, among them his two-volume treatise with Katsumi Nomizu entitled “Foundations of Differential Geometry.” Generations of students and other scholars have learned the essentials of the subject from his books.

The following is a translation by Prof. Toshiki Mabuchi (Osaka University) of his 1992 description of Shoshichi Kobayashi’s work.

  1. His books “Foundations of Differential Geometry, Volumes I & II,” coauthored by K. Nomizu, are very popular not only among mathematicians but also among physicists.
  2. His book on hyperbolicity and transformation groups also influenced many mathematicians.
  3. He published more than one hundred papers, which have received an exceptionally large number of citations.
  4. His mathematical achievements range across differential geometry, Lie algebras, transformation groups and complex analysis. The most important ones are:
     1. Kobayashi’s intrinsic pseudo-distance and its distance-decreasing property for holomorphic mappings;
     2. Kobayashi hyperbolicity;
     3. Measure hyperbolicity and the generalized Schwarz lemma;
     4. Projectively invariant distances in affine and projective geometry;
     5. The study of compact complex manifolds with positive Ricci curvature and Kobayashi-Ochiai’s characterization of complex projective spaces and hyperquadrics;
     6. Filtered Lie algebras and geometric structures;
     7. The study of Hermitian-Einstein holomorphic vector bundles and the Kobayashi-Hitchin correspondence.

In (1), (2) and (3), we see his extremely high originality; (5) led succeeding mathematicians to Frankel’s conjecture; and (7) has had great impact on algebraic geometry as well as differential geometry: Tian-Donaldson-Yau’s conjecture on K-stability and the existence of Kähler-Einstein metrics is still a central problem in complex geometry.

Books authored by Shoshichi Kobayashi (小林昭七)

  1. Foundations of Differential Geometry Vol. I (with Katsumi Nomizu), Wiley & Sons, 1963/1996.
  2. Foundations of Differential Geometry Vol. II (with Katsumi Nomizu), Wiley & Sons, 1969/1996.
  3. Hyperbolic Manifolds and Holomorphic Mappings, an introduction, Marcel Dekker, 1970, World Scientific, 2005.
  4. Transformation Groups in Differential Geometry, Springer-Verlag, 1972/1995.
  5. Differential Geometry of Curves and Surfaces, Shokabo, 1972 (in Japanese). 曲線と曲面の微分幾何 (1989, 1995), 裳華房
  6. Complex Differential Geometry (with H. H. Wu), Birkhäuser Verlag, 1983.
  7. Differential Geometry of Complex Vector Bundles, Publications of the Mathematical Society of Japan, No. 15, Iwanami Shoten and Princeton University Press, 1987.
  8. Differential Geometry of Connections and Gauge Theory, Shokabo, 1989, 1995 (in Japanese). 接続の微分幾何とゲージ理論(1989, 1995), 裳華房
  9. From Euclidean Geometry to Modern Geometry, Japan Hyoronsha, 1990 (in Japanese). ユークリッド幾何から現代幾何へ (1990), 日本評論社
  10. Hyperbolic Complex Spaces, Springer, 1998.
  11. Mathematics of Circles, Shokabo, 1999 (in Japanese). 円の数学 (2000), 裳華房
  12. Calculus-One Variable, Shokabo, 2000 (in Japanese). 微分積分読本1変数(2000), 裳華房
  13. Calculus-Several Variables, Shokabo, 2000 (in Japanese). 続微分積分読本-多変数(2000), 裳華房
  14. Understanding Euler and Fermat, Kodansha, 2003 (in Japanese). なっとくするオイラーとフェルマー (なっとくシリーズ) (2003), 講談社。
  15. Complex Geometry, Iwanami, 2005 (in Japanese). 岩波講座 現代数学の基礎〈16〉複素幾何1・複素幾何2 (2005), 岩波書店

A list of published work by Shoshichi Kobayashi

The following link provides a complete compilation of his published work (151 items).
http://bibserver.berkeley.edu/cgi-bin/bibs7?source=http://bibserver.berkeley.edu/DB/UCB_MATH1/Kobayashi__Shoshichi.bib

 


 IV. The Shoshichi Kobayashi Memorial Fund

 

Kindly donate to The Shoshichi Kobayashi Memorial Fund, which has been established to support foreign graduate students in the Mathematics Department at the University of California, Berkeley.

Online Donation

A donation may be made online by charging it to your credit card.

  1. Open the website by clicking http://math.berkeley.edu/about/donate
  2. Click on “Make a gift to Mathematics online” and fill out the required sections on the Give to Cal Online Giving Form.
  3. Under “Gift Instructions and Recognitions,” check the box for memorial gift and insert the name Shoshichi Kobayashi.
  4. Under “Special Instructions,” please note that this gift is intended for The Shoshichi Kobayashi Memorial Fund.

Donation by mailing a check

Please make your check payable to “The Shoshichi Kobayashi Memorial Fund” and send it to

The Shoshichi Kobayashi Memorial Fund
c/o Ms. Nancy Palmer, 979 Evans Hall
University of California,
Berkeley, CA 94720-3840, U.S.A.

Notes:

  1. Matching gift from the Chancellor: Gifts from current UC Berkeley faculty, staff and students, retired UC Berkeley faculty and staff, or surviving spouses of active or retired UC Berkeley faculty and staff will be matched under the Chancellor’s Challenge Program.
  2. Tax deductibility: Donations are fully deductible for U.S. Federal income tax purposes.


 

The Shoshichi Kobayashi Memorial Fund (小林昭七記念基金)

The Shoshichi Kobayashi Memorial Fund has been established to provide financial support to international graduate students in the Department of Mathematics at the University of California. We would be most grateful if you would support this cause with a donation.

 

Online Donation (オンラインによるご寄付)

Donating through the University of California’s online donation system is the fastest and most secure method, and its handling fees should be the lowest of any method. Even those with no experience of online shopping or online donations should find it easy.

  1. Visit http://math.berkeley.edu/about/donate and a page titled “Mathematics + Berkeley” will appear. Click “Make a Gift to Mathematics Online,” found immediately below the heading “Donate.”
  2. The “Give to Cal Online Giving Form” will appear. Fields marked with * are required. Fill in the second half of the Personal information section only if the donation is made jointly with your spouse. The Matching funds field may be left blank (except for those working at large companies in the U.S.). Fill in Cal affiliation only if you are a graduate of the University of California, the parent of a student, or a University of California employee. Under Gift instruction and recognition, check the box to the left of “You may publish my/our name(s) in donor rolls” if you agree to have your name listed in the donor roll, and ignore the “This is an honorific gift” box. Check “This is a memorial gift” below it, and enter Prof. Shoshichi Kobayashi in the box to the right of “Name of person to recognize.” If you would like a paper receipt mailed to you, check “In addition to the online receipt, I would like a paper gift receipt mailed to me.” In the box at the bottom titled “Special instructions or designations for this gift,” enter “This gift is intended for the Shoshichi Kobayashi Memorial Fund.” When you click “Next: review and confirm information,” the items you entered will be displayed; verify them and follow the remaining steps.
  3. You will enter your credit card number and related information in the next step. Do not worry if you lose track of the procedure along the way or your computer stops working: your card will be charged only once all steps have been completed successfully. If you have any questions about using the online donation system, please feel free to contact Hisashi Kobayashi (Shoshichi’s brother, Professor Emeritus at Princeton University) at hisashi@princeton.edu.

 

Donation by Mailing a Check (小切手郵送によるご寄付)

If you have an account at a U.S. bank, please make your check payable to “The Shoshichi Kobayashi Memorial Fund” and mail it to the address below.

The Shoshichi Kobayashi Memorial Fund
c/o Ms. Nancy Palmer, 979 Evans Hall
University of California,
Berkeley, CA 94720-3840, U.S.A.

 

Keynote Speech at the 24th International Teletraffic Congress

September 11th, 2012

Modeling and Analysis Issues

In the Future Internet

Keynote Speech at the 24th International Teletraffic Congress (ITC 24)

September 4th, 2012, Krakow, Poland

The 24th International Teletraffic Congress (ITC 24) was held at Krakow, Poland on September 4-6, 2012, and I gave a keynote speech on the first day of the conference. http://www.itc24.net/keynote-speakers/

Shown below is the text of my speech. Some background information and advanced discussion, which were not presented at the meeting in the interest of time, are shown in italics and a smaller font.

Also given below are the slides used in the speech. Please download a PDF version of the slides here.

Text of the Keynote Speech

Professor Paul Kühn, Thank you for your gracious introduction.

It is a great honor to be invited to the ITC24 as a keynote speaker. I thank Dr. Thomas Bonald and Prof. Michal Pioro, TPC co-chairs, as well as the conference co-chairs, Prof. Andrzej Jajszczyk and Prof. Zdzisław Papir, for providing me with this opportunity.

I would like to cover three main topics in this talk. First, I want to review the current Internet, its pros and cons, with a focus on its end-to-end design practice. I will then outline some key points of the New Generation Network (NwGN for short), which is Japan’s Future Internet project pursued by NICT (National Institute of Information and Communications Technology). Then I would like to present some ideas and suggestions that might be of interest to the ITC community and relevant to future Internet research. I recognize several people in the audience who listened to my keynotes presented at Euroview 2009 and 2012 at the University of Würzburg. Please allow me to repeat some slides that I used at those meetings [1, 2].

Slide 2: Outline of the presentation

Here is the outline of my talk:

  1. The Internet: Its Original Features
  2. End-to-End Design: Its Benefits
  3. Problems with the E2E Design
  4. What is NwGN, and Why
  5. Network virtualization
  6. AKARI Architecture and JGN-X
  7. Modeling and Analysis Issues

I won’t be able to cover technical details in this one hour presentation, so I’ll post a full text as well as the slides in my blog, www.hisashikobayashi.com, where some background information, technical details and references will be included.

 

 

I. The Internet

Slide 3: The Original Features: As most of you are well aware, when the ARPANET, the predecessor of the Internet, got started over 40 years ago, its primary objectives were to allow researchers to share and exchange their programs and data files. Thus, the main applications they had in mind were file transfers and email. Real-time or time-sensitive applications such as VoIP and streaming video were not envisioned. End devices were host machines at fixed locations, so mobile devices such as laptop PCs, smart phones, etc. were not imagined. Only “best effort” services were provided, meaning that no QoS (quality of service) guarantees were offered. Last but not least, an important assumption made then was that there would be no malicious users. In other words, all users were considered trustworthy.

 

The network environment we are in today is completely different from the one envisioned by the original designers of the Internet.

 

Slide 4: E2E Design of the Internet

It is therefore amazing to find that the 40-year-old ARPANET architecture remains an essential part of today’s Internet. Much of its success is attributed to the so-called “end-to-end (E2E) design” practice.

 

This diagram illustrates what a TCP/IP network based on the E2E design looks like. Simply put, this design approach suggests that the network interior should be a dumb network, whose task is just to deliver packets from one end to the other. All intelligence should reside in applications at edge nodes that run on top of the TCP transport layer.

 

Slide 5: E2E Design of the Internet- cont’d

The “End-to-End argument” advanced by Saltzer, Reed and Clark [3], however, contains some flaws, yet it seems to have served as a major design guideline throughout the evolution of the Internet built upon the TCP/IP protocol devised by Cerf and Kahn [4].

 

Their argument [3] goes, in the context of a network design, as follows: “Such communication functions as error control, routing and security should be implemented not within the network, but at the end nodes (hosts), since these functions can be completely specified only at the end nodes that run applications, and any partially implemented functions within the network will be redundant, waste network resources and degrade the system performance in most cases.”

 

Then they add a concessionary note that sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement. A number of papers have been written that criticize, defend, reinterpret or modify the original argument, and this design guideline has encountered many challenges and has been significantly compromised, for better or worse, for various reasons.

 

In my opinion the end-to-end design should merely be one of many design options and is not something that should be labeled as a “principle.” It is unfortunate that some Internet experts hold a dogmatic view of this design option.

 

Slide 6: Main Features of the Internet: So here are three major features of the Internet.

  • The network provides basic packet delivery service (called “datagram service”)
  • Applications are implemented at end hosts.
  • The transparency of the IP led to innovative deployment of the Internet and quick development of new applications

The first two features are not necessarily the strength of this network architecture, but the third feature has been critical to the success of the Internet.

Slide 7: Today’s Internet Landscape: So here is what today’s Internet looks like.

    • Every service is an end-to-end application.
    • New applications can be deployed by anyone, because anyone has easy access to the transparent IP network.

Slide 8: Problems with the E2E Design:

However, the simplicity of the IP network has created the serious performance and security problems of the Internet that confront us today.

TCP performs an E2E ARQ (automatic repeat request) for reliable transport of the packets in a flow. But E2E ARQ makes sense only if the channels are clean and the file size is not too large. Furthermore, E2E ARQ may increase the chance of undetectable or uncorrectable errors.

There are many situations where reliability and delay can be improved by applying localized (or hop-by-hop) ARQ, which can prevent an unnecessary increase in traffic load. Furthermore, localized ARQ does not require the simultaneous availability of the two end points. These benefits of localized error control are even greater in multicasting, since the traffic load near the source is substantially reduced. Note that content distribution networks (CDNs) can be viewed as an asynchronous multicasting scheme that avoids E2E delivery of information.
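A back-of-the-envelope model makes the contrast concrete. Suppose a path consists of h independent hops, each losing a packet with probability p, with instantaneous and error-free feedback (simplifying assumptions, purely for illustration). Under E2E ARQ every retransmission re-crosses all h hops, while hop-by-hop ARQ retransmits only over the hop that failed:

```python
# Illustrative comparison of end-to-end vs. hop-by-hop ARQ: expected
# number of hop-transmissions to deliver one packet across h hops,
# each losing packets independently with probability p.
# (Idealized model: instantaneous, error-free feedback, and a full
# cost of h hop-transmissions charged per E2E attempt.)

def e2e_transmissions(h: int, p: float) -> float:
    """E2E ARQ: an attempt succeeds only if all h hops succeed,
    i.e., with probability (1-p)**h; each attempt costs h hops."""
    return h / (1.0 - p) ** h

def hop_by_hop_transmissions(h: int, p: float) -> float:
    """Hop-by-hop ARQ: each hop is repaired independently,
    costing 1/(1-p) transmissions on average."""
    return h / (1.0 - p)

for h in (2, 5, 10):
    for p in (0.01, 0.10):
        print(f"h={h:2d} p={p:.2f}  E2E={e2e_transmissions(h, p):6.2f}  "
              f"hop-by-hop={hop_by_hop_transmissions(h, p):6.2f}")
```

For h = 10 and p = 0.1, the E2E scheme needs about 29 hop-transmissions on average versus about 11 for the localized scheme, and the gap widens rapidly as h or p grows.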

Such routing protocols as OSPF (Open Shortest Path First) and RIP (Routing Information Protocol), which conform to the E2E design, look only at the IP addresses of packets, and the routing decision cannot reflect the traffic load within the network, because the IP network does not have such information. The absence of flow state information at routers leads to connection-less service, called datagram service.
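For concreteness, a link-state protocol such as OSPF computes routes by running Dijkstra’s algorithm over fixed link costs; nothing in its inputs reflects the traffic actually flowing. A toy sketch over a hypothetical four-router topology:

```python
import heapq

def dijkstra(graph, source):
    """OSPF-style shortest-path computation: the inputs are only the
    topology and static link costs -- no traffic-load information."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical topology: router -> list of (neighbor, link cost)
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

However congested the link B-C becomes, the computed routes do not change unless an administrator changes the static costs.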

The TCP protocol, which runs on the IP network, provides a virtual circuit for a flow. Designing IP routers and routing protocols such as RIP and OSPF with no flow state information was probably a valid decision in the 1970s and 1980s, when the memory required to store state information was expensive and processing such information would have considerably slowed down routing operations.

Various versions of the TCP protocol (TCP Tahoe in 1988, TCP Reno in 1990, TCP Vegas in 1995, FAST TCP in 2002, CUBIC in 2005, etc.) attempt to provide some congestion control and flow rate control [6], but the performance they can achieve is intrinsically limited, because they cannot have the current information of individual flows.
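What most of these variants share is the additive-increase/multiplicative-decrease (AIMD) rule: grow the congestion window by roughly one segment per round-trip time, and halve it when a loss is detected. A deliberate caricature of that dynamic, not the logic of any particular TCP implementation:

```python
def aimd_trace(rtts: int, loss_every: int, cwnd: float = 1.0) -> list:
    """Caricature of Reno-style congestion avoidance: +1 segment per
    RTT, window halved on loss. A deterministic loss on every
    `loss_every`-th RTT stands in for real congestion signals."""
    trace = []
    for t in range(1, rtts + 1):
        if t % loss_every == 0:
            cwnd = max(1.0, cwnd / 2.0)  # multiplicative decrease
        else:
            cwnd += 1.0                  # additive increase
        trace.append(cwnd)
    return trace

print(aimd_trace(rtts=12, loss_every=5))
# [2.0, 3.0, 4.0, 5.0, 2.5, 3.5, 4.5, 5.5, 6.5, 3.25, 4.25, 5.25]
```

The sender never learns the actual state of the bottleneck; it only probes blindly and backs off, which is exactly the limitation noted above.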

Slide 9: Problems with the E2E Design-cont’d

A related major problem with the TCP/IP protocol is that it cannot provide call admission control (CAC). It permits any user attached to the Internet to initiate flows, and allows all flows to share the network resources. In other words, TCP/IP attempts to mimic processor sharing (PS), which we address later in a broader context. It has been empirically shown [5], however, that the performance of TCP/IP is much inferior to PS scheduling, primarily because TCP/IP cannot have current information concerning the individual flows’ states.
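The PS benchmark itself is simple to state: in an M/G/1 processor-sharing queue at utilization ρ on a link of capacity C, a flow of size x has conditional mean response time x / (C(1 − ρ)), so every flow is slowed down by the same factor 1/(1 − ρ). A small numeric illustration:

```python
def ps_mean_response(x_bits: float, capacity_bps: float, rho: float) -> float:
    """Conditional mean response time E[T | x] = x / (C * (1 - rho))
    of a flow of size x in an M/G/1 processor-sharing queue --
    the idealized benchmark that TCP bandwidth sharing only approximates."""
    assert 0.0 <= rho < 1.0, "utilization must be below 1"
    return x_bits / (capacity_bps * (1.0 - rho))

# A 10 Mbit transfer over a 100 Mbit/s link (illustrative numbers):
for rho in (0.5, 0.9):
    print(f"rho={rho}: {ps_mean_response(10e6, 100e6, rho):.2f} s")
# rho=0.5: 0.20 s   rho=0.9: 1.00 s
```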

Slide 10: Departure from the E2E Design

One consequence of the simplicity of the E2E design approach is that it lacks the mechanisms required to control the network.

A network architecture in general can be decomposed into three planes: data plane, control plane and administrative plane. The control plane is a mechanism that connection management devices use to control and access network components and services. In routing, the control plane is that portion of the routing protocol which is concerned with finding the network topology and updating the routing table. It allows the router to select the outgoing interface that is most appropriate for forwarding a packet to its destination.

The data plane (also known as the forwarding plane) is responsible for the actual process of sending a packet received on a logical interface to an outbound logical interface.

While the data plane remained simple in the Internet, the control plane has become extremely complex over the years, because a number of control mechanisms have been appended to the IP layer as new requirements such as mobility (Mobile IP), security (IPsec) and middle-box (e.g., firewalls and network address translator or NAT) control have arisen. Other functions such as IntServ (Integrated Services), Multicast IP, ICMP (Internet Control Message Protocol), ARP (Address Resolution Protocol), and AAA (Authentication, Authorization and Accounting) protocols also belong to the control plane at the IP layer.

The flow routing architecture by Roberts [7], a part of DARPA’s Control Plane project, attempts to guarantee the QoS of an IP network by letting routers store state information on individual flows (such as (i) whether a given flow is active or not, (ii) the bandwidth allocated to the flow, (iii) the priority of the flow, (iv) the type of service requested by the flow and (v) the path assigned to the flow). In flow routing, the signaling is “in band,” i.e., carried as part of the data stream. A flow router processes the signaling information in hardware, and hence it can handle flow establishment at line speed.

Slide 11: Departure from the E2E Design-cont’d

CHART (Control for High-Throughput Adaptive Resilient Transport) [8, 9], also part of the DARPA Control Plane project, addressed both IP layer control and transport layer control. This control plane also allows routers to monitor and collect a richer set of network state information to control resource usage better than the simple-minded E2E design approach can possibly achieve. The CHART project has developed an explicit rate signaling protocol, which is used by its transport layer to determine the window sizes. This significantly improves the performance of a network with a large bandwidth-delay product and/or a high packet-loss rate.
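The window arithmetic behind explicit rate signaling is elementary: to sustain a signaled rate R over a path with round-trip time T, a sender must keep roughly R·T bits in flight (the bandwidth-delay product). A sketch of that calculation only; it is not the actual CHART protocol logic:

```python
def window_from_rate(rate_bps: float, rtt_s: float, mss_bytes: int = 1460) -> int:
    """Window size (in segments) needed to sustain an explicitly
    signaled rate over a path with the given RTT: the
    bandwidth-delay product divided by the segment size."""
    bdp_bytes = rate_bps * rtt_s / 8.0
    return max(1, round(bdp_bytes / mss_bytes))

# 1 Gbit/s over a 100 ms path needs roughly 8,562 segments in flight,
# far beyond what loss-driven window growth reaches quickly:
print(window_from_rate(rate_bps=1e9, rtt_s=0.100))
```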

Slide 12: OpenFlow Switch and Virtual Node

OpenFlow [10, 11] is a recent development that allows networking researchers to experiment with new networking protocols, both E2E designs and non-E2E designs.

Specifically, an OpenFlow switch maintains a Flow Table, which is managed by a Controller. The Controller creates new flow table entries, which are then stored in the Flow Table. A flow table entry specifies how an incoming packet is identified as belonging to a flow and how the packet should be processed. For example, a researcher could run a new routing protocol without disrupting normal production traffic by specifying a flow entry in each switch, which would then identify which packets are to be routed by the routing algorithm under study. Many commercial switches and routers can be converted into OpenFlow switches, because most of them already have Flow Tables used for implementing a firewall.
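The lookup at the heart of such a switch can be sketched in a few lines: each Flow Table entry pairs a set of header-field matches with an action, and a packet matching no entry is handed to the Controller, which may then install a new entry. This is a simplified sketch only; real OpenFlow adds priorities, wildcards, counters and timeouts:

```python
# Simplified match-action flow table in the spirit of OpenFlow.
flow_table = [
    # (header fields the packet must match, action)
    ({"ip_dst": "10.0.0.5", "tcp_dport": 80}, "forward:port2"),  # experimental route
    ({"ip_dst": "10.0.0.5"},                  "forward:port1"),  # production route
]

def lookup(packet: dict) -> str:
    """Return the action of the first matching entry; unmatched
    packets go to the Controller for a decision."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send-to-controller"

print(lookup({"ip_dst": "10.0.0.5", "tcp_dport": 80}))  # forward:port2
print(lookup({"ip_dst": "10.0.0.5", "tcp_dport": 22}))  # forward:port1
print(lookup({"ip_dst": "192.0.2.7"}))                  # send-to-controller
```

Only the first entry diverts the experimental traffic; everything else continues to follow the production path, which is how a new protocol can be trialed without disruption.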

The Virtual Node (VNode) project [12, 13], pursued by Prof. Akihiro Nakao at the University of Tokyo and NICT, Japan, also provides a platform, similar to OpenFlow, that will allow researchers to experiment with new network protocols, including non-E2E designs. VNode’s model of programmability is much more generic than OpenFlow’s limited capability, and supports both control plane and data plane programmability. OpenFlow allows control plane programmability, but not data plane programmability.

I will describe the virtual node further in the next part of my talk.

The control scheme in the conventional Internet is primarily based on routing using IP addresses, whereas OpenFlow intends to improve the quality of service and increase the efficiency of the network by doing routing control at the flow level, where a “flow” is defined as a communication determined by the combination of the MAC addresses, IP addresses and port numbers involved. NEC, which is a founding member of the OpenFlow Consortium, is developing a “programmable flow switch.”

 

 

II. New Generation Network (NwGN)

 

Slide 13: New Generation Network

The NwGN project is a flagship project, so to speak, of networking research in Japan. Its purpose is to design a new architecture and protocols, and to implement and verify them on a testbed.

The NwGN project aims at a revolutionary change so as to meet societal needs of the future [14-16]. AKARI is the architecture of such a network and JGN-X is the testbed.

Slides 14 & 15: Requirements of NwGN

There are numerous requirements that we need to take into account concerning network services of the future. Here is a list of what I consider to be the requirements for the NwGN:

  1. Scalability (users, things, “big data”)
  2. Heterogeneity and diversity (in “clouds”)
  3. Reliability and resilience (against natural disasters)
  4. Security (against cyber attacks)
  5. Mobility management
  6. Performance
  7. Energy and Environment
  8. Societal needs
  9. Compatibility (with today’s Internet)
  10. Extensibility (for the unforeseen and unexpected)

 

Slide 16: AKARI Network Architecture. Here are four major features of the AKARI architecture. It takes a layered structure like all network architectures we know of, but instead of adhering to static and strict boundaries between the layers, it takes an adaptive approach, adjusting layer boundaries depending on the load placed on the network and on resource usage. Such a design philosophy is referred to as “cross-layer optimization.” Such adaptive quality of service (QoS) management is actively pursued in the networking community at large. I will discuss the three other features of the AKARI architecture in the next several slides.

Slide 17: ID and Locator in the Internet

In the current Internet, devices on the network are identified in terms of their “IP addresses,” which are their identification numbers at the network layer. In the original Internet, all end devices were host machines whose addresses were fixed. Thus, there was no problem in interpreting IP addresses as “locators,” namely, the devices’ location information. In designing a future Internet, however, we must take into account that a majority of end devices will be mobile, with fixed-location devices being the exception.

Slide 18: ID/Locator Split Architecture

An end device or an enterprise network may be connected to the Internet via multiple links, a technique referred to as “multihoming.” Its primary purposes are to increase reliability and resilience and to mitigate a possible overload on any one link or circuit.

In order to deal efficiently with mobile devices and/or multihoming requirements, we should distinguish IDs from locators, and assign two different sets of numbers to them. Then, even if a mobile or multihomed device’s locator changes at the network layer, the ID associated with its communications in the upper layers will remain unchanged. The split architecture is also useful for addressing security issues.
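In effect the split introduces one level of indirection: upper layers hold a session to a stable ID, and the network layer resolves that ID to whatever locator is current. A minimal sketch of such a mapping (all names are hypothetical):

```python
class IdLocatorMap:
    """Toy ID/locator split: sessions bind to stable IDs, and only
    the ID -> locator binding changes on mobility or multihoming."""

    def __init__(self):
        self._locators = {}  # device ID -> list of current locators

    def register(self, device_id, locators):
        self._locators[device_id] = list(locators)

    def update(self, device_id, locators):
        # Mobility/multihoming event: rebind locators, ID untouched.
        self._locators[device_id] = list(locators)

    def resolve(self, device_id):
        # First reachable locator; multihoming fail-over would go here.
        return self._locators[device_id][0]

m = IdLocatorMap()
m.register("device-42", ["192.0.2.10"])      # attached via an IPv4 network
peer = "device-42"                           # upper layers see only the ID
m.update("device-42", ["2001:db8::10"])      # device moves to an IPv6 network
print(m.resolve(peer))                       # locator changed; the ID did not
```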

In the split architecture, not only locators, but also IDs are present in packet headers. So using IDs to enforce security or packet filtering is possible, and remains applicable even when the locators are changed due to mobility/multihoming. In the current Internet, the IP address in each packet is used as a key to enforce security or packet filtering. IPsec is an example of this location-based security. See RFC 2401: http://www.ietf.org/rfc/rfc2401.txt .

The split architecture is also effective against denial-of-service (DoS) attacks and man-in-the-middle (MitM) attacks, by relating IDs to security credentials such as public keys and certificates. When an unknown device wants to communicate with a server, the server may ask the device to prove that its ID is associated with a public key and that the association has been certified by a reliable third party, before the server sets aside any resources (e.g., memory) for the session. The server may also ask the device to solve a puzzle of moderate complexity before setting up the session.
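One concrete way to bind an ID to a credential, in the spirit of HIP’s host identity tags, is to derive the ID from a cryptographic hash of the device’s public key, so the binding can be checked before any session state is reserved. A sketch under that assumption; the real HIP construction differs in its details:

```python
import hashlib

def id_from_pubkey(pubkey: bytes) -> str:
    """Derive a host ID as a truncated hash of the public key
    (HIP-flavored illustration, not the actual HIT construction)."""
    return hashlib.sha256(pubkey).hexdigest()[:32]

def server_accepts(claimed_id: str, presented_pubkey: bytes) -> bool:
    """Check, before reserving any resources, that the claimed ID
    really is bound to the key the device presents."""
    return claimed_id == id_from_pubkey(presented_pubkey)

key = b"-----hypothetical public key bytes-----"
host_id = id_from_pubkey(key)
print(server_accepts(host_id, key))                        # True
print(server_accepts(host_id, b"some other party's key"))  # False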

There are two approaches to the ID/Locator Split Architecture. One is a host-based approach, in which the ID/Locator split protocols are implemented in the end hosts only. Its objectives are to achieve secure communications over the unsecured Internet and also to support mobility. As an example, consider the Host Identity Protocol (HIP) described in RFC 5201 http://www.ietf.org/rfc/rfc5201.txt and P. Nikander, A. Gurtov and T. R. Henderson, “Host Identity Protocol (HIP): Connectivity, Mobility, Multihoming, Security, and Privacy over IPv4 and IPv6 Networks,” IEEE Communications Survey & Tutorials, Vol. 12, no. 2, pp. 186-204, Second Quarter, 2010.

 

The other approach is a router-based approach in which the ID/Locator split protocols are implemented in routers, not in end hosts. Its primary objective is to make the BGP (Border Gateway Protocol) routing table size smaller by using two different addressing spaces in edge and core networks. It is known as LISP (Locator/ID Separation Protocol). LISP is about to become an RFC. See

http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=05451761 .

We can get information about its implementation/standardization status as well as tutorial documents from this site: http://www.lisp4.net/ . Both the HIP and LISP ideas were generated by the IETF (Internet Engineering Task Force).

 

In the NwGN project we are implementing the ID/locator split in both hosts and edge routers so that we can get the benefits of both the Host Identity Protocol or HIP (for security and mobility) and the Locator/ID Separation Protocol or LISP (for core routing scalability) [5, 6].

 

As is schematically shown in this slide, we insert an Identity Layer between the Transport Layer and the Network Layer. We are making the application and transport layer protocols independent of the network layer protocols so that the same application can be transported over various network protocols. Our approach supports heterogeneous protocols in the edge networks (e.g., a host in an IPv4 network can communicate with another host located in an IPv6 network, and a host can move across heterogeneous networks).

 

The way in which the Internet is used is shifting from “communications from a device to another device” to “communications from data to humans.” When we wish to retrieve data or information, using a web browser and a web server, the data or information itself is an object of our interest, and it is immaterial from which device the data or information is fetched. A network architecture based on such a philosophy is called a “data centric” architecture.

In the ID/Locator Split Architecture, data and information can be treated as “things,” and we can assign IDs to them. Thus, the split architecture has an advantage of being applicable to a data-centric architecture as well.

Slide 19: Network Virtualization. I suppose that a majority of the audience is familiar with the notion of network virtualization, so I will skip a detailed definition of this term.

The notion of “virtualization” in computer technologies goes back to circa 1960, when virtual memory was introduced in the Atlas machine of the University of Manchester, UK. In 1972, IBM introduced VM/370, a virtual machine (VM) operating system that ran on System/370.

In the last decade, IT (information technology) departments of enterprises have begun to adopt a variety of virtualization technologies available as commercial products, ranging from server virtualization, storage virtualization, client (or desktop) virtualization to software virtualization, e.g., allowing Linux to run as a guest on top of a PC that is natively running a Microsoft Windows operating system. Such virtualization techniques allow multiple users and applications to dynamically share physical resources. Thus, they increase resource utilization and/or reduce electric energy consumption, as well as simplify complex administrative operations of IT.

Simply put, network virtualization chooses a subset of a collection of real (or physical) resources (routers, links, etc.) and functionalities (routing, switching, transport) of a real network (or multiple real networks) and combines them to form a logical network called a virtual network.

Slide 20: Virtual Networks and Overlaid Networks:

Virtual networks take different forms, depending on specific layers to which virtualization is applied. Here we illustrate what is termed “overlaid networks” (also known as “overlay networks”). Nodes in an overlaid network are connected by virtual links (or logical links) which are comprised of paths that are formed by combining multiple links in the network underneath. Distributed systems such as cloud computing, peer-to-peer (P2P) networks, and client-server applications (e.g., web browser and web server), can be viewed as overlaid networks running on the Internet. And the Internet itself is an overlaid network built on top of the telephone network.
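In code terms, an overlaid network is simply a graph whose virtual links are realized by paths through the network underneath. A toy illustration with a hypothetical topology:

```python
# Each virtual link of the overlay maps to a path of physical links.
physical_links = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")}

overlay = {
    # virtual link -> physical path realizing it
    ("A", "C"): ["A", "B", "C"],
    ("A", "D"): ["A", "B", "D"],
}

def realizable(vlink) -> bool:
    """A virtual link is realizable if every consecutive pair on its
    path is an actual physical link (in either direction)."""
    path = overlay[vlink]
    return all((u, v) in physical_links or (v, u) in physical_links
               for u, v in zip(path, path[1:]))

print(all(realizable(v) for v in overlay))  # True: the overlay is consistent
```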

Slide 21: Configuration of a Virtual Node (or VNode)

This slide shows the configuration of the aforementioned “virtual node” (or VNode) designed by Prof. Akihiro Nakao’s group (The University of Tokyo and NICT), and implemented on JGN-X. The virtual node consists of two parts: one is called the “Redirector,” which handles the conventional routing function, and the other is the “Programmer,” which runs a program that implements the virtual node functions. Here, each “slice” corresponds to a “virtual network.” Thus, by replacing conventional routers/switches with Virtual Nodes we will have a platform that allows experimental work on network architectures and protocols, whether E2E-based designs or non-E2E designs.

Slide 22: VNode project and participating companies

The VNode project has industrial partners, who are contributing greatly to turning the theory into practice. NTT is working on the domain controller, and Fujitsu on an access gateway which controls access to other networks (e.g., a cloud). Hitachi is responsible for a router with a custom hardware board for constructing virtual links, and NEC is developing a programmable environment at a node for flexible creation of a network service. For details, see [12, 13].

 

Slide 23: Optical Packet and Optical Path

In the future network environment, a majority of end devices will be mobile devices and sensors, which are connected by wireless access networks. But for a core network that requires broad bandwidth, an optical network will be very important.

When we talk about a network architecture, we often say that the architecture should be independent of technologies, while its implementation may depend on available technologies. But this simplistic argument does not hold for an optical network architecture, since it is quite different from that of wired or wireless networks. The main reason is that, unlike the electric signals that wired and wireless networks deal with, optical signals do not yet have inexpensive random-access memory or the logic circuits needed to build an arithmetic logic unit (ALU).

Packet switching is based on asynchronous time division multiplexing (ATDM, or statistical time division multiplexing), and with today’s optical technology it is not possible to switch or route multiplexed optical signals as they are. While the “payload” portion of the signal may remain in the optical domain, the packet header must be translated into an electric signal. We often use optical delay circuits or lines as buffers to try to maintain the high speed of optical signals. In order to make the best use of the speed of optical signals, wavelength division multiplexing (WDM) must be adopted. But WDM provides circuit switching, like frequency division multiplexing (FDM) and synchronous time division multiplexing. An end-to-end circuit that involves wavelength routers at intermediate nodes is referred to as an optical path.

In the NwGN architecture, we take advantage of our strength in optical technology and propose an architecture that integrates an optical packet switching system and an optical path circuit switching system.

Slide 24: Integrated optical packet and optical path system

As shown in this slide, telemedicine, which requires real-time transmission of high-definition video, is an ideal application example of an optical path system. DCN (Dynamic Circuit Network), which is also supported by the JGN-X testbed, is another network that integrates the Internet with packet switching and optical circuit switching.

Slide 25: JGN-X Network Overview

NICT’s testbed effort for NwGN is called JGN-X, which is an evolutionary outgrowth of JGN (Japan Gigabit Network), started in the year 2000 as a testbed for large-capacity networking. As its speed and capacity increased, the name changed to JGN2 (which supported a multicast environment and IPv6), then JGN2plus, and finally the JGN-X project started in fiscal year 2011, where X stands for “eXtreme.”

The JGN-X testbed of NICT implements network control by OpenFlow and DCN (dynamic circuit network), as well as the network controlled by the virtual nodes (also called the “VNode plane”). Here the term “plane” is used as an abbreviation of a “control plane architecture.”

In other words, the JGN-X allows us to pursue an architectural study of the above three types of virtual networks.

DCN integrates the packet-switching-based Internet and an all-optical network that performs on-demand circuit switching using the aforementioned wavelength division multiplexing (WDM). It is used in such applications as remote medical systems (i.e., telemedicine), the Large Hadron Collider (LHC) project at CERN in Switzerland, and other advanced science fields.

Slide 26: JGN-X International Circuits

As this slide indicates, JGN-X is connected not only with various groups within Japan but also with the networking communities of the world.

Slide 27: Research around JGN-X

The JGN-X group also collaborates with the communities of advanced networking and cloud computing. It also provides an emulation environment for HPC (high performance computing). The objective of JGN-X is to provide an environment not only for research and development of the NwGN technologies, but also for development of network applications for the future.

 

III. Modelling and Analysis Issues in the Future Internet Research

Slide 28: Now I change gears and present my personal observations and suggestions to this audience, concerning the opportunities and challenges of future Internet research.

Although I talked exclusively about the NwGN project of NICT, there are a number of significant, perhaps more significant, research efforts taking place in the U.S., Europe and elsewhere; in the interest of time, I have to skip them in my presentation, but I provide a brief summary and a reference in the text I will post on my blog.

The NSF’s FIA (Future Internet Architecture) program supports MobilityFirst (Rutgers and 7 other universities), Named Data Networking (NDN; UCLA and 10 other universities), eXpressive Internet Architecture (XIA; CMU and 2 other universities), and NEBULA (U. of Penn and 11 universities). Each FIA project has its own comprehensive website where you can find more information than you could possibly digest. A recent survey paper in the July 2011 issue of IEEE Communications Magazine provides a good introduction to the FIA, GENI and EU programs. The article also allocates about half a page to AKARI and JGN-X. See J. Pan, S. Paul and R. Jain, “A Survey of the Research on Future Internet Architectures,” IEEE Communications Magazine, July 2011, pp. 26-35.

NSF also funds a testbed program called GENI (Global Environment for Network Innovations) (2005-present), which is managed by Mr. Chip Elliott of BBN Technologies, who holds quarterly meetings/workshops called GECs (GENI Engineering Conferences). I have attended several GEC meetings in the past three years, and I have been impressed by how fast each of the four testbed groups (called “GENI Control Frameworks” or simply “clusters”) has been making progress. The following four clusters (lead institutions) are currently supported: PlanetLab (Princeton University), ProtoGENI (Univ. of Utah), ORCA (Duke University and RENCI, the Renaissance Computing Institute) and ORBIT (Rutgers University).

In Europe, a collaboration under FP7 (the Seventh Framework Programme) on Future Internet research is referred to, somewhat confusingly, as the Future Internet Assembly (FIA). The EIFFEL (European Internet Future for European Leadership) program and the Future Internet Public-Private Partnership (FI-PPP) were launched in 2006 and 2011, respectively. Germany has also been sponsoring G-Lab (German Laboratory) through the BMBF (Bundesministerium für Bildung und Forschung; Federal Ministry of Education and Research), in addition to its participation in the aforementioned EU efforts.

Coming up with a quantitative comparison of one network architecture against another is a rather difficult proposition. Will the complexity of any of the candidate future networks be too great for us to comprehend? Our inability to quantitatively characterize the present Internet seems to come not only from the limited state of affairs in mathematical modeling techniques, but also from the character, culture and history of the Internet community, where many researchers do not seem interested in the modeling and analysis aspect.

Slide 29: Modeling and Analysis Issues, cont’d

The original TCP/IP network provides merely “best effort” service, and its performance guarantee was not an issue of much concern; this historical aspect seems to dictate the culture and mentality of the Internet community even today. There have been very few textbooks and papers that present modeling and analysis of the Internet. Most books and papers are primarily concerned with describing what the network or its subsystem does, but not so much with discussing how well or poorly the network performs compared with analytical results or “theoretical” bounds. Quantitative results are usually limited to simple plots of measurement data or simulation results. There are, of course, a few exceptions. The book by Profs. Kumar, Manjunath and Kuri [21] provides a fair amount of mathematical models of the Internet and its protocols, and a forthcoming book by Prof. Mung Chiang of Princeton [6] will be an excellent textbook, relating quantitative techniques to practical issues. Its selected annotated bibliography will also be useful.

Slide 30: Testbeds and Over-dimensioning

Although research on the future Internet still seems dominated by this traditional Internet culture, my own conviction is that prototyping and testbeds alone will never lead us to a satisfactory understanding of system performance, reliability and security. Up to now, our limited capability to analyze and improve network performance has been compensated for by over-dimensioning, which has been possible because the technological improvements and cost reductions in network components such as processors, memory and communication bandwidth have been able to match the phenomenal growth in Internet users and the insatiable appetite for resources of new applications. But there is no guarantee that the cost/performance of network components will continue to improve geometrically as it has in the past. We should also note that the energy consumption of IT systems is now a serious concern, as listed in an earlier slide.

Slide 31: Virtual Network as a network of processor sharing servers

Network virtualization is certainly a very powerful tool that allows us to test multiple candidates for new network architectures and protocols in parallel. This technology should ultimately help us migrate from the existing Internet to new one(s). But as it stands now, very little attention and effort seem to be paid to the performance of each “slice” network, or to the performance limits and constraints of virtual networks. After all, network virtualization is nothing more than a form of (statistical) sharing of physical resources. A virtual network can be viewed as a network of processor sharing (PS) servers.

Slide 32: Processor sharing (PS), a mathematical concept introduced by Prof. Len Kleinrock more than 40 years ago [22] (see also [23, 24]), has proven to be a very powerful mathematical abstraction of “virtual processors” in a time-shared system. Similarly, it can represent “virtual links or circuits,” i.e., multiplexed streams of packets or data over a communication channel. A link congested by TCP flows can also be modeled as a PS server.
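To make the PS abstraction tangible, here is a minimal simulation sketch of an M/G/1 processor-sharing queue; it is my own illustration, not taken from the slides, and the parameter values are arbitrary. The mean response time of M/G/1-PS is E[X]/(1-ρ) whatever the job-size distribution, and the script checks this insensitivity by comparing exponential and deterministic job sizes:

```python
import random

def simulate_ps(lam, draw_job, n_jobs=100_000, seed=1):
    """Simulate an M/G/1 processor-sharing queue; return the mean response time.

    lam      : Poisson arrival rate
    draw_job : function rng -> job size (service requirement)
    """
    rng = random.Random(seed)
    t, jid, completed, total_resp = 0.0, 0, 0, 0.0
    next_arrival = rng.expovariate(lam)
    remaining, arrived = {}, {}   # job id -> remaining work / arrival time
    while completed < n_jobs:
        # With n jobs in the system each is served at rate 1/n, so the next
        # departure (absent an arrival) occurs after min remaining work * n.
        t_dep = t + min(remaining.values()) * len(remaining) if remaining else float("inf")
        if next_arrival <= t_dep:                    # next event: an arrival
            dt = next_arrival - t
            if remaining:
                share = dt / len(remaining)
                for j in remaining:
                    remaining[j] -= share
            t = next_arrival
            remaining[jid], arrived[jid] = draw_job(rng), t
            jid += 1
            next_arrival = t + rng.expovariate(lam)
        else:                                        # next event: a departure
            share = (t_dep - t) / len(remaining)
            t = t_dep
            for j in list(remaining):
                remaining[j] -= share
                if remaining[j] <= 1e-12:            # this job is done
                    total_resp += t - arrived.pop(j)
                    del remaining[j]
                    completed += 1
    return total_resp / completed

lam, mean_x = 0.8, 1.0                       # offered load rho = 0.8
theory = mean_x / (1 - lam * mean_x)         # E[T] = E[X]/(1 - rho), any distribution
print("theory        :", theory)
print("exponential   :", simulate_ps(lam, lambda r: r.expovariate(1 / mean_x)))
print("deterministic :", simulate_ps(lam, lambda r: mean_x))
```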

Slide 33: Processor Sharing (PS), cont’d

The so-called “fair scheduling” can be viewed as a discipline that emulates processor sharing. A processor sharing model often leads to a very simple performance analysis, because of its insensitivity to the statistical properties of the traffic load. A network of processor-sharing nodes lends itself to a closed-form expression for the steady-state distribution.
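As a concrete instance of such a closed-form expression, take the simplest case: an open network of K processor-sharing nodes fed by Poisson traffic, with ρ_i denoting the offered load at node i. The steady-state distribution then factors node by node (a standard BCMP-type product form, stated here for illustration rather than quoted from the slides):

```latex
\pi(n_1,\dots,n_K) \;=\; \prod_{i=1}^{K} (1-\rho_i)\,\rho_i^{\,n_i},
\qquad \rho_i < 1 \ \text{for all } i .
```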

The paper by N. Dukkipati et al. [5], mentioned earlier, compares the performance of TCP/IP algorithms against the theoretical limit implied by a processor-sharing model. More of this kind of analysis should be practiced by networking researchers, and I believe this group is the right audience to encourage to work on it.

Slide 34: Loss Network Model

The loss network theory pioneered by Prof. Frank Kelly [25] (see also [23, 24]) is a rather recent development, and it is a very general tool that can characterize a network with resource constraints that supports multiple end-to-end circuits with different resource requirements. It can be interpreted as a generalization of the classical Erlang and Engset loss models, and its insensitivity to network traffic or load characteristics, similar to the corresponding property of processor sharing, makes this characterization very powerful.
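The classical single-link special case is easy to compute. Here is a small sketch (the parameter values are mine) using the standard numerically stable recursion for the Erlang B blocking probability:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Erlang B blocking probability via the stable recursion
    B(0) = 1,  B(m) = a*B(m-1) / (m + a*B(m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# Example: a link with 10 circuits offered 7 erlangs of traffic.
print(f"blocking = {erlang_b(10, 7.0):.4f}")   # about 0.0787
```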

Slide 35: Performance Analysis

Performance measures such as the “blocking probability” and the “call loss rate” can be represented in terms of the normalization constant of a loss network model, just as performance measures such as server utilization, throughput and average queueing delay in a queueing network model are represented in terms of its normalization constant.
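In symbols: writing G(m) for the normalization constant of a loss network with capacity vector m, link-usage matrix A = (A_{l,r}) and offered loads ρ_r, the blocking probability of a class-r call takes the form below. This is the standard loss-network identity, written in my own notation rather than that of the slides:

```latex
B_r \;=\; 1 - \frac{G(\mathbf{m} - A\,\mathbf{e}_r)}{G(\mathbf{m})},
\qquad
G(\mathbf{m}) \;=\; \sum_{\mathbf{n}\,:\;A\mathbf{n}\,\le\,\mathbf{m}}\;
\prod_{r}\frac{\rho_r^{\,n_r}}{n_r!} .
```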

The computational complexity of an exact evaluation of the normalization constant may grow exponentially as the network size (in terms of the number of nodes and links, the bandwidths, buffer sizes, and router/switch speeds) and/or the number of users (i.e., end-to-end connections to be supported by the network) increases. Fortunately, however, in exactly the regime expected for a future Internet of large size with a large number of users, an asymptotic analysis (see e.g. [26, 23]) becomes more accurate and often lends itself to a closed-form expression.

Slide 36: Open Loss Network (OLN)

Here we show what we call an open loss network (OLN), where the path (or routing chain) of a call is open.

The number of links L in this example network is five, i.e., L = 5.

We define a call class as r=(c, τ), where c is a routing chain or path, and τ is a call type.

Slide 37: An OLN is equivalent to a Generalized Erlang Loss Station!

I will show you a very important observation. This observation is rather simple, but is probably new to you, unless you have read my papers or books with Prof. Brian Mark (see [23, 24] and references therein). Any given OLN can be represented by a single loss station as shown in this slide, where L, which was the number of links in the OLN, is now the number of server types.

A call of class r holds $A_{l,r}$ lines simultaneously at link l, i.e., $A_{l,r}$ servers of type l.

$m_l$ = number of lines available at link l, i.e., the number of servers of type l.

The capacity constraint at each link l is $\sum_{r} A_{l,r}\, n_r \le m_l$, where $n_r$ denotes the number of class-r calls in progress.

We have a simple product-form expression for the joint distribution of the numbers of calls of different classes in progress in the network, in terms of the normalization constants G. The blocking and call loss probabilities can also be expressed in terms of the G’s.
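To make the product form and the role of the G’s concrete, here is a brute-force sketch for a toy OLN. The topology, usage matrix and offered loads are invented for illustration; exhaustive enumeration is practical only for small networks, which is precisely why the asymptotic methods mentioned earlier matter:

```python
from itertools import product
from math import factorial

# A toy open loss network: L = 2 links, R = 2 call classes (my own example).
A   = [[1, 1],        # A[l][r]: lines a class-r call holds on link l
       [0, 2]]        # link 1 carries only class-1 calls, 2 lines each
m   = [4, 6]          # m[l]: lines available on link l
rho = [1.5, 0.8]      # offered load per class

L, R = len(m), len(rho)
bound = [min(m[l] // A[l][r] for l in range(L) if A[l][r] > 0) for r in range(R)]

def feasible(n):
    """Link constraints: sum_r A[l][r]*n[r] <= m[l] for every link l."""
    return all(sum(A[l][r] * n[r] for r in range(R)) <= m[l] for l in range(L))

def weight(n):
    """Unnormalized product-form weight: prod_r rho_r^{n_r} / n_r!"""
    w = 1.0
    for r in range(R):
        w *= rho[r] ** n[r] / factorial(n[r])
    return w

states = [n for n in product(*(range(b + 1) for b in bound)) if feasible(n)]
G = sum(weight(n) for n in states)            # the normalization constant

# By PASTA, class-r blocking = probability that one more class-r call
# would violate a link constraint in the current state.
for r in range(R):
    bump = lambda n: tuple(n[i] + (i == r) for i in range(R))
    G_r = sum(weight(n) for n in states if feasible(bump(n)))
    print(f"class {r}: blocking probability = {1 - G_r / G:.4f}")
```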

Slide 38: Mixed Loss Network (MLN)

Similarly, a mixed loss network (MLN), which contains both open and closed routing chains as depicted here, can also be mathematically represented by a single loss station that generalizes the Erlang and Engset loss models. The network state distribution and the performance measures are again representable in terms of the normalization constants, and computational algorithms have been developed.

For large network parameters, recursive computation of the normalization constants may be prohibitive. However, the generating function of the normalization constant sequence can be obtained in a closed form and its inversion integral can be numerically evaluated. For very large network parameters, which will surely be the case for the future network, asymptotic approximation of the inversion integral will be applicable with high accuracy [26, 23].

Slide 39: Queueing and Loss Network

Finally, the diagram shown here illustrates the concept of a queueing-loss network (QLN), which contains both queueing subnetwork(s) and loss subnetwork(s). The network state distribution and the network performance measures can again be expressed in terms of the normalization constants.

The integrated optical-packet switching and optical paths system we discussed in Slide 24 can be formulated as a QLN. Please refer to [23] for detailed discussion.

Slide 40: Acknowledgments

I thank Prof. Brian L. Mark (George Mason University), Drs. Hiroaki Harai, Ved Kafle and Eiji Kawai (all at NICT, Japan) and Prof. Akihiro Nakao (University of Tokyo and NICT) for their help in preparing this speech and these slides. I also thank Prof. Mung Chiang (Princeton University) for sharing the manuscript of his forthcoming textbook [6].

 

References

[1] H. Kobayashi, “An End to the End-to-End Arguments,” Euroview 2009, Würzburg, Germany, July 28, 2009. http://hp.hisashikobayashi.com/?p=122

[2] H. Kobayashi, “The New Generation Network (NwGN) Project: Its Promises and Challenges,” Euroview 2012, Würzburg, Germany, July 23, 2012. http://hp.hisashikobayashi.com/?p=228

[3] J. H. Saltzer, D. P. Reed and D. D. Clark, “End-to-End Arguments in System Design,” ACM Trans. Comp. Sys., 2 (4), pp. 277-288, Nov. 1984.

[4] V. G. Cerf and R. E. Kahn, “A Protocol for Packet Network Intercommunications,” IEEE Trans. on Comms. 22(5), pp. 637-648, May 1974.

[5] N. Dukkipati, M. Kobayashi, R. Zhang-Shen and N. McKeown, “Processor Sharing Flows in the Internet,” in H. de Meer and N. Bhatti (Eds.) IWQoS 2005, pp. 267-281, 2005. http://pdf.aminer.org/000/465/981/processor_sharing_flows_in_the_internet.pdf

[6] M. Chiang, Networked Life: 20 Questions and Answers, Cambridge University Press, 2012 (to appear). ISBN 978-1-107-02494-6. http://www.cambridge.org/aus/catalogue/catalogue.asp?isbn=9781107024946

[7] L. G. Roberts, “The Next Generation of IP-Flow Routing,” SSGRR 2003 International Conference, L’Aquila, Italy, July 29, 2003, http://www.packet.cc/files/FlowPaper/NextGenerationofIP-FlowRouting.htm

[8] A. Bavier et al., “Increasing TCP Throughput with an Enhanced Internet Control Plane,” Proceedings of MILCOM, October 2006.

[9] J. Brassil et al., “The Chart System: A High-Performance, Fair Transport Architecture Based on Explicit Rate Signaling,” Operating Systems Review, Vol. 43, No.1, pp. 26-35, January 2009. http://napl.gmu.edu/pubs/JPapers/Brassil-SIGOPS09.pdf

[10] OpenFlow website; http://www.openflow.org/wp/learnmore/

[11] OpenFlow White Paper: N. McKeown et al., “OpenFlow: Enabling Innovation in Campus Networks,” ACM SIGCOMM Computer Communication Review, Vol. 38, No. 2, April 2008, pp. 69-74. Also available at http://www.openflow.org/documents/openflow-wp-latest.pdf

[12] A. Nakao, “Virtual Node Project: Virtualization Technology for Building New-Generation Networks,” NICT News, June 2010, No. 393, June 2010, pp. 1-6. http://www.nict.go.jp/en/data/pdf/NICT_NEWS_1006_E.pdf

[13] A. Nakao, A. Takahara, N. Takahashi, A. Motoki, Y. Kanada and K. Matoba, “VNode: A Deeply Programmable Network Testbed Through Network Virtualization,” submitted for publication, July 2012.

[14] NICT, New Generation Network Architecture AKARI: Its Concept and Design (ver2.0), NICT, Koganei, Tokyo, Japan, September, 2009. http://akari-project.nict.go.jp/eng/concept-design/AKARI_fulltext_e_preliminary_ver2.pdf

 

[15] T. Aoyama, “A New Generation Network: Beyond the Internet and NGN,” IEEE Commun. Mag., Vol. 47, No. 5, pp. 82-87, May 2008.

 

[16] N. Nishinaga, “NICT New-Generation Network Vision and Five Network Targets,” IEICE Trans. Commun., Vol. E93-B, No. 3, pp. 446-449, March 2010.

[17] J. P. Torregoza, P. Thai, W. Hwang, Y. Han, F. Teraoka, M. Andre, and H. Harai, “COLA: COmmon Layer Architecture for Adaptive Power Control and Access Technology Assignment in New Generation Networks,” IEICE Transactions on Communications, Vol. E94-B, No. 6, pp. 1526–1535, June 2011.

 

[18] V. P. Kafle, H. Otsuki, and M. Inoue, “An ID/Locator Split Architecture for Future Networks,” IEEE Communications Magazine, Vol. 48, No. 2, pp. 138–144, February 2010.

 

[19] ITU-T SG13, “Future Networks Including Mobile and NGN,” http://itu.int/ITU-T/go/sg13

 

[20] H. Furukawa, H. Harai, T. Miyazawa, S. Shinada, W. Kawasaki, and N. Wada, “Development of Optical Packet and Circuit Integrated Ring Network Testbed,” Optics Express, Vol. 19, No. 26, pp. B242–B250, December 2011.

 

[21] A. Kumar, D. Manjunath and J. Kuri, Communication Networking: An Analytical Approach, Elsevier 2004.

[22] L. Kleinrock and R. R. Muntz, “Processor-sharing queueing models of mixed scheduling disciplines for time-sharing queuing systems,” J. ACM, Vol. 19 (1972), pp. 464-472.

[23] H. Kobayashi and B. L. Mark, System Modeling and Analysis: Foundations of System Performance Evaluation, Pearson-Prentice Hall, 2009.

[24] H. Kobayashi, B. L. Mark and W. L. Turin, Probability, Random Processes and Statistical Analysis, Cambridge University Press, 2012.

[25] F. P. Kelly, “Loss Networks (invited paper),” Ann. Appl. Probab., Vol. 1, No. 3, pp. 319-378, 1991.

 

[26] Y. Kogan, “Asymptotic expansions for large closed and loss queueing networks,” Math. Prob. Eng. Vol. 8, No. 4-5, pp. 323-348, 2003.

 

 

 

Keynote speech at Euroview 2012

August 6th, 2012

I delivered a keynote address, “The New Generation Network (NwGN) Project: Its Promises and Challenges,” at the Euroview 2012 Conference, whose general theme was “Visions of Future Generation Networks,” held at the University of Würzburg, Germany on July 23 & 24, 2012. http://www.euroview2012.org/

Shown below are the full text (augmented with some background information) of the keynote and the slides. A similar lecture, with more technical details in the latter half of the talk, will be given at the 24th International Teletraffic Congress (ITC 24) in Krakow, Poland on September 4th-7th, 2012, and I will post that speech when it is done.

For the slides in PDF form, please click here.

 

The New Generation Network (NwGN) Project:

Its Promises and Challenges

Keynote Speech presented at Euroview 2012

July 23, 2012, University of Würzburg

Hisashi Kobayashi

The Sherman Fairchild University Professor Emeritus of

Electrical Engineering and Computer Science,

Princeton University, Princeton, New Jersey, USA

and

Executive Advisor

National Institute of Information and Communications Technology (NICT)

Koganei, Tokyo, Japan

 

Abstract: This presentation consists of two parts. The first part is an overview of the New Generation Network (NwGN) project, a future Internet research project at NICT (National Institute of Information and Communications Technology), Japan. Its architecture, named AKARI, has four main features: cross-layer optimization, ID/Locator split, virtual nodes, and integrated optical packet switching and optical paths. JGN-X is a testbed that provides an environment in which to implement AKARI and other future Internet architectures and to develop applications that run on these virtual networks.

The second part of this talk presents my personal observations about future Internet research in general, including the efforts made in the U.S., Europe, Japan and elsewhere. I question how several candidate architectures for the future Internet will converge to one good network architecture that is acceptable to all members of the research community and the various stakeholders. The anticipated difficulty will be exacerbated because the research community is not well equipped with quantitative characterizations of network performance. We propose some ideas and approaches that may remedy the current state of affairs.

About the Speaker: Hisashi Kobayashi is the Sherman Fairchild University Professor Emeritus of Princeton University, where he was previously Dean of the School of Engineering and Applied Science (1986-91). Currently he is Executive Advisor of NICT, Japan, for their New Generation Network. Prior to joining the Princeton faculty, he spent 15 years at the IBM Research Center, Yorktown Heights, NY (1967-82), and was the Founding Director of IBM Research-Tokyo (1982-86).

He is an IEEE Life Fellow, an IEICE Fellow, was elected to the Engineering Academy of Japan (1992), and received the 2005 Eduard Rhein Technology Award.

He is the author or coauthor of three books, “Modeling and Analysis: An Introduction to System Performance Evaluation Methodology” (Addison-Wesley, 1978), “System Modeling and Analysis: Foundations of System Performance Evaluation” (Pearson/Prentice Hall, 2009), and “Probability, Random Processes and Statistical Analysis” (Cambridge University Press, 2012). He was the founding editor-in-chief of “An International Journal: Performance Evaluation” (North Holland/Elsevier).

Text of the Speech

Good morning, President Forchel, conference participants and guests. It is a great honor to be invited to Euroview as a keynote speaker. I thank Prof. Phuoc Tran-Gia, Dr. Tobias Hossfeld, Dr. Rastin Pries and the organizing committee for providing me with this opportunity. Tobias suggested that I give an English version of the keynote I gave at a NICT conference held in Tokyo last November. So I will speak about NwGN, Japan’s future Internet project, in the first half of this talk. Then I would like to present some views that might be considered somewhat provocative: I will raise some questions and speculations, and make some suggestions concerning challenges in future networking research. Since the allocated time is rather short to cover details, I will post a full text on my blog, www.hisashikobayashi.com, where some background information and technical details will be given in an italic and smaller font.

Slide 2: Outline of the presentation

Here is the outline of my talk:

  1. What is NwGN and Why?
  2. AKARI Architecture

– Cross-layer optimization

– ID/Locator split architecture

– Network virtualization

– Integration of optical packets and optical paths

  3. JGN-X Test Bed
  4. Challenges in Future Network Research

 

I. What is NwGN and Why?

 

Slide 3: What is NwGN?

The NwGN project is, so to speak, a flagship project of networking research in Japan. NwGN intends to make a revolutionary jump from the current Internet. Its purpose is to design a new architecture and protocols, and to implement and verify them on a testbed called JGN-X.

Slide 4: Why NwGN?

Consider the explosively growing network traffic, mounting cyber attacks, and the mobile devices and sensors connected to the Internet. Then it should be rather obvious that the NGN (Next Generation Network), which is merely an extension of today’s IP-based Internet, will hit its performance limit sooner or later.

The NwGN project aims at a revolutionary change so as to meet societal needs of the future [1-3]. AKARI is the architecture of such a network and JGN-X is a testbed, on which we will implement and verify the new architecture and its protocols.

Slides 5 & 6: Requirements of NwGN

There are numerous requirements that we need to take into account concerning network services of the future. Here is a list of what I consider as requirements for the NwGN:

  1. Scalability (users, things, “big data”)
  2. Heterogeneity and diversity (in “clouds”)
  3. Reliability and resilience (against natural disasters)
  4. Security (against cyber attacks)
  5. Mobility management
  6. Performance
  7. Energy and environment
  8. Societal needs
  9. Compatibility (with today’s Internet)
  10. Extensibility (for the unforeseen and unexpected)

 

II. AKARI Network Architecture

Slide 7: The AKARI network architecture takes a layered structure like all network architectures we know of, but instead of adhering to static and strict boundaries between the layers, it takes an adaptive approach, adjusting layer boundaries depending on the load placed on the network and on resource usage. Such a design philosophy is referred to as “cross-layer optimization,” and is intended to improve quality of service under varying operational conditions. Such adaptive quality-of-service management is a subject pursued actively in the networking community at large.

Slide 8: ID and Locator in the Internet

In the current Internet, devices on the network are identified by their “IP addresses,” which are their identification numbers on the network layer. In the original Internet, i.e., the ARPANET, all end devices were host machines with fixed locations. Thus, there was no problem in interpreting IP addresses as “locators,” namely, the devices’ location information. In designing a future Internet, however, we must take into account that a majority of end devices will be mobile, with fixed-location devices being the exceptions.

Slide 9: ID/Locator Split Architecture

An end device or an enterprise network may be connected to the Internet via multiple links, and such a technique is referred to as “multihoming.” Its primary purposes are to increase the reliability and resilience and to mitigate a possible overload on one link or circuit.

In order to efficiently deal with the mobile devices and/or multihoming requirements, we should distinguish IDs and locators and assign two different sets of numbers to them. Then, even if a mobile or multihomed device’s locator changes in the network layer, its ID associated with communications in the upper layers will remain unchanged.

The set of mappings from IDs to locators is referred to as IDR (ID registry).

The development of mapping algorithms and a scheme for determining where and how to store the ID registry are both important issues for the split architecture. The split architecture is also useful for addressing security issues.

In the split architecture, not only locators, but also IDs are present in packet headers. So using IDs to enforce security or packet filtering is possible, and remains applicable even when the locators are changed due to mobility/multihoming. In the current Internet, the IP address in each packet is used as a key to enforce security or packet filtering. IPsec is an example of this location-based security. See RFC 2401: http://www.ietf.org/rfc/rfc2401.txt .

The split architecture is also effective against denial-of-service (DoS) attacks and man-in-the-middle (MitM) attacks, by relating IDs to some security credentials such as public keys and certificates. When an unknown device wants to communicate with a server, the server may ask the device to prove that the ID is associated with a public key and that the association has been certified by a reliable third party, before the server sets aside any resource (e.g., memory) for the session. The server may also ask the device to solve a puzzle of middle-level complexity before setting up the session.
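As a toy illustration of the registry idea, here is a sketch of an ID-to-locator mapping supporting mobility and multihoming. The class and method names are mine; they do not correspond to NwGN’s actual data structures or protocol messages:

```python
from dataclasses import dataclass, field

@dataclass
class IdRegistry:
    """A toy ID registry (IDR): maps a location-independent ID to the
    device's current locator(s)."""
    mapping: dict = field(default_factory=dict)     # ID -> set of locators

    def register(self, device_id: str, locator: str) -> None:
        self.mapping.setdefault(device_id, set()).add(locator)

    def move(self, device_id: str, old: str, new: str) -> None:
        # Mobility: the locator changes; the ID, and any upper-layer
        # session keyed on it, stays the same.
        locators = self.mapping[device_id]
        locators.discard(old)
        locators.add(new)

    def resolve(self, device_id: str) -> set:
        # Multihoming: several locators may come back; a sender can pick
        # one, or keep the rest for fail-over.
        return self.mapping.get(device_id, set())

idr = IdRegistry()
idr.register("host-42", "192.0.2.10")                 # IPv4 locator
idr.register("host-42", "2001:db8::10")               # IPv6 locator (multihomed)
idr.move("host-42", "192.0.2.10", "198.51.100.7")     # the device moved
print(idr.resolve("host-42"))
```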

There are two approaches to the ID/Locator Split Architecture. One is a host-based approach, in which the ID/Locator split protocols are implemented in the end hosts only. Its objectives are to achieve secure communications over the unsecured Internet and also to support mobility. As an example, consider the Host Identity Protocol (HIP) described in RFC 5201 (http://www.ietf.org/rfc/rfc5201.txt) and in P. Nikander, A. Gurtov and T. R. Henderson, “Host Identity Protocol (HIP): Connectivity, Mobility, Multihoming, Security, and Privacy over IPv4 and IPv6 Networks,” IEEE Communications Surveys & Tutorials, Vol. 12, No. 2, pp. 186-204, Second Quarter 2010.

 

The other approach is a router-based approach in which the ID/Locator split protocols are implemented in routers, not in end hosts. Its primary objective is to make the BGP (Border Gateway Protocol) routing table size smaller by using two different addressing spaces in edge and core networks. It is known as LISP (Locator/ID Separation Protocol). LISP is about to become an RFC. See

http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=05451761 .

We can get information about its implementation/standardization status as well as tutorial documents from this site: http://www.lisp4.net/ . Both the HIP and LISP ideas were generated by the IETF (Internet Engineering Task Force).

 

In the NwGN project we are implementing the ID/locator split in both hosts and edge routers so that we can get benefits of both Host Identity Protocol or HIP (for security, mobility) and Locator/ID Separation Protocol or LISP (for core routing scalability) [5, 6]. Additionally, our approach supports heterogeneous protocols in the edge networks (e.g., a host in an IPv4 network can communicate with another host located in an IPv6 network, and the host can move across heterogeneous networks). We are making application and transport layer protocols independent of the network layer protocols so that the same application can be transported over various network protocols.

 

The way in which the Internet is used is shifting from “communications from a device to another device” to “communications from data to humans.” When we wish to retrieve data or information, using a web browser and a web server, the data or information itself is the object of our interest, and it is immaterial from which device the data or information is fetched. A network architecture based on such a philosophy is called a “data centric” architecture.

In the ID/Locator Split Architecture, data and information can be treated as “things,” and we can assign IDs to them. Thus, the split architecture has an advantage of being applicable to a data-centric architecture as well.

Network Virtualization

I suppose that a majority of the audience is familiar with the notion of network virtualization, so I will skip a detailed definition of this term.

The notion of “virtualization” in computer technologies goes back to circa 1960, when virtual memory was introduced in the Atlas machine of the University of Manchester, UK. In 1972, IBM introduced VM/370, a virtual machine (VM) operating system that ran on System/370.

In the last decade, IT (information technology) departments of enterprises have begun to adopt a variety of virtualization technologies available as commercial products, ranging from server virtualization, storage virtualization, client (or desktop) virtualization to software virtualization, e.g., allowing Linux to run as a guest on top of a PC that is natively running a Microsoft Windows operating system. Such virtualization techniques allow multiple users and applications to dynamically share physical resources. Thus, they increase resource utilization and/or reduce electric energy consumption, as well as simplify complex administrative operations of IT.

Slide 10: Simply put, network virtualization chooses a subset of a collection of real (or physical) resources (routers, links, etc.) and functionalities (routing, switching, transport) of a real network (or multiple real networks) and combines them to form a logical network called a virtual network.

Slide 11: Virtual networks take different forms, depending on specific layers to which virtualization is applied. Here we illustrate what is termed “overlaid networks” (also known as “overlay networks”). Nodes in an overlaid network are connected by virtual links (or logical links) which are comprised of paths that are formed by combining multiple links in the network underneath. Distributed systems such as cloud computing, peer-to-peer (P2P) networks, and client-server applications (e.g., web browser and web server), can be viewed as overlaid networks running on the Internet. And the Internet itself is an overlaid network built on top of the telephone network.

Slide 12: Configuration of a Virtual Node

This slide shows the configuration of a “virtual node” designed by Prof. Akihiro Nakao’s group (The University of Tokyo and NICT) and implemented on JGN-X. The virtual node consists of two parts: one is called the “Redirector,” which handles the conventional routing function, and the other is the “Programmer,” which runs a program that implements the virtual node functions. Here, each “slice” corresponds to a “virtual network.”

Slide 13: Virtual node project and participating companies

The virtual node project has industrial partners, who are contributing greatly to turning the theory into practice. NTT is working on the domain controller, and Fujitsu on an access gateway that controls access to other networks (e.g., a cloud). Hitachi is responsible for a router with a custom hardware board for constructing virtual links, and NEC is developing a programmable environment at a node for flexible creation of network services. For details, see [7, 8].

Slide 14: Optical Packet and Optical Path

As already remarked, in the future network environment a majority of end devices will be mobile devices and sensors, connected by wireless access networks. But for a core network that requires broad bandwidth, optical links and optical networks will be very important. When we talk about a network architecture, we often say that the architecture should be independent of technologies, while its implementation may depend on available technologies. But this simple argument does not hold for an optical network architecture, which is quite different from that of wired or wireless networks. The main reason is that, unlike the electric signals that wired and wireless networks deal with, optical signals cannot yet be stored in inexpensive random access memory or processed by the operation circuits needed to build an arithmetic logic unit (ALU).

Packet switching is based on asynchronous time division multiplexing (ATDM, or statistical time division multiplexing), and with today’s optical technology it is not possible to switch or route multiplexed optical signals as they are. While the “payload” portion of the signal may remain in the optical domain, the packet header must be translated into an electric signal. We often use optical delay circuits or lines as buffers and try to maintain the high speed of optical signals. In order to make the best use of the speed of optical signals, wavelength division multiplexing (WDM) must be adopted. But WDM provides circuit switching, like frequency division multiplexing (FDM) and synchronous time division multiplexing. An end-to-end circuit that involves wavelength routers at intermediate nodes is referred to as an optical path.

Slide 15: Integrated optical packet and optical path system

In the NwGN architecture, we take advantage of our strength in optical technology and propose an architecture that integrates an optical packet switching system and an optical path circuit switching system. As shown in this slide, telemedicine, which requires real-time transmission of high-definition video, is an ideal application example of an optical path system. DCN (Dynamic Circuit Network), which will be mentioned in the discussion of the JGN-X testbed, is also a network that integrates the packet-switched Internet with optical circuit switching.

III. JGN-X: Testbed for NwGN

Slide 16: JGN-X network overview

NICT’s testbed effort for NwGN is called JGN-X, an evolutionary outgrowth of JGN (Japan Gigabit Network), which started in the year 2000 as a testbed for large-capacity networking. As its speed and capacity increased, the name changed to JGN2 (which supported a multicast environment and IPv6), then JGN2 plus, and finally the JGN-X project started in fiscal year 2011, where X stands for “eXtreme.”

The JGN-X testbed of NICT implements network control by OpenFlow and DCN (dynamic circuit network), as well as the network controlled by virtual nodes (also called the “virtual node plane”).

Here the term “plane” is used as an abbreviation of “control plane architecture.”

In other words, the JGN-X allows us to pursue an architectural study of the above three types of virtual networks.

The control scheme in the conventional Internet is primarily based on routing using IP addresses, whereas OpenFlow aims to improve the quality of service and increase the efficiency of the network by doing routing control at the level of individual flows, where a “flow” is defined as a communication determined by the combination of the MAC addresses, IP addresses and port numbers involved. NEC, which is a founding member of the OpenFlow Consortium, is developing a “programmable flow switch.” DCN integrates the packet-switching-based Internet and an all-optical network that performs on-demand circuit switching using the aforementioned wavelength division multiplexing (WDM). It is used in such applications as remote medical systems (i.e., telemedicine), the Large Hadron Collider (LHC) project at CERN in Switzerland, and other advanced science fields.
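The following sketch illustrates flow-level forwarding in the OpenFlow spirit: a flow is keyed on header fields, the switch consults a flow table, and a table miss is resolved by a controller that installs an entry for subsequent packets. This shows the concept only; it is not the OpenFlow wire protocol or any real controller API, and the match fields and policy are my own simplifications:

```python
from typing import NamedTuple

class FlowKey(NamedTuple):
    src_mac: str
    dst_mac: str
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int

flow_table: dict = {}        # FlowKey -> action string (e.g., an output port)

def controller_decide(key: FlowKey) -> str:
    # Hypothetical controller policy: web traffic out port 2, the rest port 1.
    return "output:2" if key.dst_port == 80 else "output:1"

def handle_packet(key: FlowKey) -> str:
    action = flow_table.get(key)
    if action is None:
        # Table miss: in OpenFlow the packet (or its header) goes to the
        # controller, which installs a flow entry for subsequent packets.
        action = controller_decide(key)
        flow_table[key] = action
    return action

pkt = FlowKey("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
              "10.0.0.1", "10.0.0.2", 5555, 80)
print(handle_packet(pkt))    # first packet of the flow consults the controller
print(handle_packet(pkt))    # later packets match the flow table directly
```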

Slide 17: JGN-X International Circuits

As this slide indicates, JGN-X is connected not only with various groups within Japan but also with the networking communities of the world.

Slide 18: Research around JGN-X

The JGN-X group also collaborates with the communities of advanced networking and cloud computing, and it provides an emulation environment for HPC (high performance computing). The objective of JGN-X is to provide an environment not only for research and development of the NwGN technologies, but also for development of network applications for the future.

 

Slide 19:

IV. Challenges in the Future Internet Research

The History and Culture of the Internet Research

Now I change gears and present my personal questions, speculations and suggestions concerning the challenges in future Internet research.

Although I talked exclusively about the NwGN project of NICT, there are a number of significant, perhaps more significant, research efforts taking place in the U.S., Europe and elsewhere, and I will defer to Mr. Chip Elliott, Dr. Peter Freeman, Prof. Raychaudhuri and other speakers in this conference and workshop for discussion of some of these efforts.

The NSF’s FIA (Future Internet Architecture) program supports MobilityFirst (Rutgers and 7 other universities), Named Data Networking (NDN; UCLA and 10 other universities), eXpressive Internet Architecture (XIA; CMU and 2 other universities), and NEBULA (U. of Penn and 11 universities). Each FIA project has its own comprehensive website where you can find more information than you could possibly digest. A recent survey paper in the July 2011 issue of IEEE Communications Magazine provides a good introduction to the FIA, GENI and EU programs. The article also allocates about half a page to AKARI and JGN-X. See J. Pan, S. Paul and R. Jain, “A Survey of the Research on Future Internet Architectures,” IEEE Communications Magazine, July 2011, pp. 26-35.

NSF also funds a testbed program called GENI (Global Environment for Network Innovations) (2005-present), which is managed by Mr. Chip Elliott of BBN Technologies, who holds quarterly meetings/workshops called GECs (GENI Engineering Conferences). I have attended several GEC meetings in the past three years, and I have been impressed by how fast each of the four testbed groups (called “GENI Control Frameworks” or simply “clusters”) has been making progress. The following four clusters (lead institutions) are currently supported: PlanetLab (Princeton University), ProtoGENI (Univ. of Utah), ORCA (Duke University and RENCI, the Renaissance Computing Institute) and ORBIT (Rutgers University).

In Europe, a collaboration under FP7 (the Seventh Framework Programme) on Future Internet research is referred to, somewhat confusingly, as the Future Internet Assembly (FIA). The EIFFEL (European Internet Future for European Leadership) program and the Future Internet Public-Private Partnership (FI-PPP) were launched in 2006 and 2011, respectively. As we assemble here, Germany has been sponsoring G-Lab (German Laboratory) through the BMBF (Bundesministerium für Bildung und Forschung; Federal Ministry of Education and Research), in addition to its participation in the aforementioned EU efforts.

As the slide on the “JGN-X international circuits” (Slide 17) implies, some of the testbeds of GENI and G-Lab are already connected to JGN-X, and many more testbeds will be connected; I am sure that the same can be said of GENI, G-Lab, and the others. Valuable exchanges of information on novel architectures and protocols are held regularly, like the one we are having at this Euroview.

Nevertheless, I fear that it will be extremely difficult, if not impossible, for all key players in the future Internet research community to come up with a universally agreeable architecture. First, there is the issue of the so-called NIH syndrome, where NIH stands for “Not Invented Here.” In other words, our egos, together with economic and political considerations, tend to make us reluctant to admit that other people’s ideas may be better than our own.

Another question: Will the backward compatibility with the existing Internet and applications be a decisive factor? Or can we agree on an “optimal” clean-slate architecture first, and then try to figure out the best feasible migration strategy? Or do we continue letting the existing Internet run, at least for a while, as one of the virtual networks to be supported together with the new Internet(s)?

Slide 20: How to Evaluate Architectures?

Coming up with a quantitative comparison of one network architecture against another is a rather difficult proposition. Will the complexity of any of the candidate future networks be too great for us to comprehend? Our inability to quantitatively characterize a network architecture seems to come not only from the limited state of affairs in mathematical modeling techniques, but also from the character, culture and history of the Internet community.

Slide 21: TCP/IP Networks

As you well know, the original TCP/IP network provides merely “best effort” service, and its performance guarantee was not an important issue; this historical aspect seems to dictate the culture and mentality of the Internet community even today. In the Internet literature, there have been very few quantitative characterizations and discussions. Researchers and implementers are primarily concerned with what the system and applications deliver, but not so much with evaluating how well or poorly the system works as compared with alternatives or against some “theoretical” limit. There are, of course, some exceptions, such as [14], to which I will refer in a minute.

The fields of performance analysis and optimal control of resource allocations were active and thriving in the 1970s through 1990s. There was a strong need and a big payoff in designing and operating an optimal multiprogramming time-sharing computer, which had to serve many users under the constraints of physical resources.

As the computing paradigm shifted from the client-server model to the peer-to-peer model, and as powerful workstations and PCs with fast processors, abundant storage, and much broader communication bandwidth became available for a fraction of the cost of a generation ago, there has been very little need or incentive to attain optimal performance through insightful analysis and clever control algorithms. Quantitative analysis of a system (even a back-of-the-envelope calculation) or simulation of a network system in a controlled environment has been replaced by quick prototyping of the target system.

Slide 22: B-ISDN vs. the Internet

In the research and development of B-ISDN, centered around ATM fast packet switching, which in the 1980s and 1990s was hailed as the vision of multimedia services for the 21st century, performance modeling and analysis of networks was very active. Unfortunately, the B-ISDN camp lost the race against the Internet camp, not because they were interested in performance modeling and analysis, but because they were slow in coming up with what were called “killer applications,” i.e., new and attractive applications. The closed and centrally controlled architecture of B-ISDN lost the game to the open architecture of the IP network: the so-called “end-to-end design principle” allowed many applications around the WWW to be rolled out to the consumer market [10].

It would be an interesting “Gedankenexperiment” to ponder what the world would be like today if B-ISDN had taken control of the telecommunications market of the 21st century, as was once envisioned by the telecommunication carriers of the world. Social networks such as Facebook and Twitter might not exist yet, and hence the revolutions in Egypt and other countries with dictatorial regimes might not have occurred.

The radical “computer trading” that ran on computers connected to the Internet might not have developed without safeguards, and might not have triggered the market crash (the so-called “Lehman Shock”) of September 15, 2008, or the “flash crash” of May 6, 2010. The world would not be threatened by the kind of cyber attacks we witness today. There would be no need to work on the future Internet, and most of us assembled here today would be working on research papers on better algorithms and performance improvements for ATM switches. I would be rich, with a lot of royalties flowing in and my books selling like hotcakes. It is too bad.

Slide 23: Over-reliance on Testbeds?

Although we are now dominated by the Internet and its culture, my own conviction is that prototyping and testbeds alone will never lead us to a quantitative understanding of system performance, reliability and security. Up to now, our inability to analyze and tune network performance has been compensated for by over-dimensioning of the network, which was possible because the technological improvements and cost reductions in network components such as processors, memory and communication bandwidth have been able to match the phenomenal growth in Internet users and the insatiable appetite for resources of new applications. But there is no guarantee that the cost/performance figures of network components will continue to improve geometrically as they have in the past, and the energy consumption of IT systems is now a serious concern, as listed in Slide 5.

Modeling and Analysis Issues of the Future Internet

Slide 24: Modeling and Analysis Issues

Network virtualization is certainly a very powerful tool that allows us to test multiple candidates for new network architectures and protocols in parallel. This technology should ultimately help us migrate from the existing Internet to new one(s). But as it stands now, very little attention and effort seem to be paid to the performance of each “slice” network, or to the performance limits and constraints of virtual networks. After all, network virtualization is nothing more than a form of (statistical) sharing of physical resources. A virtual network can be viewed as a network of processor sharing (PS) servers.

Slide 25: Processor sharing (PS) (see e.g., [11, 12, 13]) has proven to be a very powerful mathematical abstraction of “virtual processors” in a time-shared system. Similarly, it can represent “virtual links or circuits,” i.e., multiplexed streams of packets or data over a communication channel. A link congested by TCP flows can be modeled as a PS server.

Slide 26: Processor Sharing (PS), cont’d

The so-called “fair scheduling” can be viewed as a discipline that emulates processor sharing.

Processor sharing often leads to a very simple performance analysis, because of its robustness or insensitivity to statistical properties of traffic load.

N. Dukkipati et al. [14] compare the performance of TCP/IP algorithms against the theoretical limit implied by a processor-sharing model. I believe that more of this kind of analysis should be practiced by networking researchers.

Slide 27: Loss Network Model

The loss network theory (see e.g. [15, 12, 13]) is a rather recent development, and it is a very general tool that can characterize a network with resource constraints that supports multiple end-to-end circuits with different resource requirements. It can be interpreted as a generalization of the classical Erlang and Engset loss models, and its insensitivity and robustness with respect to the network traffic or load characteristics make this characterization very powerful.

Slide 28: Performance Analysis

Performance measures such as the “blocking probability” or the “call loss rate” are represented in terms of the normalization constant (the “partition function” of thermodynamics or statistical mechanics), just as performance measures such as server utilization, throughput and average queueing delay in a queueing network model are represented in terms of its normalization constant.

The complexity of an exact computation of the normalization constant grows exponentially as the network size (i.e., the number of nodes and links, the bandwidths, buffer sizes, and router or switch speeds) and/or the number of users (i.e., end-to-end connections to be supported by the network) grows. Fortunately, however, in such a regime an asymptotic analysis (see e.g. [16, 12]) becomes more accurate and often lends itself to a closed-form expression.

Slide 29: Open Loss Network (OLN)

Here we show what we call an open loss network (OLN), where the path of a call is open.

The number of links L in this example network is five, i.e., L = 5.

We define a call class as r=(c, τ), where c is the routing chain, and τ is a call type.

Slide 30: Generalized Erlang Loss Model

Then, for any given OLN, we can represent it by the loss station given in this slide, where L, which was the number of links in the OLN, is now the number of server types.

A call of class r holds $A_{l,r}$ lines simultaneously at link l, i.e., $A_{l,r}$ servers of type l.

$m_l$ = number of lines available at link l, i.e., the number of servers of type l.

The capacity constraint at each link l is $\sum_{r} A_{l,r}\, n_r \le m_l$, where $n_r$ denotes the number of class-r calls in progress.

We have a simple closed-form expression for the joint distribution of the numbers of calls of different classes in progress in the network, in terms of the normalization constants G. The blocking and call loss probabilities can also be found in terms of the G’s.

Slide 31: Mixed Loss Network (MLN)

A mixed loss network (MLN), as depicted here, can be viewed as a generalized Engset loss model. The network state distribution and the performance measures are again representable in terms of the normalization constants, and computational algorithms have been developed.
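Since the Engset model is the finite-source building block being generalized here, a small sketch of its blocking formula may be useful; the parameter values are mine:

```python
from math import comb

def engset_time_congestion(sources: int, servers: int, b: float) -> float:
    """Engset loss model: N finite sources, m servers, and per-source offered
    intensity b (arrival rate while idle times mean holding time). Returns the
    time congestion, i.e., the fraction of time all m servers are busy.
    (The call congestion seen by arrivals uses N-1 sources in place of N.)"""
    weights = [comb(sources, k) * b**k for k in range(servers + 1)]
    return weights[-1] / sum(weights)

# Example: 20 sources sharing 5 servers, b = 0.2 per source.
print(f"time congestion = {engset_time_congestion(20, 5, 0.2):.4f}")
```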

For large network parameters, recursive computation of the normalization constants may be prohibitive. However, the generating function of the normalization constant sequence can be obtained in a closed form and its inversion integral can be numerically evaluated. For very large network parameters, which will surely be the case for the future network, asymptotic approximation of the inversion integral will be applicable with high accuracy [16, 12].

Slide 32: Queueing and Loss Network

Finally, the diagram shown here illustrates the concept of a queueing-loss network (QLN), which contains both queueing subnetwork(s) and loss subnetwork(s). The network state distribution and the network performance measures can again be expressed in terms of the normalization constants.

The integrated optical-packet switching and optical paths system we discussed in Slide 15 can be formulated as a QLN. Please refer to [12] for detailed discussion.

Slide 33: For Further Information

For my further discussion on the modeling and analysis aspects, please refer to a forthcoming presentation at the ITC 24 to be held in Krakow, Poland in September 2012 [17].

Acknowledgments:

I thank Drs. Hiroaki Harai, Ved Kafle and Eiji Kawai of NICT, and Prof. Akihiro Nakao of the University of Tokyo and NICT, for their great help in preparing and improving this speech.

References

[1] NICT, “New Generation Network Architecture AKARI: Its Concept and Design (ver2.0),” NICT, Koganei, Tokyo, Japan, September 2009. http://akari-project.nict.go.jp/eng/concept-design/AKARI_fulltext_e_preliminary_ver2.pdf

 

[2] T. Aoyama, “A New Generation Network: Beyond the Internet and NGN,” IEEE Commun. Mag., Vol. 47, No. 5, pp. 82-87, May 2008.

 

[3] N. Nishinaga, “NICT New-Generation Network Vision and Five Network Targets,” IEICE Trans. Commun., Vol. E93-B, No. 3, pp. 446-449, March 2010.

[4] J. P. Torregoza, P. Thai, W. Hwang, Y. Han, F. Teraoka, M. Andre, and H. Harai, “COLA: COmmon Layer Architecture for Adaptive Power Control and Access Technology Assignment in New Generation Networks,” IEICE Transactions on Communications, Vol. E94-B, No. 6, pp. 1526–1535, June 2011.

 

[5] V. P. Kafle, H. Otsuki, and M. Inoue, “An ID/Locator Split Architecture for Future Networks,” IEEE Communications Magazine, Vol. 48, No. 2, pp. 138–144, February 2010.

 

[6] ITU-T SG13, “Future Networks Including Mobile and NGN,” http://itu.int/ITU-T/go/sg13

 

[7] A. Nakao, “Virtual Node Project: Virtualization Technology for Building New-Generation Networks,” NICT News, June 2010, No. 393, June 2010, pp. 1-6. http://www.nict.go.jp/en/data/pdf/NICT_NEWS_1006_E.pdf

[8] A. Nakao, A. Takahara, N. Takahashi, A. Motoki, Y. Kanada and K. Matoba, “VNode: A Deeply Programmable Network Testbed Through Network Virtualization,” submitted for publication, July 2012.

[9] H. Furukawa, H. Harai, T. Miyazawa, S. Shinada, W. Kawasaki, and N. Wada, “Development of Optical Packet and Circuit Integrated Ring Network Testbed,” Optics Express, Vol. 19, No. 26, pp. B242–B250, December 2011.

[10] H. Kobayashi, “An End to the End-to-End Arguments,” Euroview 2009, Würzburg, Germany, July 2009.

[11] L. Kleinrock and R. R. Muntz, “Processor-sharing queueing models of mixed scheduling disciplines for time-sharing queuing systems,” J. ACM, Vol. 19 (1972), pp. 464-472.

[12] H. Kobayashi and B. L. Mark, System Modeling and Analysis: Foundations of System Performance Evaluation, Pearson-Prentice Hall, 2009.

[13] H. Kobayashi, B. L. Mark and W. L. Turin, Probability, Random Processes and Statistical Analysis, Cambridge University Press, 2012.

[14] N. Dukkipati, M. Kobayashi, R. Zhang-Shen and N. McKeown, “Processor Sharing Flows in the Internet,” in H. de Meer and N. Bhatti (Eds.) IWQoS 2005, pp. 267-281, 2005.

 

[15] F. P. Kelly, “Loss Networks (invited paper),” Ann. Appl. Probab., Vol. 1, No. 3, pp. 319-378, 1991.

 

[16] Y. Kogan, “Asymptotic expansions for large closed and loss queueing networks,” Math. Prob. Eng. Vol. 8, No. 4-5, pp. 323-348, 2003.

 

[17] H. Kobayashi, “Modeling and Analysis Issues in the Future Internet,” Plenary Lecture at the 24th International Teletraffic Congress (ITC 24), Krakow, Poland, September 4th-7th, 2012.