Hisashi Kobayashi's Blog
Sherman Fairchild University Professor Emeritus of Electrical Engineering and Computer Science, Princeton University

Preface to the 35th Anniversary Issue of Performance Evaluation

This article will appear in the 35th Anniversary Issue of Performance Evaluation: An International Journal (Elsevier) in the fall of 2017.


It is a great honor to write a preface for this 35th Anniversary Issue of Performance Evaluation.  I take this opportunity to thank my successors as editor-in-chief, the late Martin Reiser (1986-1990), Werner Bux (1991-2007) and Philippe Nain (2008-2017), for their time and dedication in nurturing this journal into one of the most important forums for researchers in the field of performance evaluation.  I would also like to offer some reflections so that younger readers can see how this journal was born and has grown over the past 35 years.

In the fall of 1979, when I was a visiting professor at the Technische Hochschule Darmstadt, West Germany, as a recipient of the Alexander von Humboldt Foundation’s Senior U.S. Scientist Award, I was invited to Amsterdam by Mr. Frederickson, the publisher of North-Holland. The discussion topic over dinner was whether I would be interested in launching a new journal on performance evaluation as its editor-in-chief.  The publisher’s proposal was timely. There were already well-respected journals, such as the Journal of the ACM and the IEEE Transactions on Computers, where we could publish work related to performance evaluation, and the IFIP Working Group 7.3 on “computer system modeling” and ACM’s Special Interest Group on Measurement and Evaluation (SIGMETRICS) were regularly hosting biennial or annual conferences. There existed no journal, however, that was dedicated to performance evaluation.

Personally, it was also an opportune time:  I had just published my first book, Modeling and Analysis: An Introduction to System Performance Evaluation Methodology (Addison-Wesley, 1978), in the IBM Systems Programming Series, and had been granted a one-year sabbatical leave from IBM.  So I gladly accepted the proposal and immediately contacted a score of distinguished individuals in the field, asking them to serve on the International Board of Editors. All of the individuals I contacted graciously agreed to serve.

There were 20 members on the board, including myself. They are listed below in alphabetical order, with their affiliations at that time. The years in parentheses are the years of birth and death of those who have regrettably passed away since then:

  • Mátyás Arató (Univ. of Budapest, 1931-2015)
  • Edward G. Coffman, Jr. (Bell Labs)
  • J.Wim Cohen (Univ. of Utrecht, 1923-2000)
  • Pierre-Jacques Courtois (Philips Research Lab)
  • Domenico Ferrari (Univ. of California, Berkeley)
  • Erol Gelenbe (LRI-Université Paris Sud)
  • Ulrich Herzog (Univ. Erlangen-Nürnberg)
  • Philip Kiviat (SEI Computer Services)
  • Leonard Kleinrock (UCLA)
  • Hisashi Kobayashi (IBM T.J. Watson Research Center)
  • Yakov A. Kogan (Inst. of Control Sciences)
  • Simon S. Lam (Univ. of Texas, Austin)
  • Guy Louchard (Université Libre de Bruxelles)
  • Tohru Moto-Oka (Univ. of Tokyo, 1929-1985)
  • Martin Reiser (IBM Zurich Res. Lab, 1943-2017)
  • Mischa Schwartz (Columbia Univ.)
  • Ken C. Sevcik (Univ. of Toronto, 1944-2005)
  • Iwao Toda (Electrical Comm. Lab, NTT)
  • Troy Wilson (IBM Corp., Poughkeepsie)
  • Scott N. Yasler (Europ. Comp. Meas. Assn.)

The first issue [1], published in January 1981, contained six contributed articles:

  • M. Reiser (1943-2017), “Mean-Value Analysis and Convolution Method for Queue-Dependent Servers in Closed Queueing Networks”;
  • Paul J. Kuehn, “Performance of ARQ-Protocol for HDX-Transmission in Hierarchical Polling Systems”;
  • Manfred Ruschitzka (1943-2009), “Policy Function Scheduling”;
  • Simon S. Lam and A. Udaya Shankar, “A Derivation of Response Time Distributions for a Multi-Class Feedback Queueing Network”;
  • E. G. Coffman, Jr., H. O. Pollak, E. Gelenbe and R. C. Wood (1932-2013), “An Analysis of Parallel-Read Sequential-Writing Systems”;
  • James Wittneben and Dennis Kafura, “Working Set Measurements Based on Sampled Reference String Information.”

In addition, there was a timely book review article:

  • R. Schassberger, on F. P. Kelly, Reversibility and Stochastic Networks (Wiley, 1980).

In the 1970s and early 1980s, performance evaluation of computers was concerned with such issues as the allocation and scheduling of limited physical resources, i.e., processors and memory/storage. A multiprogrammed system was often modeled by a central server model, or a closed queueing network.  Efficient computational algorithms, such as the convolution algorithm and MVA (mean value analysis), for evaluating system throughput and user response time were hot research topics.  Optimal design of storage hierarchies and page replacement algorithms was investigated, together with empirical studies of program behavior, using such notions as the “working set.”
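To make the flavor of that era concrete, here is a minimal sketch of exact MVA for a closed product-form network with single-server stations. It is not taken from the journal, and the central-server example at the bottom (visit ratios and service times) is purely hypothetical.

    # Exact Mean Value Analysis (MVA) for a closed product-form queueing
    # network with M load-independent (single-server) queueing stations.
    # Illustrative sketch only; the example numbers below are made up.
    def mva(visits, service_times, num_customers):
        """Return per-station mean response times, system throughput,
        and per-station mean queue lengths for the given population."""
        M = len(visits)
        queue_len = [0.0] * M          # mean queue lengths with 0 customers
        for n in range(1, num_customers + 1):
            # Arrival theorem: an arriving customer sees the network in
            # equilibrium with n-1 customers, so R_i(n) = S_i * (1 + Q_i(n-1)).
            resp = [service_times[i] * (1.0 + queue_len[i]) for i in range(M)]
            # Little's law for the whole network: X(n) = n / sum_i V_i * R_i(n)
            X = n / sum(visits[i] * resp[i] for i in range(M))
            # Little's law per station: Q_i(n) = X(n) * V_i * R_i(n)
            queue_len = [X * visits[i] * resp[i] for i in range(M)]
        return resp, X, queue_len

    # Hypothetical central-server example: a CPU and two I/O devices, 10 jobs.
    resp, X, q = mva(visits=[1.0, 0.6, 0.4],
                     service_times=[0.02, 0.08, 0.05],
                     num_customers=10)
    print("System throughput:", X)
    print("Mean queue lengths:", q)

The appeal of MVA over the convolution algorithm was that it yields exact throughputs and mean response times by this simple recursion on the population size, without ever forming the product-form normalization constant.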

Data networks, or computer networks, began to play an important role in information processing in the mid-1970s, when IBM introduced SNA (the Systems Network Architecture), followed by the establishment of the OSI (Open Systems Interconnection) Reference Model by ISO (the International Organization for Standardization) in 1983.

The vision for the 21st century’s digital information services set by the telecommunications industry in the 1980s was an end-to-end virtual-circuit service, named B-ISDN (Broadband Integrated Services Digital Network), whose underlying multiplexing mechanism was a fast packet-switching technique called ATM (Asynchronous Transfer Mode), standardized in 1988.  The telecommunications industry expected to provide digital services through an evolution from the circuit switching previously used for voice services to virtual-circuit switching, a technology that provided the benefits of statistical multiplexing while retaining end-to-end quality-of-service control.

Numerous papers on B-ISDN network design and analysis were published in the Performance Evaluation journal during the 1980s and 1990s.  But the world did not follow the path set by the telecom industry.  A major reason was that in the LAN (Local Area Network) world, IP (Internet Protocol) and Ethernet had already taken hold as the de facto standards, and equipment based on those standards was much simpler, easier to deploy and generally cheaper than ATM equipment, although ATM switch prices dropped very close to those of Ethernet switches after publication of the ATM Forum UNI/NNI specifications.  The distributed architecture of the Internet enabled its rapid expansion, whereas the growth of B-ISDN was somewhat stifled by a strict switching hierarchy and more complex operations.  If the B-ISDN camp had succeeded in conquering the LAN world with ATM equipment and had come up with killer applications faster than the Internet camp did, the research environment for the performance evaluation community today would likely be much more agreeable to my taste.

The telecommunications industry has traditionally been run by electrical communication engineers who are well trained in probability theory and random processes, because modern analog and digital communications are based on statistical communication theory, which has its origin in the detection and estimation of radar signals. It is relatively easy for electrical engineers to pick up traffic and queueing theory so as to analyze or predict the performance of a function or service to be introduced.

The Internet community, on the other hand, has been led by computer scientists who are skilled in programming. They are generally inclined to simply implement a new function or service and see how it performs, instead of analyzing its performance before it is implemented.

This cultural difference between the telecommunication and Internet communities has had, in my opinion, a profound impact on the performance evaluation community represented by the readership of this journal.  Except for a handful of prominent schools, most computer science departments in the U.S., to my knowledge, have not offered decent training in probability theory, let alone queueing theory.  The Internet community today does seem to recognize the need for performance analysis, but it is viewed as supplementary information rather than as central to the design process.  In my view the Internet still has many bottlenecks and performance problems that would benefit from better performance analysis and a more quantitative culture among the researchers and engineers working in the area.

With the rapid decline in the cost of processors, memory and communication bandwidth that we have witnessed in the past several decades, performance has not been a major concern of the Internet community in general.  Internet researchers have largely been content with “best effort” service instead of performance guarantees. Few in the Internet community seemed to care about the performance of virtualized networks in the way that we seriously analyzed virtual memory in the 1970s. But according to Prof. Dipankar Raychaudhuri of Rutgers University, there is an increasing number of papers on virtual network performance and cloud computing performance, which is an encouraging sign.  Thus far, the weak academic training in stochastic modeling has not been a serious issue, as long as the decline in the cost of hardware resources continues to outpace the demand for those resources, allowing any system to be over-provisioned when it is designed or installed. For instance, until a few years ago we used to receive warning messages from the computer center saying that the memory space allocated on the server was getting full, but such warnings are rare these days.

But this situation of abundant resources might change if the forthcoming IoT (Internet of Things) era begins to place a significant burden on those resources.  Prof. Raychaudhuri observes that emerging ultra-high-bandwidth and low-latency applications such as augmented reality, virtual reality, connected cars and industrial control will also drastically change the performance environment of the Internet.  These scenarios require careful analysis of latency in addition to the traditional focus on network throughput and efficiency.  It will not be possible to over-provision future “real-time” IoT systems built on heterogeneous edge networks and edge clouds, where statistical multiplexing gains are intrinsically limited, so performance evaluation will become important in this setting.

Another noticeable change in recent years is that an increasing number of computer science majors have been getting serious about studying probability theory and statistical analysis, primarily because the field of machine learning is attracting talented students.  My graduate course, “Random Processes in Information Systems,” was drawing many able computer science majors, partly because the textbook [2] has a chapter on machine learning.  I believe that advancing the state of the art in machine learning and data analytics would be a fertile ground for the performance evaluation community.  After all, the ultimate goal of performance evaluation is to help one find the best solution to a given problem most efficiently, rather than the optimal use of resources per se, which was a primary guiding principle of performance evaluation in the past.

Acknowledgments:  I would like to thank Prof. Dipankar Raychaudhuri of Rutgers University and Prof. Brian L. Mark of George Mason University for their valuable comments and suggestions.

Hisashi Kobayashi
Dean of Engineering Emeritus, and Sherman Fairchild University Professor Emeritus of Electrical Engineering and Computer Science, Princeton University

[1]   Performance Evaluation: An International Journal, Volume 1, Number 1, January 1981, North-Holland.

[2]   H. Kobayashi, B. L. Mark and W. Turin, Probability, Random Processes and Statistical Analysis, Cambridge University Press, 2012; Chapter 21, “Probabilistic Models in Machine Learning,” pp. 615-644.

 
