<br><h3> Chapter One </h3> <b>Introduction to Ranking</b> <p> It was right around the turn of the millennium when we first became involved in the field of ranking items. The now seminal 1998 paper <i>The Anatomy of a Large-Scale Hypertextual Web Search Engine</i> used Markov chains, our favorite mathematical tool, to rank webpages. Its authors, two little-known graduate students, had used Markov chains to improve search engine rankings, and their method was so successful that it became the foundation for their fledgling company, which was soon to become the extremely well-known search engine Google. The more we read about ranking, the more involved in the field we became. Since those early days, we've written one book, <i>Google's PageRank and Beyond: The Science of Search Engine Rankings</i>, and a few dozen papers on the topic of ranking. In 2009, we even hosted the first annual Ranking and Clustering Conference for the southeastern region of the U.S. <p> As applied mathematicians, our attraction to the science of ranking was natural, but we also believe ranking has a universal draw for the following reasons. <p> The problem of ranking is elegantly simple—arrange a group of items in order of importance—yet some of its solutions can be complicated and full of paradoxes and conundrums. We introduce a few of these intriguing complications later in this chapter as well as throughout the book. <p> There is a long and glorious tradition associated with the problem of ranking that dates back at least to the 13th century. The asides sprinkled throughout this book introduce some of the major and more colorful players in the field of ranking. <p> Today, progress on the ranking problem appears to be reaching a peak in terms of activity and interest. One reason for the increased interest comes from today's data collection capabilities. 
There is no shortage of interesting, real datasets in nearly every field imaginable, from the real estate and stock markets to sports and politics to education and psychology. Another reason for the increased interest in ranking relates to a growing cultural trend fostered in many industrialized nations. In America, especially, we are evaluation-obsessed, which in turn makes us ranking-obsessed given the close relationship between ranking and evaluation. On a daily basis you probably have firsthand contact with dozens of rankings. For example, on the morning drive, there is the top-10 list of songs you hear on the radio. Later you receive your quarterly numerical evaluations from your boss or teacher (and leave happy because you scored in the 95th percentile). During your lunch break, you peek at the sports section of the newspaper and find the rankings of your favorite team. A bit later during your afternoon break, you hop online to check the ranking of your fantasy league team. Meanwhile at home in your mailbox sits the next Netflix movie from your queue, which arrived courtesy of the postal system and the company's sophisticated ranking algorithm. Last and probably most frequent are the many times a day that you let a search engine help you find information, buy products, or download files. Any time you use one of these virtual machines, you are relying on advanced ranking techniques. Ranking is so pervasive that the topic even appeared in a recent xkcd comic. <p> Several chapters of this book are devoted to explaining about a dozen of the most popular ranking techniques underlying some common technologies of our day. However, we note that the ranking methods we describe here are largely based on matrix analysis or optimization, our specialties. Of course, there are plenty of ranking methods from other specialties such as statistics, game theory, and economics. 
<p> Netflix, the online movie rental company that we referred to above, places such importance on its ability to accurately rank movies for its users that in 2007 it hosted a competition with one of the largest purses ever offered in our capitalistic age. Their Netflix Prize awarded $1 million to the individual or group that improved their recommendation system by 10%. The contest captivated us as it did many others. In fact, we applauded the spirit of their competition, for it harked back to the imperial and papal commissions of a much earlier time. Some of the greatest art of Italy was commissioned after an artist outdid his competition with a winning proposal to a sponsored contest. For example, the famous baptistery doors on the Cathedral in Florence were created by Lorenzo Ghiberti after he won a contest in 1401 that was sponsored by the Wool Merchants' Guild. It is nice to see that the mathematical problem of ranking has inspired a return to the contest challenge, wherein any humble aspiring scientist has a chance to compete and possibly win. It turns out that the Netflix Prize was awarded to a team of top-tier scientists from BellKor (page 134). Much more of the Netflix story is told in the asides in this book. <p> Even non-scientists have an innate connection to ranking, for humans seem to be psychologically wired for the comparisons from which rankings are built. It is well-known that humans have a hard time ranking any set of more than about five items. On the other hand, we are particularly adept at pair-wise comparisons. In his bestseller <i>Blink</i>, Malcolm Gladwell makes the argument that snap judgments made in the blink of an eye often make the difference between life and death. Evolution has recognized this pattern and rewarded those who make quick comparisons. In fact, such comparisons occur dozens of times a day as we compare ourselves to others (or former versions of ourselves). I look younger than her. She's taller than I am. Jim's faster than Bill. 
I feel thinner today. Such pair-wise comparisons are at the heart of this book because every ranking method begins with pair-wise comparison data. <p> Lastly, while this final fact may be of interest to just a few of you, it is now possible to get a Ph.D. in ranking sports. In fact, one of our students, Anjela Govan, recently did just that, graduating from N. C. State University with her Ph.D. dissertation on <i>Ranking Theory with Application to Popular Sports</i>. Well, she actually passed the final oral exam on the merit of the mathematical analysis she conducted on several algorithms for ranking football teams, but it still opens the door for those extreme sports fans who love their sport as much as they love data and science. <p> <p> <b>Social Choice and Arrow's Impossibility Theorem</b> <p> This book deals with ranking items, and the classes of items can be as diverse as sports teams, webpages, and political candidates. This last class of items invites us to explore the field of social choice, which studies political voting systems. Throughout history, humans have been captivated by voting systems. Early democracies, such as that of ancient Greece, used the standard plurality voting system, in which each voter submits a single vote for his or her top choice. The advantages and disadvantages of this simple standard have been debated ever since. In fact, the French mathematicians Jean-Charles de Borda and Marie Jean Antoine Nicolas Caritat, the marquis de Condorcet, were both rather vocal about their objections to plurality voting, each proposing his own new method, both of which are discussed later in this book—in particular see the aside concerning BCS rankings on page 17 and the discussion on page 165. <p> Plurality voting is contrasted with preference list voting, in which each voter submits a <i>preference list</i> that places the candidates in a ranked order. 
These ranked lists are somehow aggregated (see Chapters 14 and 15 for more on rank aggregation) to determine the overall winner. Preferential voting is practiced in some countries, for instance, Australia. <p> Democratic societies have long searched for a perfect voting system. However, it wasn't until the last century that the mathematician and economist Kenneth Arrow thought to change the focus of the search. Rather than asking, "What is the perfect voting system?", Arrow instead asked, "Does a perfect voting system exist?" <p> <p> <b>Arrow's Impossibility Theorem</b> <p> The title of this section foreshadows the answer. Arrow found that his question about the existence of a perfect voting system has a negative answer. In 1951, as part of his doctoral dissertation, Kenneth Arrow proved his so-called Impossibility Theorem, which describes the limitations inherent in any voting system. This fascinating theorem states that no voting system with three or more candidates can simultaneously satisfy the following four common-sense criteria. <p> 1. Arrow's first requirement demands that every voter be able to rank the candidates in any arrangement he or she desires. For example, it would be unfair if an incumbent were automatically ranked in the top five. This requirement is often referred to as the <i>unrestricted domain</i> criterion. <p> 2. Arrow's second requirement concerns a subset of the candidates. Suppose that within this subset voters always rank candidate A ahead of candidate B. Then this rank order should be maintained when expanding back to the set of all candidates. That is, changes in the order of candidates outside the subset should not affect the ranking of A and B relative to each other. This property is called the <i>independence of irrelevant alternatives</i>. <p> 3. 
Arrow's third requirement, called the <i>Pareto principle</i>, states that if all voters choose A over B, then a proper voting system should always rank A ahead of B. <p> 4. The fourth and final requirement, called non-dictatorship, states that no single voter should have disproportionate control over an election. More precisely, no voter should have the power to dictate the rankings. <p> <p> This theorem and the accompanying dissertation from 1951 were judged so valuable that in 1972 Kenneth Arrow was awarded the Nobel Prize in Economics. While Arrow's four criteria seem obvious or self-evident, his result certainly is not. He proved that it is impossible for any voting system to satisfy all four common-sense criteria simultaneously. Of course, this includes all existing voting systems as well as any clever new systems that have yet to be proposed. As a result, the Impossibility Theorem forces us to have realistic expectations about our voting systems, and this includes the ranking systems presented in this book. Later we also argue that some of Arrow's requirements are less pertinent in certain ranking settings, and thus violating an Arrow criterion carries little or no consequence in such settings. <p> <p> <b>Small Running Example</b> <p> This book presents a handful of methods for ranking items in a collection. Typically, throughout this book, the items are sports teams. Yet every idea in the book extends to any set of items that needs to be ranked. This will become clear from the many interesting non-sports applications that appear in the asides and example sections of every chapter. In order to illustrate the ranking methods, we will use one small example repeatedly. Because sports data is so plentiful and readily available, our running example uses data from the 2005 NCAA football season. 
In particular, it contains an isolated group of Atlantic Coast Conference teams, all of whom played each other, which allows us to create simple traditional rankings based on win-loss records and point differentials. Table 1.1 shows this information for these five teams. Matchups with teams not in Table 1.1 are disregarded. <p> This small example also allows us to emphasize, right from the start, a linguistic pet peeve of ours. We distinguish between the words <i>ranking</i> and <i>rating</i>. Though often used interchangeably, we use these words carefully and distinctly in this book. A <i>ranking</i> refers to a rank-ordered list of teams, whereas a <i>rating</i> refers to a list of numerical scores, one for each team. For example, from Table 1.1, using the elementary ranking method of win-loss record, we obtain the following ranked list. <p> meaning that Duke is ranked 5th, Miami 1st, etc. On the other hand, we could use the point differentials from Table 1.1 to generate the following rating list for these five teams. <p> Sorting a rating list produces a ranking list. Thus, the ranking list, based on point differentials, is <p> which differs only slightly from the ranking list produced by win-loss records. Just how much these two differ is a tricky question and one we postpone until much later in the book. In fact, the difference between two ranked lists is a topic to which we devote an entire chapter, Chapter 16. <p> In closing, we note that every rating list creates a ranking list, but not vice versa. Further, a ranking list of length <i>n</i> is a permutation of the integers 1 to <i>n</i>, whereas a rating list contains real (or possibly complex) numbers. <p> <p> <b>Ranking vs. Rating</b> <p> A <i>ranking</i> of items is a rank-ordered list of the items. Thus, a ranking vector is a permutation of the integers 1 through <i>n</i>. <p> A <i>rating</i> of items assigns a numerical score to each item. A rating list, when sorted, creates a ranking list. 
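<p> The rating-to-ranking relationship just described can be sketched in a few lines of code. The team names and point-differential scores below are invented placeholders for illustration, not the actual values from Table 1.1.

```python
def ratings_to_ranking(ratings):
    """Sort a rating list (numerical scores) into a ranking list.

    Returns a dict mapping each item to its rank (1 = best), i.e. a
    permutation of the integers 1..n, matching the definition above.
    """
    ordered = sorted(ratings, key=ratings.get, reverse=True)
    return {team: rank for rank, team in enumerate(ordered, start=1)}

# Hypothetical point-differential ratings (not the Table 1.1 data)
ratings = {"TeamA": 124, "TeamB": -91, "TeamC": 45, "TeamD": -3, "TeamE": -75}
ranking = ratings_to_ranking(ratings)
# TeamA, with the highest rating, receives rank 1; TeamB receives rank 5
```

Note the asymmetry stressed above: the rating list carries strictly more information, since many different rating lists sort to the same ranking list.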
<p> <p> <h3> Chapter Two </h3> <b>Massey's Method</b> <p> <p> The Bowl Championship Series (BCS) is a rating system for NCAA college football that was designed to determine which teams are invited to play in which bowl games. The BCS has become famous, and perhaps notorious, for the ratings it generates for each team in the NCAA. These ratings are assembled from two sources, humans and computers. Human input comes from the opinions of coaches and media. Computer input comes from six computer and mathematical models—details are given in the aside on page 17. The BCS ratings for the 2001 and 2003 seasons are regarded as particularly controversial among sports fans and analysts. The flaws in the BCS selection system as opposed to a tournament playoff are familiar to most, including the President of the United States—read the aside on page 19. <p> <p> <b>Initial Massey Rating Method</b> <p> In 1997, Kenneth Massey, then an undergraduate at Bluefield College, created a method for ranking college football teams. He wrote about this method, which uses the mathematical theory of least squares, as his honors thesis. In this book we refer to it as the Massey method, despite the fact that there are actually several other methods attributed to Ken Massey. Massey has since become a mathematics professor at Carson-Newman College and today continues to refine his sports ranking models. Professor Massey has created various rating methods, one of which is used by the Bowl Championship Series (or BCS) system to select NCAA football bowl matchups—see the aside on page 17. The Colley method, described in the next chapter, is also one of the six computer rating systems used by the BCS. Because the details of the Massey method that is used by the BCS are vague, we describe the Massey method for which we have complete details, the one from his thesis. 
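<p> As a preview of the least-squares idea behind the method, consider the following minimal sketch. Each game between teams i and j contributes one equation, rating(i) − rating(j) ≈ point differential; the normal equations of this least-squares problem are singular, so (as in Massey's treatment) one equation is replaced by the constraint that the ratings sum to zero. The three-team schedule and scores below are invented for illustration, not taken from any real season.

```python
def massey_ratings(teams, games):
    """Least-squares (Massey-style) ratings.

    games: list of (winner, loser, point_differential) tuples.
    Builds the normal equations M r = p, where M has the number of games
    played on the diagonal and minus the head-to-head game counts off it,
    and p holds each team's cumulative point differential.
    """
    n = len(teams)
    idx = {t: k for k, t in enumerate(teams)}
    M = [[0.0] * n for _ in range(n)]
    p = [0.0] * n
    for w, l, d in games:
        i, j = idx[w], idx[l]
        M[i][i] += 1; M[j][j] += 1
        M[i][j] -= 1; M[j][i] -= 1
        p[i] += d;    p[j] -= d
    # M is singular; replace the last equation with sum of ratings = 0
    M[-1] = [1.0] * n
    p[-1] = 0.0
    return dict(zip(teams, solve(M, p)))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# Hypothetical three-team season: A beat B by 10, B beat C by 5, A beat C by 15
teams = ["A", "B", "C"]
games = [("A", "B", 10), ("B", "C", 5), ("A", "C", 15)]
r = massey_ratings(teams, games)
# The fitted ratings reproduce each game's differential and sum to zero
```

Because this toy schedule is perfectly consistent, the least-squares fit is exact; with real, contradictory data the method returns the ratings minimizing the squared prediction errors.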
Readers curious about Massey's newer, BCS-implemented model can peruse his sports ranking Web site (masseyratings.com) for more information. <p> <i>(Continues...)</i> <p> <p> <p> <!-- copyright notice --> <br></pre> <blockquote><hr noshade size='1'><font size='-2'> Excerpted from <b>Who's #1? The Science of Rating and Ranking</b> by <b>Amy N. Langville and Carl D. Meyer</b>. Copyright © 2012 by Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.