Roughgarden, Tim
Overview
Works: 35 works in 49 publications in 1 language and 286 library holdings

Roles: Author, Thesis advisor, Editor, Organizer of meeting
Publication Timeline

Most widely held works by Tim Roughgarden
Selfish routing and the price of anarchy by Tim Roughgarden (Book)
8 editions published in 2005 in English and held by 221 WorldCat member libraries worldwide
"Most of us prefer to commute by the shortest route available, without taking into account the traffic congestion that we cause for others. Many networks, including computer networks, suffer from some type of this 'selfish routing.' In Selfish Routing and the Price of Anarchy, Tim Roughgarden studies the loss of social welfare caused by selfish, uncoordinated behavior in networks. He quantifies the price of anarchy (the worst-possible loss of social welfare from selfish routing) and also discusses several methods for improving the price of anarchy with centralized control." --Jacket
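The "price of anarchy" described above can be seen concretely in Pigou's classic two-link example, sketched below (an illustration of the general concept, not an excerpt from the book: the network, cost functions, and variable names are the standard textbook choices).

```python
# Pigou's example: one unit of traffic over two parallel links with
# cost functions c1(x) = 1 (constant) and c2(x) = x (congestible).

def total_cost(x):
    """Total travel cost when a fraction x of traffic uses the x-link."""
    return (1 - x) * 1 + x * x

# Equilibrium: the x-link is never worse (c2(x) = x <= 1), so all
# selfish traffic uses it, giving total cost c2(1) = 1.
equilibrium_cost = total_cost(1.0)

# Social optimum: minimize total cost over all traffic splits.
optimal_cost = min(total_cost(i / 10000) for i in range(10001))

price_of_anarchy = equilibrium_cost / optimal_cost
print(round(price_of_anarchy, 3))  # 1.333, i.e. 4/3
```

The 4/3 bound here is not an accident: it is the worst case over all networks with affine cost functions, one of the central results quantified in the book.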
Communication complexity (for algorithm designers) by Tim Roughgarden (Book)
2 editions published in 2016 in English and held by 11 WorldCat member libraries worldwide
This text collects the lecture notes from the author's course "Communication Complexity (for Algorithm Designers)," taught at Stanford in the winter quarter of 2015. The two primary goals of the text are: (1) Learn several canonical problems in communication complexity that are useful for proving lower bounds for algorithms (disjointness, index, gap-hamming, etc.). (2) Learn how to reduce lower bounds for fundamental algorithmic problems to communication complexity lower bounds. Along the way, readers will also: (3) Get exposure to lots of cool computational models and some famous results about them: data streams and linear sketches, compressive sensing, space-query time trade-offs in data structures, sublinear-time algorithms, and the extension complexity of linear programs. (4) Scratch the surface of techniques for proving communication complexity lower bounds (fooling sets, corruption arguments, etc.)
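The "fooling sets" technique mentioned in item (4) can be checked mechanically for small instances. The sketch below (an illustration with n = 4, not code from the text) verifies the standard fooling set for set disjointness: the pairs (S, complement of S) all produce the answer "disjoint," yet no two of them can share a monochromatic rectangle, so any deterministic protocol needs at least log2(2^n) = n bits.

```python
from itertools import combinations

n = 4
universe = frozenset(range(n))
subsets = [frozenset(s) for r in range(n + 1) for s in combinations(range(n), r)]

# The fooling set: every S paired with its complement (always disjoint).
fooling = [(S, universe - S) for S in subsets]
assert all(not (x & y) for x, y in fooling)

# Fooling property: for any two distinct pairs, one of the "crossed"
# inputs intersects, so the pairs cannot lie in the same 1-rectangle.
for (x1, y1), (x2, y2) in combinations(fooling, 2):
    assert (x1 & y2) or (x2 & y1)

print(len(fooling))  # 16 = 2^n, so at least n bits of communication
```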
Proceedings of the Sixteenth ACM Conference on Economics and Computation by Tim Roughgarden
1 edition published in 2015 in English and held by 6 WorldCat member libraries worldwide
Algorithms illuminated : Part 2, graph algorithms and data structures by Tim Roughgarden (Book)
1 edition published in 2018 in English and held by 6 WorldCat member libraries worldwide
Algorithmic game theory (Book)
6 editions published between 2007 and 2013 in English and held by 6 WorldCat member libraries worldwide
With contributions from major researchers in the field, 'Algorithmic Game Theory' presents a comprehensive treatment of this important practical field
Communication complexity (for algorithm designers) by Tim Roughgarden (Book)
1 edition published in 2016 in English and held by 3 WorldCat member libraries worldwide
Algorithms illuminated by Tim Roughgarden (Book)
2 editions published in 2017 in English and held by 3 WorldCat member libraries worldwide
Lightweight coloring and desynchronization for networks
1 edition published in 2009 in English and held by 2 WorldCat member libraries worldwide
EC'15 : proceedings of the 16th ACM Conference on Economics and Computation, June 15-19, 2015, Portland, OR, USA by EC (Book)
1 edition published in 2015 in English and held by 2 WorldCat member libraries worldwide
Algorithms Design and Analysis, Part 1 [self-study course]
1 edition published in 2011 in English and held by 1 WorldCat member library worldwide
In this course you will learn several fundamental principles of algorithm design: divide-and-conquer methods, graph algorithms, practical data structures, randomized algorithms, and more
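A canonical instance of the divide-and-conquer principle in the course description is merge sort; the sketch below also counts inversions as a byproduct of merging (an illustrative example in this spirit, not code from the course itself).

```python
def sort_and_count(a):
    """Return (sorted copy of a, number of inversions in a)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, inv_l = sort_and_count(a[:mid])    # conquer the left half
    right, inv_r = sort_and_count(a[mid:])   # conquer the right half
    merged, inv_split = [], 0
    i = j = 0
    while i < len(left) and j < len(right):  # combine step: merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            # right[j] precedes every remaining left element:
            # each such pair is a "split" inversion
            merged.append(right[j]); j += 1
            inv_split += len(left) - i
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, inv_l + inv_r + inv_split

print(sort_and_count([3, 1, 4, 1, 5]))  # ([1, 1, 3, 4, 5], 3)
```

The recurrence T(n) = 2T(n/2) + O(n) gives the O(n log n) running time that makes this the standard first example of the paradigm.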
Algorithmic problems in social and geometric influence by Aneesh Sharma
1 edition published in 2010 in English and held by 1 WorldCat member library worldwide
Online social and information networks promise to yield insights into social relationships and also open possibilities for new market paradigms. Moving forward towards these goals requires addressing several new computational challenges, and in this thesis we explore two themes: algorithmic challenges arising due to the massive scale of online social networks, and questions related to social network monetization. In the first part of the thesis, we study the algorithmic problem of finding nearest neighbors, which is computationally challenging due to the large scale and dynamic nature of social networks. In particular, we map the relationship discovery problem to a geometric proximity problem and undertake a theoretical study of algorithms needed for solving these proximity problems in the highly dynamic online setting. Our contributions are new data structures for resolving these proximity queries that can be updated dynamically for a changing point set. In the second part of the thesis, we study two problems relating to social network monetization. First, we propose a new monetization paradigm for social networks that works via a referral scheme to market a product on social networks. In particular, we show that computational limitations limit optimal implementations of such referral schemes, and propose an algorithm that leverages the social network topology to achieve revenue that is guaranteed to be within a constant factor of the optimal revenue. The second monetization problem we study is that of inferring the quality of an advertisement (or of generic content) given the knowledge of user clicks. Our contribution for this problem is to introduce a new online learning algorithm (i.e., an algorithm with no prior knowledge of content quality) that can account for the position bias that is inherent in the presentation of the content to users
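To make the "dynamic proximity query" setting above concrete, here is a toy structure in that spirit: a uniform grid over the plane supporting insert, delete, and nearest-neighbor queries by scanning cells in expanding rings. This is only an illustrative sketch (the class name, cell size, and query points are invented here), not the data structures from the thesis.

```python
import math
from collections import defaultdict

class GridNN:
    """Toy dynamic nearest-neighbor index over 2D points."""

    def __init__(self, cell=1.0):
        self.cell = cell
        self.cells = defaultdict(set)  # grid cell -> set of points

    def _key(self, p):
        return (math.floor(p[0] / self.cell), math.floor(p[1] / self.cell))

    def insert(self, p):
        self.cells[self._key(p)].add(p)

    def delete(self, p):
        self.cells[self._key(p)].discard(p)

    def nearest(self, q, max_rings=64):
        cx, cy = self._key(q)
        best, best_d = None, float("inf")
        for r in range(max_rings):
            for gx in range(cx - r, cx + r + 1):
                for gy in range(cy - r, cy + r + 1):
                    if max(abs(gx - cx), abs(gy - cy)) != r:
                        continue  # scan only the ring at Chebyshev radius r
                    for p in self.cells.get((gx, gy), ()):
                        d = math.dist(p, q)
                        if d < best_d:
                            best, best_d = p, d
            # every unscanned cell lies at distance >= r * cell from q
            if best is not None and best_d <= r * self.cell:
                return best
        return best

nn = GridNN()
for p in [(0.2, 0.3), (5.0, 5.0), (1.1, 0.9)]:
    nn.insert(p)
print(nn.nearest((1.0, 1.0)))  # (1.1, 0.9)
nn.delete((1.1, 0.9))
print(nn.nearest((1.0, 1.0)))  # (0.2, 0.3)
```

Updates are O(1) here, but the query bound degrades for skewed point sets; the thesis's point is precisely that the genuinely dynamic, high-dimensional setting requires more careful data structures.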
Proceedings of the 6th ACM Conference on Innovations in Theoretical Computer Science, January 11-13, 2015, Rehovot, Israel
1 edition published in 2015 in English and held by 1 WorldCat member library worldwide
Randomization and computation in strategic settings by Shaddin Faris Dughmi
1 edition published in 2011 in English and held by 1 WorldCat member library worldwide
This thesis considers the following question: In large-scale systems involving many self-interested participants, how can we effectively allocate scarce resources among competing interests despite strategic behavior by the participants, as well as the limited computational power of the system? Work at the interface between computer science and economics has revealed a fundamental tension between the economic objective, that of achieving the goals of the system designer despite strategic behavior, and the computational objective, that of implementing aspects of the system efficiently. In particular, this tension has been most apparent in systems that allocate resources deterministically. The realization that careful use of randomization can reconcile economic and computational goals is the starting point for this thesis. Our contributions are twofold: (1) We design randomized mechanisms for several fundamental problems of resource allocation; our mechanisms perform well even in the presence of strategic behavior, and can be implemented efficiently. (2) En route to our results, we develop new and flexible techniques for exploiting the power of randomization in the design of computationally efficient mechanisms for resource allocation in strategic settings
Auction design with robust guarantees by Peerapong Dhangwatnotai
1 edition published in 2012 in English and held by 1 WorldCat member library worldwide
In this dissertation, we design and analyze auctions that are more practical than those in traditional auction theory, in several settings. The first setting is the search advertising market, in which the multi-keyword sponsored search mechanism is the dominant platform. In this setting, a search engine sells impressions generated from various search terms to advertisers. The main challenge is the sheer diversity of the items for sale: the number of distinct items that an advertiser wants is so large that he cannot possibly communicate all of them to the search engine. To alleviate this communication problem, the search engine introduces a bidding language called broad match. It allows an advertiser to submit a single bid for multiple items at once. Popular models such as the GSP auction do not capture this aspect of sponsored search. We propose a model, named the broad match mechanism, for the sponsored search platform with broad match keywords. The analysis of the broad match mechanism produces many insights into the performance of the sponsored search platform. First, we identify two properties of the broad match mechanism, namely expressiveness and homogeneity, that characterize the performance of the mechanism. Second, we show that, unlike the GSP auction, the broad match mechanism does not necessarily have a pure equilibrium. Third, we analyze two variants of the broad match mechanism, the pay-per-impression variant and the pay-per-click variant. Under a common model of advertiser valuation, we show that the pay-per-click variant is more economically efficient than the pay-per-impression variant. This result justifies the prevalent use of the pay-per-click scheme in search advertising. In addition, the broad match mechanism can be viewed as an auction whose bidding language is crucial to its performance. In the second part, we design and analyze approximately revenue-maximizing auctions in single-parameter settings.
Bidders have publicly observable attributes and we assume that the valuations of bidders with the same attribute are independent draws from a common distribution. Previous works on revenue-maximizing auctions assume that the auctioneer knows the distributions from which the bidder valuations are drawn (Myerson, 1981). In this dissertation, we assume that the distributions are a priori unknown to the auctioneer. We show that a simple auction which does not require any knowledge of the distributions can obtain revenue comparable to what could be obtained if the auctioneer had the distributional knowledge in advance. Our most general auction has expected revenue at least a constant fraction of that of the optimal distribution-dependent auction in two settings. The first setting concerns arbitrary downward-closed single-parameter environments and valuation distributions that satisfy a standard hazard rate condition, called monotone hazard rate. In this setting, the expected revenue of our auction is improved to a constant fraction of the expected optimal welfare. In the second setting, we allow a more general class of valuation distributions, called regular distributions, but require a more restrictive environment called the matroid environment. In our results, we assume that no bidder has a unique attribute value, which is obviously necessary with unknown and attribute-dependent valuation distributions. Our auction sets a reserve price for a bidder using the valuation of another bidder who has the same attribute. Conceptually, our analysis shows that even a single sample from a distribution (another bidder's valuation) is sufficient information to obtain near-optimal expected revenue, even in quite general settings. In the third part, we design and analyze auctions that approximately maximize residual surplus in single-parameter settings. Residual surplus is defined to be the surplus less the sum of the bidders' payments.
The guarantee of our auction is of the same type as the auctions in the second part, i.e., its expected residual surplus is a fraction of that of the optimal distribution-dependent auction. Instead of the no-unique-attribute assumption made in the second setting, in this setting we assume that the distributions of bidder valuations can be ordered; that is, the distribution of the first bidder stochastically dominates that of the second bidder, the distribution of the second bidder stochastically dominates that of the third, and so on. In every downward-closed stochastic-dominance environment where the distributions of bidder valuations satisfy the monotone hazard rate condition, our auction produces residual surplus that is an $\Omega(\tfrac{1}{\log n})$ fraction of the optimal residual surplus, without taking any bid (although it makes use of the ordering), where $n$ is the number of bidders
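The "single sample" idea described above can be simulated in a few lines. The sketch below specializes to a single-item auction with i.i.d. Exp(1) valuations (the distribution, bidder count, trial count, and the monopoly reserve of 1.0 are illustrative choices made here, not parameters from the thesis): one random bidder's valuation serves as the reserve price for a second-price auction among the others, and its revenue is compared against a second-price auction using the true monopoly reserve.

```python
import random

random.seed(0)
N_BIDDERS, TRIALS = 5, 20000

def vickrey_with_reserve(values, reserve):
    """Second-price auction revenue with a reserve price."""
    above = sorted(v for v in values if v >= reserve)
    if not above:
        return 0.0  # no sale
    # winner pays the larger of the reserve and the second-highest bid
    return above[-2] if len(above) >= 2 else reserve

single_sample = optimal = 0.0
for _ in range(TRIALS):
    vals = [random.expovariate(1.0) for _ in range(N_BIDDERS)]
    # Single sample: remove one random bidder, use their value as reserve.
    i = random.randrange(N_BIDDERS)
    single_sample += vickrey_with_reserve(vals[:i] + vals[i + 1:], vals[i])
    # Benchmark: second-price with the monopoly reserve, which for
    # Exp(1) is argmax r * exp(-r) = 1.0.
    optimal += vickrey_with_reserve(vals, 1.0)

print(round(single_sample / optimal, 2))
```

In line with the dissertation's guarantee, the simulated ratio comes out as a substantial constant fraction of the distribution-aware benchmark, despite the single-sample auction knowing nothing about the distribution.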
Potential functions and the inefficiency of equilibria by Tim Roughgarden
1 edition published in 2006 in English and held by 1 WorldCat member library worldwide
Algorithms for bipartite matching problems with connections to sparsification and streaming by Mikhail Kapralov
1 edition published in 2012 in English and held by 1 WorldCat member library worldwide
The problem of finding maximum matchings in bipartite graphs is a classical problem in combinatorial optimization with a long algorithmic history. Graph sparsification is a more recent paradigm of replacing a graph with a smaller subgraph that preserves some useful properties of the original graph, perhaps approximately. Traditionally, sparsification has been used for obtaining faster algorithms for cutbased optimization problems. The contributions of this thesis are centered around new algorithms for bipartite matching problems, in which, surprisingly, graph sparsification plays a major role, and efficient algorithms for constructing sparsifiers in modern data models. In the first part of the thesis we develop sublinear time algorithms for finding perfect matchings in regular bipartite graphs. These graphs have been studied extensively in the context of expander constructions, and have several applications in combinatorial optimization. The problem of finding perfect matchings in regular bipartite graphs has seen almost 100 years of algorithmic history, with the first algorithm dating back to K\"onig in 1916 and an algorithm with runtime linear in the number of edges in the graph discovered in 2000. In this thesis we show that, even though traditionally the use of sparsification has been restricted to cutbased problems, in fact sparsification yields extremely efficient {\em sublinear time} algorithms for finding perfect matchings in regular bipartite graphs when the graph is given in adjacency array representation. Thus, our algorithms recover a perfect matching (with high probability) without looking the whole input. We present two approaches, one based on independent sampling and another on random walks, obtaining an algorithm that recovers a perfect matching in $O(n\log n)$ time, within $O(\log n)$ of output complexity, essentially closing the problem. In the second part of the thesis we study the streaming complexity of maximum bipartite matching. 
This problem is relevant to modern data models, where the algorithm is constrained in space and is only allowed few passes over the input. We are interested in determining the best tradeoff between the space usage and the quality of the solution obtained. We first study the problem in the single pass setting. A central object of our study is a new notion of sparsification relevant to matching problems: we define the notion of an $\e$matching cover of a bipartite graph as a subgraph that approximately preserves sizes of matchings between every two subsets of vertices, which can be viewed as a 'sparsifier' for matching problems. We give an efficient construction of a sparse subgraph that we call a 'matching skeleton', which we show is a linearsize matching cover for a certain range of parameters (in fact, for $\e> 1/2$). We then show that our 'sparsifier' can be applied repeatedly while maintaining a nontrivial approximation ratio in the streaming model with vertex arrivals, obtaining the first $11/e$ deterministic onepass streaming algorithm that uses linear space for this setting. Further, we show that this is in fact best possible: no algorithm can obtain a better than $11/e$ approximation in a single pass unless it uses significantly more than quasilinear space. This is a rather striking conclusion since a $11/e$ approximation can be obtained even in the more restrictive online model for this setting. Thus, we show that streaming algorithms can get no advantage over online algorithms for this problem unless they use substantially more than quasilinear space. Our impossibility results for approximating matchings in a single pass using small space exploit a surprising connection between the sparsifiers that we define and a family of graphs known as \rs graphs. In particular, we show that bounding the best possible size of $\e$covers for general $\e$ is essentially equivalent to determining the optimal size of an $\e$\rs graph. 
These graphs have received significant attention due to applications in PCP constructions, property testing and additive combinatorics, but determining their optimal size still remains a challenging open problem. Besides giving matching upper and lower bounds for single pass algorithms in the vertex arrival setting, we also consider the problem of approximating matchings in multiple passes. Here we give an algorithm that achieves a factor of $1e^{k}k^{k}/k!=1\frac{1}{\sqrt{2\pi k}}+o(1/k)$ in $k$ passes, improving upon the previously best known approximation. In the third part of the thesis we consider the concept of {\em spectral sparsification} introduced by Spielman and Teng. Here, we uncover a connection between spectral sparsification and spanners, i.e. subgraphs that approximately preserve shortest path distances. This connection allows us to obtain a quasilinear time algorithm for constructing spectral sparsifiers using approximate distance oracles and entirely bypassing linear system solvers, which was previously the only known way of constructing spectral sparsifiers in quasilinear time. Finally, in the last part of the thesis we design an efficient implementation of cutpreserving sparsification in a streaming setting with edge deletions using only one pass over the data
1 edition published in 2012 in English and held by 1 WorldCat member library worldwide
The problem of finding maximum matchings in bipartite graphs is a classical problem in combinatorial optimization with a long algorithmic history. Graph sparsification is a more recent paradigm of replacing a graph with a smaller subgraph that preserves some useful properties of the original graph, perhaps approximately. Traditionally, sparsification has been used for obtaining faster algorithms for cutbased optimization problems. The contributions of this thesis are centered around new algorithms for bipartite matching problems, in which, surprisingly, graph sparsification plays a major role, and efficient algorithms for constructing sparsifiers in modern data models. In the first part of the thesis we develop sublinear time algorithms for finding perfect matchings in regular bipartite graphs. These graphs have been studied extensively in the context of expander constructions, and have several applications in combinatorial optimization. The problem of finding perfect matchings in regular bipartite graphs has seen almost 100 years of algorithmic history, with the first algorithm dating back to K\"onig in 1916 and an algorithm with runtime linear in the number of edges in the graph discovered in 2000. In this thesis we show that, even though traditionally the use of sparsification has been restricted to cutbased problems, in fact sparsification yields extremely efficient {\em sublinear time} algorithms for finding perfect matchings in regular bipartite graphs when the graph is given in adjacency array representation. Thus, our algorithms recover a perfect matching (with high probability) without looking the whole input. We present two approaches, one based on independent sampling and another on random walks, obtaining an algorithm that recovers a perfect matching in $O(n\log n)$ time, within $O(\log n)$ of output complexity, essentially closing the problem. In the second part of the thesis we study the streaming complexity of maximum bipartite matching. 
This problem is relevant to modern data models, where the algorithm is constrained in space and is only allowed few passes over the input. We are interested in determining the best tradeoff between the space usage and the quality of the solution obtained. We first study the problem in the single pass setting. A central object of our study is a new notion of sparsification relevant to matching problems: we define the notion of an $\e$matching cover of a bipartite graph as a subgraph that approximately preserves sizes of matchings between every two subsets of vertices, which can be viewed as a 'sparsifier' for matching problems. We give an efficient construction of a sparse subgraph that we call a 'matching skeleton', which we show is a linearsize matching cover for a certain range of parameters (in fact, for $\e> 1/2$). We then show that our 'sparsifier' can be applied repeatedly while maintaining a nontrivial approximation ratio in the streaming model with vertex arrivals, obtaining the first $11/e$ deterministic onepass streaming algorithm that uses linear space for this setting. Further, we show that this is in fact best possible: no algorithm can obtain a better than $11/e$ approximation in a single pass unless it uses significantly more than quasilinear space. This is a rather striking conclusion since a $11/e$ approximation can be obtained even in the more restrictive online model for this setting. Thus, we show that streaming algorithms can get no advantage over online algorithms for this problem unless they use substantially more than quasilinear space. Our impossibility results for approximating matchings in a single pass using small space exploit a surprising connection between the sparsifiers that we define and a family of graphs known as \rs graphs. In particular, we show that bounding the best possible size of $\e$covers for general $\e$ is essentially equivalent to determining the optimal size of an $\e$\rs graph. 
These graphs have received significant attention due to applications in PCP constructions, property testing and additive combinatorics, but determining their optimal size still remains a challenging open problem. Besides giving matching upper and lower bounds for single-pass algorithms in the vertex arrival setting, we also consider the problem of approximating matchings in multiple passes. Here we give an algorithm that achieves a factor of $1-e^{-k}k^{k}/k! = 1-\frac{1}{\sqrt{2\pi k}}+o(1/k)$ in $k$ passes, improving upon the previously best known approximation. In the third part of the thesis we consider the concept of spectral sparsification introduced by Spielman and Teng. Here, we uncover a connection between spectral sparsification and spanners, i.e., subgraphs that approximately preserve shortest-path distances. This connection allows us to obtain a quasilinear-time algorithm for constructing spectral sparsifiers using approximate distance oracles, entirely bypassing linear system solvers, which were previously the only known way of constructing spectral sparsifiers in quasilinear time. Finally, in the last part of the thesis we design an efficient implementation of cut-preserving sparsification in a streaming setting with edge deletions using only one pass over the data.
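The independent-sampling idea from the first part of the abstract can be illustrated with a toy Python sketch. This is an illustration only, not the thesis algorithm: the cyclic-shift graph construction, the constant in the sampling probability, and the helper names are invented for the example. Any $d$-regular bipartite graph contains a perfect matching (by Hall's theorem), and keeping each edge independently with probability on the order of $\log n / d$ typically still leaves one in a much sparser subgraph.

```python
import math
import random

def max_matching(n, adj):
    """Maximum bipartite matching via Kuhn's augmenting-path algorithm.

    adj[u] lists the right-side neighbours of left vertex u.
    Returns the size of a maximum matching.
    """
    match_r = [-1] * n  # match_r[v] = left vertex matched to right vertex v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its partner can be rematched elsewhere
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(n))

# A d-regular bipartite graph on n + n vertices: d cyclic shifts.
n, d = 50, 20
adj = [[(u + s) % n for s in range(d)] for u in range(n)]
assert max_matching(n, adj) == n  # regular bipartite graphs always have one

# Keep each edge independently with probability ~ c * ln(n) / d.
random.seed(0)
p = min(1.0, 3 * math.log(n) / d)
sparse = [[v for v in nbrs if random.random() < p] for nbrs in adj]
print(max_matching(n, sparse))  # with high probability still n
```

The sketch only checks that a matching survives sampling; the actual sublinear-time algorithms in the thesis avoid even reading most of the adjacency arrays.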
STOC'13 : proceedings of the ACM Symposium on Theory of Computing Conference, Palo Alto, CA, USA, June 1-4, 2013(
)
1 edition published in 2013 in English and held by 1 WorldCat member library worldwide
ITCS'15 : proceedings of the 6th ACM Conference on Innovations in Theoretical Computer Science, January 11-13, 2015, Rehovot,
Israel by ITCS(
Book
)
1 edition published in 2015 in English and held by 1 WorldCat member library worldwide
Sharing costs to optimize network equilibria by Konstantinos Kollias(
)
1 edition published in 2015 in English and held by 1 WorldCat member library worldwide
Congestion games are a fundamental class of applications in the study of strategic behavior in large systems. In congestion games, selfish individuals act as consumers of resources at local parts of the system. These individuals typically follow their own self-interest and will not necessarily adhere to the prescriptions of a socially optimal solution. Imposing centralized control upon these individuals is infeasible, but the deployment of simple rules at a local level can limit the discrepancy between the individual goals of the system users and the global optimization objectives of the system designer. Such rules can be abstracted as cost sharing methods that distribute the joint cost on a resource among those who generate it. This thesis aims to present a comprehensive study of cost sharing as a means of decentralized control in congestion games and their generalizations
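The tension the abstract describes between selfish resource consumers and the designer's global objective can be made concrete with the textbook two-link congestion game, Pigou's example. This is a standard illustration of the gap between equilibrium and optimum, not anything specific to this thesis; the function names below are invented for the sketch.

```python
# Pigou's example: one unit of traffic over two parallel links with
# per-unit costs c1(x) = x (congestible) and c2(x) = 1 (constant).
def total_cost(x):
    """Total cost when a fraction x of the traffic uses the first link."""
    return x * x + (1 - x) * 1

# Selfish equilibrium: link 1 never costs more per unit than link 2,
# so all traffic takes it, for a total cost of 1.
equilibrium_cost = total_cost(1.0)

# Social optimum: minimise over all splits (calculus gives x = 1/2,
# total cost 3/4); a grid search suffices for the sketch.
optimum_cost = min(total_cost(i / 1000) for i in range(1001))

print(equilibrium_cost / optimum_cost)  # price of anarchy = 4/3
```

The ratio 4/3 is the worst case over all networks with linear cost functions; cost sharing rules of the kind studied in the thesis aim to shrink such gaps without centralized control.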
Climate change policy : quantifying uncertainties for damages and optimal carbon taxes by
Tim Roughgarden(
)
1 edition published in 1999 in English and held by 1 WorldCat member library worldwide
Related Identities
 Stanford University Computer Science Department
 Goel, Ashish Thesis advisor
 ACM Digital Library
 Now Publishers
 ACM Special Interest Group on Electronic Commerce
 Tardos, Eva Editor
 Nisan, Noam Editor
 Vazirani, Vijay V. Editor
 Guibas, Leonidas J. Thesis advisor
 Plotkin, Serge A. Thesis advisor
Alternative Names
Roughgarden, Timothy
Tim Roughgarden, American researcher
Tim Roughgarden computer scientist
Tim Roughgarden, American computer scientist