Karlin, Anna R.
Overview
Works:  18 works in 37 publications in 1 language and 77 library holdings 

Roles:  Author, Thesis advisor 
Classifications:  QA76, 004.68 
Most widely held works by Anna R Karlin
Sharing memory in distributed systems : methods and applications by Anna R Karlin (Book)
8 editions published in 1987 in English and Undetermined and held by 16 WorldCat member libraries worldwide
Factors in the performance of the AN1 computer network by Susan Speer Owicki (Book)
3 editions published in 1992 in English and held by 14 WorldCat member libraries worldwide
Experience with a regular expression compiler by Anna R Karlin (Book)
6 editions published in 1983 in English and Undetermined and held by 13 WorldCat member libraries worldwide
The language of regular expressions is a useful one for specifying certain sequential processes at a very high level. Regular expressions allow easy modification of designs for circuits, such as controllers, that are described by the patterns of events they must recognize and the responses they must make to those patterns. This paper discusses the compilation of such expressions into reasonably compact layouts. The translation of regular expressions into nondeterministic automata by two different methods is discussed, along with the advantages of each method. A major part of the compilation problem is the selection of good state codes for the nondeterministic automata; one successful strategy is explained in the paper.
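The regex-to-NFA translation the abstract describes can be illustrated with a minimal Thompson-style construction. This is a standard textbook technique, not the paper's actual compiler (which targets circuit layouts and state encodings): a small regex subset (literals, concatenation, `|`, `*`) is compiled into an epsilon-NFA and then run by nondeterministic state-set simulation.

```python
# Minimal Thompson-style construction: compile a small regex subset into an
# epsilon-NFA, then match by simulating sets of nondeterministic states.
class NFA:
    def __init__(self):
        self.trans = []   # list of (state, symbol-or-None, state); None = epsilon
        self.n = 0        # number of states allocated

    def new_state(self):
        self.n += 1
        return self.n - 1

def compile_regex(pattern):
    """Recursive-descent parse of literals, '()', '|', '*' into an NFA."""
    nfa, pos = NFA(), 0

    def parse_alt():
        nonlocal pos
        s, t = parse_cat()
        while pos < len(pattern) and pattern[pos] == '|':
            pos += 1
            s2, t2 = parse_cat()
            ns, nt = nfa.new_state(), nfa.new_state()
            nfa.trans += [(ns, None, s), (ns, None, s2),
                          (t, None, nt), (t2, None, nt)]
            s, t = ns, nt
        return s, t

    def parse_cat():
        s, t = parse_star()
        while pos < len(pattern) and pattern[pos] not in '|)':
            s2, t2 = parse_star()
            nfa.trans.append((t, None, s2))   # chain fragments with epsilon
            t = t2
        return s, t

    def parse_star():
        nonlocal pos
        s, t = parse_atom()
        while pos < len(pattern) and pattern[pos] == '*':
            pos += 1
            ns, nt = nfa.new_state(), nfa.new_state()
            nfa.trans += [(ns, None, s), (ns, None, nt),
                          (t, None, s), (t, None, nt)]
            s, t = ns, nt
        return s, t

    def parse_atom():
        nonlocal pos
        if pattern[pos] == '(':
            pos += 1
            s, t = parse_alt()
            pos += 1                          # skip ')'
            return s, t
        c = pattern[pos]; pos += 1
        s, t = nfa.new_state(), nfa.new_state()
        nfa.trans.append((s, c, t))
        return s, t

    start, accept = parse_alt()
    return nfa, start, accept

def eps_closure(nfa, states):
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for (a, c, b) in nfa.trans:
            if a == q and c is None and b not in seen:
                seen.add(b); stack.append(b)
    return seen

def matches(pattern, text):
    nfa, start, accept = compile_regex(pattern)
    cur = eps_closure(nfa, {start})
    for ch in text:
        cur = eps_closure(nfa, {b for (a, c, b) in nfa.trans
                                 if a in cur and c == ch})
    return accept in cur
```

Determinizing this NFA and assigning binary codes to its states is the step the paper's state-encoding strategy addresses.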
Competitive snoopy caching by Anna R Karlin (Book)
1 edition published in 1986 in English and held by 5 WorldCat member libraries worldwide
On online computation : Grace Hopper Celebration of Women in Computing by Grace Hopper Celebration of Women in Computing (Visual)
1 edition published in 1997 in English and held by 4 WorldCat member libraries worldwide
Karlin discusses a method of dealing with online problems: competitive analysis of online algorithms, in which the performance of the online algorithm is compared to the performance of the optimal offline algorithm.
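Competitive analysis is commonly introduced through the ski-rental problem; the sketch below is a standard textbook illustration, not taken from the talk itself. Renting costs 1 per day and buying costs B once; the break-even online strategy rents for B - 1 days and then buys, so its cost is at most 2 - 1/B times the optimal offline cost.

```python
# Ski rental: compare the break-even online strategy against the offline
# optimum that knows the season length in advance.
def online_cost(days, B):
    """Cost of the break-even strategy: rent B-1 days, then buy on day B."""
    return days if days < B else (B - 1) + B

def offline_cost(days, B):
    """Optimal offline cost: rent the whole season, or buy at once."""
    return min(days, B)

def competitive_ratio(B, horizon=1000):
    """Worst observed online/offline cost ratio over season lengths."""
    return max(online_cost(d, B) / offline_cost(d, B)
               for d in range(1, horizon + 1))
```

For B = 10, the worst case occurs at a season of exactly 10 days (online pays 19, offline pays 10), matching the 2 - 1/B bound.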
Empirical studies of competitive spinning for shared-memory multiprocessors (Book)
1 edition published in 1991 in English and held by 3 WorldCat member libraries worldwide
Among the mixed strategies studied, adaptive algorithms perform better than non-adaptive ones.
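The flavor of these wait-or-block strategies can be conveyed by the simplest competitive rule from this literature, spin-then-block; the cost model below is an illustrative assumption, not the paper's experimental setup. A thread spins for at most the cost C of blocking (a context switch), then blocks, so its cost never exceeds twice the offline optimum min(w, C).

```python
# Spin-then-block: spin up to the blocking cost C, then block.
def spin_then_block_cost(wait, C):
    """Cost incurred when the actual wait turns out to be `wait`."""
    return wait if wait <= C else C + C   # spun C time units, then paid C to block

def offline_cost(wait, C):
    """Offline optimum: spin through short waits, block immediately on long ones."""
    return min(wait, C)
```

The ratio spin_then_block_cost(w, C) / offline_cost(w, C) never exceeds 2; the adaptive algorithms the abstract refers to adjust the spin threshold based on observed wait times.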
Parallel hashing : an efficient implementation of shared memory by Anna R Karlin (Book)
2 editions published in 1986 in English and held by 3 WorldCat member libraries worldwide
Game theory, alive by Anna R Karlin (Book)
2 editions published in 2016 in English and held by 3 WorldCat member libraries worldwide
Online computation by Sandy Irani
1 edition published in 1997 in English and held by 2 WorldCat member libraries worldwide
Information economics in the age of e-commerce : models and mechanisms for information-rich markets by L. Elisa Celis
1 edition published in 2012 in English and held by 2 WorldCat member libraries worldwide
The internet has dramatically changed the landscape of both markets and computation with the advent of electronic commerce (e-commerce). It has accelerated transactions, informed buyers, and allowed interactions to be computerized, enabling unprecedented sophistication and complexity. Since the environment consists of multiple owners of a wide variety of resources, it is a distributed problem in which participants behave strategically and selfishly. This led to the birth of algorithmic game theory, whose goal is to understand equilibria arising in these strategic environments, study their computational complexity, and design mechanisms accordingly. More recently, the prevalence of and access to information about consumers and products has increased dramatically and changed the landscape accordingly. In this thesis I focus on two simple yet fundamental observations which require further investigation as the field of algorithmic game theory progresses in the context of information economics. Specifically: 1) access to information widens an agent's strategy space, and 2) the generation and exchange of information between agents is itself a game. There are three specific problems we address in this thesis. Informed Valuations: Increasingly sophisticated consumer-tracking technology gives advertisers a wealth of information, which they use to reach narrowly targeted consumer demographics. With targeting, advertisers are able to bid differently depending on the age, location, computer, or even browsing history of the person viewing a website. This is preferable to bidding a fixed value, since they can choose to bid only on consumers who are more likely to be interested in their product. This results in an increase in revenue to the advertiser. Notice, however, that this implies information has changed the distribution of bids. With this change, common assumptions, which are reasonable in the absence of information, no longer hold.
Thus, the mechanisms currently in place are no longer optimal. Using historical bidding data from a large premium advertising exchange, we show that the bidding distributions are unfavorable to the standard mechanisms. This motivates our new BIN-TAC mechanism, which is simple and effective in extracting revenue in the presence of targeting information. Bidders can "buy it now", or alternatively "take a chance" in an auction where the top d > 1 bidders are equally likely to win. The randomized take-a-chance allocation incentivizes high-valuation bidders to buy it now. We show that for a large class of distributions, this mechanism outperforms the second-price auction and achieves revenue performance close to Myerson's optimal mechanism. We apply structural methods to our data to estimate counterfactual revenues, and find that our BIN-TAC mechanism improves revenue by 4.5% relative to a second-price auction with optimal reserve. Information Acquisition: A prevalent assumption in traditional mechanism design is that buyers know their precise value for an item; however, this assumption is rarely accurate in practice. Judging the value of a good is difficult, since it may depend on many unknown parameters such as the intrinsic quality of the good, the savings it will yield in the future, or the buyer's emotional state. Additionally, the estimated value for a good is not static; buyers can "deliberate", i.e. spend money or time, in order to refine their estimates by acquiring additional information. This information, however, comes at some cost, whether financial, computational, or emotional. It is known that when deliberative agents participate in traditional auctions, surprising and often undesirable outcomes can occur. We consider optimal dominant-strategy mechanisms for the one-step deliberative setting, where each user can determine their exact value for a fixed cost.
We show that for single-item auctions, under mild assumptions, these are equivalent to a sequential posted-price mechanism. Additionally, we propose a new approach that allows us to leverage classical revenue-maximization results in deliberative environments. In particular, we use Myerson (1981) to construct the first nontrivial (i.e., dependent on deliberation costs) upper bound on revenue in deliberative auctions. This bound allows us to apply existing results from the classical environment to a deliberative environment; specifically, for single-parameter matroid settings, sequential posted pricing is a 2-approximation or better. Exchange Networks: Information is constantly being exchanged in many forms, e.g. communication among friends, company mergers and partnerships, and, more recently, the selling of user information by companies such as BlueKai. Exchange markets capture the trade of information between agents for profit, and we wish to understand how these trades are agreed upon. Understanding information markets helps us determine the power and influence structure of the network. To do this, we consider a very general network model where nodes are people or companies, and weighted edges represent profitable potential exchanges of information, or of any other good. Each node is capable of finalizing exactly one of its possible transactions; this models the situation where some form of exclusivity is involved in the transaction. This model is in fact very general, and can capture everything from targeting-information exchange to the marriage market. A balanced outcome in an exchange network is an equilibrium concept that combines notions of stability and fairness. In a recent paper, Kleinberg and Tardos introduced balanced outcomes to the computer science community and provided a polynomial-time algorithm to compute the set of such outcomes. Their work left open a pertinent question: are there natural, local dynamics that converge to a balanced outcome?
We answer this question in the affirmative by describing such a process on general weighted graphs, and show that it converges to a balanced outcome whenever one exists. In addition, we present a new technique for analyzing the rate of convergence of local dynamics in bargaining networks. The technique reduces balancing in a bargaining network to optimal play in a random-turn game, and yields a tight polynomial bound on the rate of convergence for a nontrivial class of unweighted graphs.
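The buy-it-now / take-a-chance allocation described in the abstract can be sketched as follows. This function and its tie-breaking conventions are hypothetical illustrations of the idea (buy-it-now takes priority; otherwise the winner is drawn uniformly from the top d take-a-chance bids), not the thesis's exact specification of BIN-TAC prices or payments.

```python
import random

def bin_tac_winner(bin_accepts, tac_bids, d, rng=random):
    """Illustrative BIN-TAC allocation.
    bin_accepts: bidder ids that accepted the buy-it-now price;
    tac_bids: {bidder id: bid} for take-a-chance participants;
    d: size of the top group among which the TAC winner is drawn."""
    if bin_accepts:                       # buy-it-now takes priority
        return rng.choice(sorted(bin_accepts))
    if not tac_bids:
        return None
    ranked = sorted(tac_bids, key=tac_bids.get, reverse=True)
    return rng.choice(ranked[:d])         # uniform among the top d bids
```

The randomization over the top d bids is what makes taking a chance risky for a high-valuation bidder, pushing such bidders toward the buy-it-now option.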
Random walks and undirected graph connectivity: a survey by Anna R Karlin (Book)
2 editions published in 1994 in English and held by 2 WorldCat member libraries worldwide
Strongly competitive algorithms for paging with locality of reference by Sandy Irani
1 edition published in 1992 in English and held by 1 WorldCat member library worldwide
Multilevel adaptive hashing by Andrei Z Broder
1 edition published in 1990 in English and held by 1 WorldCat member library worldwide
On best-response bidding in GSP auctions by Matthew Cary (Book)
3 editions published in 2008 in English and held by 1 WorldCat member library worldwide
How should players bid in keyword auctions such as those used by Google, Yahoo!, and MSN? We model ad auctions as a dynamic game of incomplete information, so that we can study the convergence and robustness properties of various strategies. In particular, we consider best-response bidding strategies for a repeated auction on a single keyword, where in each round each player chooses an optimal bid for the next round, assuming that the other players merely repeat their previous bids. We focus on a strategy we call Balanced Bidding (bb). If all players use the bb strategy, we show that bids converge to the bid vector that obtains in the complete-information static model proposed by Edelman, Ostrovsky and Schwarz (2007). We prove that convergence occurs with probability 1, and we compute the expected time until convergence.
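The Balanced Bidding rule can be sketched from the description above. The indifference equation below (bid so that the slot above, at price b, would yield the same utility as the target slot at its current price) follows the published description of bb, but the function shape, the price inputs, and the top-slot convention are our assumptions for illustration.

```python
# Sketch of the Balanced Bidding (bb) best-response rule for GSP.
# ctr: per-slot click-through rates, strictly decreasing;
# price: the current price per click each slot would cost the bidder.
def balanced_bid(v, ctr, price):
    """Return the bb bid for a bidder with value-per-click v."""
    # Target the slot maximizing this bidder's utility at current prices.
    s = max(range(len(ctr)), key=lambda i: ctr[i] * (v - price[i]))
    if s == 0:
        # No slot above the top; one common convention bids halfway to v.
        return (v + price[0]) / 2
    # Indifference: ctr[s-1] * (v - b) = ctr[s] * (v - price[s]).
    return v - ctr[s] * (v - price[s]) / ctr[s - 1]
```

Bidding below one's own indifference point this way is what keeps the repeated-auction dynamics stable enough to converge.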
38th Annual Symposium on Foundations of Computer Science : Miami Beach, Florida, October 20-22, 1997 : Proceedings (Book)
1 edition published in 1997 in English and held by 1 WorldCat member library worldwide
On the fault tolerance of the butterfly by Thomas J. Watson IBM Research Center (Book)
1 edition published in 1994 in English and held by 1 WorldCat member library worldwide
Competitive randomized algorithms for non-uniform problems
1 edition published in 1990 in English and held by 1 WorldCat member library worldwide
Energy-aware batch scheduling by Jessica Chang
1 edition published in 2013 in English and held by 1 WorldCat member library worldwide
The subject of this thesis concerns algorithmic aspects of energy-related trends in computation. In recent years, the computationally heaviest jobs have been executed over massive corpora residing at data centers, whose energy profiles differ from those of standard desktops, and whose power consumption translates into an operational budget on the order of millions of dollars. Designing schedules to optimize notions of power consumption is sometimes at odds with standard techniques prevalent in classical scheduling theory, which has traditionally focused on objectives capturing the interests of the scheduler (e.g., makespan) or of individual jobs (e.g., flow time). Thus, we study a fairly simple model of energy usage: the active time scheduling model. Formally, we are interested in scheduling a set of n jobs, each with a release time, deadline, and processing requirement, on a system comprised of a single batch machine with batch parameter B. (The machine can be working on up to B jobs at the same time.) The cost of a schedule is simply the amount of time that the system, i.e., the machine, is working. This is akin to the magnitude of the schedule's projection onto the time axis. The underlying assumption is that the cost for the machine to be active is roughly the same regardless of whether it is working on only one job or operating at full capacity. We give a fast algorithm for the basic case in which jobs are unit length and time is slotted. When time is not slotted, the problem still admits a polynomial-time solution, albeit of higher time complexity, via dynamic programming techniques. On the other hand, when time is slotted but jobs may have multiple feasible intervals, the problem is NP-hard for B > 2, and otherwise can be solved via b-matching techniques. When jobs are arbitrary in length and can be preempted, but have a single feasible interval, we also show that a large class of algorithms is 5-approximate.
We also empirically compare algorithms within this class with the incumbent greedy algorithm due to Wolsey; the latter algorithm is widely implemented in practice. This comparison is cast within a framework general enough to capture other canonical covering problems, most notably Capacitated Set Cover. In particular, we design a heuristic, LPO, that essentially complements Wolsey's algorithm, and we propose optimizations to both approaches. At a high level, our findings solidly establish LPO as a competitive, if not superior, alternative to Wolsey's algorithm with respect to both solution quality and the number of subroutine calls made. Finally, this thesis makes theoretical headway on the well-studied busy time problem. The key assumption that sets this apart from the aforementioned active time problem is that under the busy time model, the system has access to an unlimited number of identical batch machines. For non-preemptive jobs of arbitrary length, we give a 3-approximation that leverages important insights from the special case where the instance consists of interval jobs. When preemption is permitted, we give an exact algorithm for unbounded B; this result yields a simple 2-approximation for bounded B.
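The connection to capacitated covering can be made concrete with a small heuristic in the spirit of the greedy the abstract mentions; this sketch is our illustration for the slotted, unit-job case, not the thesis's exact algorithm or Wolsey's. It repeatedly opens the time slot whose window-feasible, still-unscheduled job count is largest, assigns up to B such jobs to it, and repeats; the number of opened slots is the active time.

```python
# Greedy-style heuristic for slotted active time with unit jobs and batch B.
def active_time_greedy(jobs, B):
    """jobs: list of (release, deadline) windows over integer slots;
    returns (number of open slots, {job index: assigned slot})."""
    unscheduled = set(range(len(jobs)))
    assignment, open_slots = {}, 0
    slots = range(min(r for r, d in jobs), max(d for r, d in jobs) + 1)
    while unscheduled:
        # Open the slot covering the most unscheduled job windows.
        best = max(slots, key=lambda t: sum(1 for j in unscheduled
                                            if jobs[j][0] <= t <= jobs[j][1]))
        feas = sorted(j for j in unscheduled
                      if jobs[j][0] <= best <= jobs[j][1])
        if not feas:
            raise ValueError("infeasible instance")
        for j in feas[:B]:                # batch capacity: at most B jobs
            assignment[j] = best
            unscheduled.remove(j)
        open_slots += 1
    return open_slots, assignment
```

This is exactly the shape of a capacitated-cover greedy: slots are sets, jobs are elements, and each opened slot covers at most B elements.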
Associated Subjects
Algorithms  Auctions--Econometric models  Compiling (Electronic computers)  Computational complexity  Computer algorithms  Computer networks  Computer scheduling  Electronic commerce  Electronic data processing  Electronic data processing--Distributed processing  Game theory  Knowledge economy  Local area networks (Computer networks)--Evaluation  Mathematical optimization  Memory  Multiprocessors  Parallel processing (Electronic computers)  Translators (Computer programs)