Wierman, Adam
Overview
Works: 17 works in 19 publications in 1 language and 23 library holdings

Genres: Academic theses
Roles: Author
Most widely held works by Adam Wierman
Model predictive control for deferrable loads scheduling by Niangjun Chen
2 editions published in 2014 in English and held by 2 WorldCat member libraries worldwide
Real-time demand response is essential for handling the uncertainties of renewable generation. Traditionally, demand response has focused on large industrial and commercial loads; however, it is expected that a large number of small residential loads such as air conditioners, dishwashers, and electric vehicles will also participate in the coming years. The electricity consumption of these smaller loads, which we call deferrable loads, can be shifted over time, and thus be used (in aggregate) to compensate for the random fluctuations in renewable generation. In this thesis, we propose a real-time distributed deferrable load control algorithm to reduce the variance of aggregate load (load minus renewable generation) by shifting the power consumption of deferrable loads to periods with high renewable generation. The algorithm is model predictive in nature, i.e., at every time step, the algorithm minimizes the expected variance-to-go with updated predictions. We prove that the suboptimality of this model predictive algorithm vanishes as the time horizon expands in the average-case analysis. Further, we prove strong concentration results on the distribution of the load variance obtained by model predictive deferrable load control. These concentration results highlight that the typical performance of model predictive deferrable load control is tightly concentrated around the average-case performance. Finally, we evaluate the algorithm via trace-based simulations.
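The receding-horizon idea in the abstract can be sketched as a toy controller. The reduction to a single aggregate deferrable demand, the water-filling solver, and the names `water_fill` and `mpc_schedule` are illustrative assumptions for this sketch, not the thesis's actual algorithm; minimizing the variance of base load plus allocation, subject to a total-energy constraint, is a water-filling problem.

```python
def water_fill(base, energy):
    """Spread `energy` over time slots so the aggregate load base + u is as
    flat as possible: find a water level L with sum(max(0, L - b)) = energy."""
    lo, hi = min(base), max(base) + energy
    for _ in range(60):                      # bisection on the water level
        mid = (lo + hi) / 2.0
        if sum(max(0.0, mid - b) for b in base) > energy:
            hi = mid
        else:
            lo = mid
    level = (lo + hi) / 2.0
    return [max(0.0, level - b) for b in base]

def mpc_schedule(predict, horizon, energy):
    """Receding-horizon (model predictive) control: at each step, re-solve
    with updated predictions and commit only the first allocation."""
    schedule, remaining = [], energy
    for t in range(horizon):
        base = predict(t)                    # forecast for slots t..horizon-1
        u = water_fill(base, remaining)[0]
        schedule.append(u)
        remaining -= u
    return schedule
```

With perfect predictions of a base load [3, 1, 2, 0] and 2 units of deferrable energy, the controller leaves the peak slot alone and fills the low-load slots up to a common level of 1.5.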
Bounds on a fair policy with near optimal performance by Adam Wierman (Book)
1 edition published in 2003 in English and held by 2 WorldCat member libraries worldwide
Abstract: "Providing fairness and providing good response times are often viewed as conflicting goals in scheduling. Scheduling policies that provide low response times, such as Shortest Remaining Processing Time (SRPT), are sometimes not fair, while fair policies like Processor Sharing (PS) provide response times far worse than SRPT. This seemingly inevitable tension between providing fairness and providing good response times was eliminated at last year's ACM Sigmetrics conference with the introduction of a new scheduling policy, Fair Sojourn Protocol (FSP), that appears to provide both [9]. The FSP policy is provably fair, as seen directly from its definition, and simulations show that FSP has a very low mean response time, close to that of SRPT in many cases [9]. Unfortunately, analyzing the mean response time of the FSP policy has proven to be difficult, and thus the queueing performance of FSP has only been assessed via simulation. In this work, we present the first queueing analysis of FSP. This analysis yields close upper and lower bounds on the mean response time and mean slowdown of the M/GI/1/FSP queue. Our upper bound shows that the improvement of FSP over PS is substantial: for all job size distributions, the mean response time and mean slowdown under FSP are a fraction (1 - ρ/2) of that under PS, where ρ is the system load. For distributions with decreasing failure rate the improvement is even greater. We also prove that the mean response time of SRPT and FSP are quite close. Lastly, our bounds reveal that FSP has yet another desirable property: similarly to PS, the FSP policy is largely insensitive to the variability of the job size distribution."
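The (1 - ρ/2) bound admits a quick worked example. The function names here are illustrative; the PS formula E[T] = E[S]/(1 - ρ) is the standard M/GI/1/PS result, and the bound is the one quoted in the abstract.

```python
def ps_mean_response(mean_size, load):
    """M/GI/1/PS mean response time: E[T] = E[S] / (1 - rho).
    Insensitive to the job-size distribution beyond its mean."""
    return mean_size / (1.0 - load)

def fsp_upper_bound(mean_size, load):
    """Upper bound from the abstract: under FSP, mean response time is at
    most a (1 - rho/2) fraction of the PS value, for any size distribution."""
    return (1.0 - load / 2.0) * ps_mean_response(mean_size, load)
```

At load ρ = 0.8 with unit-mean job sizes, PS gives a mean response time of 5, while FSP is bounded above by 0.6 × 5 = 3, so the gain grows with load.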
Special issue on the 2010 HotMetrics workshop (Book)
1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide
Speculation-aware resource allocation for cluster schedulers by Xiaoqi Ren
2 editions published in 2015 in English and held by 2 WorldCat member libraries worldwide
The foreground-background queue: a survey by Misja Nuyens (Book)
1 edition published in 2006 in English and held by 2 WorldCat member libraries worldwide
Scheduling in polling systems by Adam Wierman
1 edition published in 2007 in Undetermined and held by 1 WorldCat member library worldwide
We present a simple mean value analysis (MVA) framework for analyzing the effect of scheduling within queues in classical asymmetric polling systems with gated or exhaustive service. Scheduling in polling systems finds many applications in computer and communication systems. Our framework leads not only to unification but also to extension of the literature studying scheduling in polling systems. It illustrates that a large class of scheduling policies behaves similarly in the exhaustive polling model and the standard M/GI/1 model, whereas scheduling policies in the gated polling model behave very differently than in an M/GI/1 queue.
Analyzing the effect of prioritized background tasks in multiserver systems (Book)
1 edition published in 2003 in English and held by 1 WorldCat member library worldwide
Abstract: "Computer systems depend on high-priority background processes to provide both reliability and security. This is especially true in multiserver systems where many such background processes are required for data coherence, fault detection, intrusion detection, etc. From a user's perspective, it is important to understand the effect that these many classes of high-priority background tasks have on the performance of lower-priority user-level tasks. We model this situation as an M/GI/k queue with m preemptive-resume priority classes, presenting the first analysis of this system with more than two priority classes under a general phase-type service distribution. (Prior analyses of the M/GI/k with more than two priority classes are approximations that, we show, can be highly inaccurate.) Our analytical method is very different from the prior literature: it combines the technique of dimensionality reduction [10] with Neuts' technique for determining busy periods in multiserver systems [21], and then uses a novel recursive iteration technique. Our analysis is approximate, but, unlike prior techniques, can be made as accurate as desired, and is verified via simulation."
Asymptotic convergence of scheduling policies with respect to slowdown by Mor Harchol-Balter (Book)
1 edition published in 2002 in English and held by 1 WorldCat member library worldwide
Abstract: "We explore the performance of an M/GI/1 queue under various scheduling policies from the perspective of a new metric: the slowdown experienced by the largest jobs. We consider scheduling policies that bias against large jobs, towards large jobs, and those that are fair, e.g., Processor-Sharing. We prove that as job size increases to infinity, all work-conserving policies converge almost surely with respect to this metric to no more than 1/(1 - ρ), where ρ denotes load. We also find that the expected slowdown under any work-conserving policy can be made arbitrarily close to that under Processor-Sharing, for all job sizes that are sufficiently large."
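The 1/(1 - ρ) limit can be checked with a small Monte Carlo experiment: the response time of a very large job under a work-conserving policy is governed by the busy period its work initiates, whose mean is x/(1 - ρ). The M/M/1 setting, the parameter choices, and the names below are illustrative assumptions, not the paper's proof technique.

```python
import random

def busy_period(initial_work, lam, mu, rng):
    """M/M/1 busy period: time to drain `initial_work` at unit rate while
    Poisson(lam) arrivals each add an Exp(mu) job size. Its mean is
    initial_work / (1 - rho), the quantity behind the slowdown bound."""
    remaining, t = initial_work, 0.0
    while remaining > 0:
        gap = rng.expovariate(lam)           # time until the next arrival
        if gap < remaining:
            t += gap
            remaining += rng.expovariate(mu) - gap
        else:
            t += remaining
            remaining = 0.0
    return t

rng = random.Random(0)
lam, mu, x = 0.5, 1.0, 50.0                  # rho = 0.5, one large tagged job
samples = [busy_period(x, lam, mu, rng) / x for _ in range(2000)]
mean_slowdown = sum(samples) / len(samples)  # approaches 1/(1 - rho) = 2
```

As x grows, the simulated slowdown concentrates around 1/(1 - ρ) = 2, matching the almost-sure limit in the abstract.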
A note on comparing response times in the M/GI/1/FB and M/GI/1/PS queues by Adam Wierman (Book)
1 edition published in 2002 in English and held by 1 WorldCat member library worldwide
Abstract: "Two widely used scheduling policies in the absence of knowledge of job sizes are Processor Sharing (PS) and Feedback (FB). While a lot of work has been done on comparing their performance on particular job size distributions, a general comparison has not been done. We compare the overall mean response time (a.k.a. sojourn time) of the PS and FB queues under an M/GI/1 system. We show that FB outperforms PS when the service distribution has a decreasing failure rate, but that when the failure rate of the service distribution is increasing, PS outperforms FB. This answers a question posed by Coffman and Denning [1]."
Special issue on the Workshop on Distributed Cloud Computing (DCC 2015) (Book)
1 edition published in 2015 in English and held by 1 WorldCat member library worldwide
Preventing large sojourn times using SMART scheduling by M. Nuyens (Book)
1 edition published in 2005 in English and held by 1 WorldCat member library worldwide
Competitive analysis of M/GI/1 queueing policies by Nikhil Bansal (Book)
1 edition published in 2002 in English and held by 1 WorldCat member library worldwide
Abstract: "We propose a framework for comparing the performance of two queueing policies. Our framework is motivated by the notion of competitive analysis, widely used by the computer science community to analyze the performance of online algorithms. We apply our framework to compare M/GI/1/FB and M/GI/1/SJF with M/GI/1/SRPT, and obtain new results about the performance of M/GI/1/FB and M/GI/1/SJF."
A unified framework for modeling TCP-Vegas, TCP-SACK, and TCP-Reno by Adam Wierman (Book)
1 edition published in 2003 in English and held by 1 WorldCat member library worldwide
Abstract: "We present a general analytical framework for the modeling and analysis of TCP variations. The framework is quite comprehensive and allows the modeling of multiple variations of TCP, i.e. TCP-Vegas, TCP-SACK, and TCP-Reno, under very general network situations. In particular, the framework allows us to propose the first analytical model of TCP-Vegas under on-off traffic; all existing analytical models of TCP-Vegas assume bulk transfer only. All TCP models are validated against event-driven simulations (ns-2) and existing state-of-the-art analytical models. Finally, the analysis provided by our framework leads to many interesting observations with respect to both the behavior of bottleneck links that are shared by TCP sources and the effectiveness of the design decisions in TCP-SACK and TCP-Vegas."
Simple bounds on SMART scheduling by Adam Wierman (Book)
1 edition published in 2003 in English and held by 1 WorldCat member library worldwide
Abstract: "We define the class of SMART scheduling policies. These are policies that bias towards jobs with short remaining service times, jobs with small original sizes, or both, with the motivation of minimizing mean response time and/or mean slowdown. Examples of SMART policies include PSJF, SRPT, and hybrid policies such as RS (which biases according to the product of the response time and size of a job). For many policies in the SMART class, the mean response time and mean slowdown are not known or have complex representations involving multiple nested integrals, making evaluation difficult. In this work, we prove three main results. First, for all policies in the SMART class, we prove simple upper and lower bounds on mean response time. In particular, we focus on the SRPT and PSJF policies and prove even tighter bounds in these cases. Second, we show that all policies in the SMART class, surprisingly, have very similar mean response times. Third, we show that the response times of SMART policies are largely invariant to the variability of the job size distribution."
Stochastic Analysis of Power-Aware Scheduling by Adam Wierman
1 edition published in 2009 in Undetermined and held by 0 WorldCat member libraries worldwide
Energy consumption in a computer system can be reduced by dynamic speed scaling, which adapts the processing speed to the current load. This paper studies the optimal way to adjust speed to balance mean response time and mean energy consumption, when jobs arrive as a Poisson process and processor sharing scheduling is used. Both bounds and asymptotics for the optimal speeds are provided. Interestingly, a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, dynamic speed scaling, which allocates a higher speed when more jobs are present, significantly improves robustness to bursty traffic and misestimation of workload parameters.
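The "static rate while busy" scheme can be sketched as a one-dimensional cost minimization. The cost model below (mean response time plus β times a per-job energy term, with power s^α) is a simplified stand-in for the paper's objective, and all names and parameter values are assumptions for illustration.

```python
def cost(s, lam=0.5, mu=1.0, beta=1.0, alpha=2.0):
    """Static 'run at speed s while busy, halt when idle' scheme for an
    M/M/1-PS queue: mean response time 1/(s*mu - lam) plus beta times a
    simple per-job energy term (1/(s*mu) of service time at power s**alpha)."""
    if s * mu <= lam:
        return float("inf")                  # unstable: the queue grows forever
    response = 1.0 / (s * mu - lam)
    energy_per_job = s ** alpha / (s * mu)
    return response + beta * energy_per_job

# Crude grid search for the optimal static speed; with these parameters the
# cost reduces to 1/(s - 0.5) + s, which calculus shows is minimized at s = 1.5.
speeds = [0.01 * k for k in range(1, 1001)]
s_opt = min(speeds, key=cost)
```

Raising β (a stronger energy penalty) pushes the optimal speed down toward the stability boundary, while lowering it recovers the run-as-fast-as-possible regime; the tradeoff the abstract describes sits between these extremes.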
Power-Aware Speed Scaling in Processor Sharing Systems by Adam Wierman
1 edition published in 2009 in Undetermined and held by 0 WorldCat member libraries worldwide
Energy use of computer communication systems has quickly become a vital design consideration. One effective method for reducing energy consumption is dynamic speed scaling, which adapts the processing speed to the current load. This paper studies how to optimally scale speed to balance mean response time and mean energy consumption under processor sharing scheduling. Both bounds and asymptotics for the optimal speed scaling scheme are provided. These results show that a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, the results also highlight that dynamic speed scaling provides at least one key benefit: significantly improved robustness to bursty traffic and misestimation of workload parameters.
Related Identities
 Harchol-Balter, Mor Author
 Smirni, Evgenia
 Nuyens, Misja Author
 Low, Steven H. (Steven Hwye)
 Bansal, Nikhil Author
 Chen, Niangjun Author
 Ren, Xiaoqi Author
 California Institute of Technology Division of Engineering and Applied Science
 Zwart, Bert