Raman, Rajeev
Overview
Works:  27 works in 67 publications in 2 languages and 501 library holdings 

Genres:  Conference papers and proceedings 
Roles:  Author, Editor 
Classifications:  QA76.9.A43, 005.1 
Most widely held works by Rajeev Raman
Algorithms - ESA 2002 : 10th Annual European Symposium, Rome, Italy, September 17-21, 2002 : proceedings by R. H. Möhring (Book)
17 editions published in 2002 in English and held by 208 WorldCat member libraries worldwide
Proceedings of the Eighth Workshop on Algorithm Engineering and Experiments and the Third Workshop on Analytic Algorithmics and Combinatorics by ALENEX (8th : 2006 : Miami, Fla.) (Book)
5 editions published in 2006 in English and held by 46 WorldCat member libraries worldwide
Eliminating amortization : on data structures with guaranteed response time by Rajeev Raman (Book)
4 editions published in 1992 in English and held by 9 WorldCat member libraries worldwide
"An efficient amortized data structure is one that ensures that the average time per operation spent on processing any sequence of operations is small. Amortized data structures typically have very nonuniform response times, i.e., individual operations can be occasionally and unpredictably slow, although the average time over the sequence is kept small by completing most of the other operations quickly. This makes amortized data structures unsuitable in many important contexts, such as real-time systems, parallel programs, persistent data structures and interactive software. On the other hand, an efficient (single-operation) worst-case data structure guarantees that every operation will be processed quickly. The construction of worst-case data structures from amortized ones is a fundamental problem which is also of pragmatic interest. Progress has been slow so far, both because the techniques used were of a limited nature and because the resulting data structures had much larger hidden constant factors. I try to address both these issues in this thesis." --Page [i]
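The nonuniform response times the abstract describes can be seen in a textbook example (not one of the thesis's constructions): a dynamic array with capacity doubling has O(1) amortized append cost, yet the appends that trigger a resize each take O(n) time.

```python
# Illustrative sketch of amortized vs. worst-case cost: a doubling
# dynamic array.  Class and method names are hypothetical.

class DoublingArray:
    def __init__(self):
        self._buf = [None]
        self._n = 0

    def append(self, x):
        """Append x; return the number of element copies performed."""
        copies = 0
        if self._n == len(self._buf):
            new_buf = [None] * (2 * len(self._buf))
            for i in range(self._n):          # the occasional O(n) spike
                new_buf[i] = self._buf[i]
                copies += 1
            self._buf = new_buf
        self._buf[self._n] = x
        self._n += 1
        return copies

arr = DoublingArray()
costs = [arr.append(i) for i in range(1024)]
# Total copying over n appends stays below n, so the amortized cost is O(1)...
assert sum(costs) < 1024
# ...yet the single worst append copied 512 elements.
assert max(costs) == 512
```

A worst-case (de-amortized) structure, by contrast, would guarantee a small bound on every individual `append`, not just on the sum.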
Approximate and exact deterministic parallel selection by Shiva Chaudhuri (Book)
4 editions published in 1993 in English and German and held by 9 WorldCat member libraries worldwide
Abstract: "The selection problem of size n is, given a set of n elements drawn from an ordered universe and an integer r with 1 ≤ r ≤ n, to identify the rth smallest element in the set. We study approximate and exact selection on deterministic concurrent-read concurrent-write parallel RAMs, where approximate selection with relative accuracy λ > 0 asks for any element whose true rank differs from r by at most λn. Our main results are: (1) For all t ≥ (log log n)⁴, approximate selection problems of size n can be solved in O(t) time with optimal speedup with relative accuracy 2^(-t/(log log n)⁴); no deterministic PRAM algorithm for approximate selection with a running time below Θ(log n/log log n) was previously known
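For context on the selection problem itself, the classic sequential baseline is the deterministic median-of-medians scheme, which finds the rth smallest of n elements in O(n) worst-case time. This is a textbook sketch, not the paper's PRAM algorithm:

```python
# Median-of-medians deterministic selection (sequential textbook version).

def select(a, r):
    """Return the r-th smallest (1-based) element of list a in O(n) time."""
    a = list(a)
    if len(a) <= 5:
        return sorted(a)[r - 1]
    # Medians of groups of five, then the median of those as the pivot.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, (len(medians) + 1) // 2)
    lows = [x for x in a if x < pivot]
    highs = [x for x in a if x > pivot]
    n_eq = len(a) - len(lows) - len(highs)   # elements equal to the pivot
    if r <= len(lows):
        return select(lows, r)
    if r <= len(lows) + n_eq:
        return pivot
    return select(highs, r - len(lows) - n_eq)

assert select(list(range(100, 0, -1)), 1) == 1
assert select(list(range(100, 0, -1)), 50) == 50
assert select([7, 7, 7, 1, 2], 3) == 7
```

The guarantee comes from the pivot choice: the median of group medians is larger than roughly 30% of the elements and smaller than roughly 30%, so each recursive call shrinks the problem by a constant fraction.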
On relaxation algorithms based on Markov random fields by P. B. Chou (Book)
4 editions published in 1987 in English and held by 8 WorldCat member libraries worldwide
Many computer vision problems can be formulated as computing the minimum energy states of thermal dynamic systems. However, due to the complexity of the energy functions, the solutions to the minimization problem are very difficult to acquire in practice. Stochastic and deterministic methods exist to approximate the solutions, but they fail to be both efficient and robust. This paper describes a new deterministic method, the Highest Confidence First algorithm, to approximate the minimum energy solution to the image labeling problem under the Maximum A Posteriori (MAP) criterion. This method uses Markov Random Fields to model spatial prior knowledge of images and likelihood probabilities to represent external observations regarding hypotheses of image entities. Following an order decided by a dynamic stability measure, the image entities make local estimates based on the combined knowledge of priors and observations. The solutions so constructed compare favorably to the ones produced by existing methods, and the computation is more predictable and less expensive. Keywords: Image segmentation; Bayesian approach
Optimal sublogarithmic time integer sorting on the CRCW PRAM (note) by Rajeev Raman (Book)
4 editions published in 1991 in English and held by 7 WorldCat member libraries worldwide
Abstract: "Rajasekaran and Reif considered the problem of sorting n integers, each in the range [1..n], in parallel on the CRCW PRAM, and gave a non-optimal sublogarithmic time algorithm for this problem [7]. They left open the question of whether an optimal algorithm for this problem could be constructed. We show that a modification of their algorithm runs with optimal speedup, thus settling their open question."
Generating random graphs efficiently by Rajeev Raman (Book)
2 editions published in 1991 in English and held by 7 WorldCat member libraries worldwide
Abstract: "We consider the algorithmic complexity of generating labeled (directed and undirected) graphs under various distributions. We describe three natural optimality criteria for graph generating algorithms, and show algorithms that are optimal for many distributions."
Waste makes haste : tight bounds for loose parallel sorting by Torben Hagerup (Book)
3 editions published in 1992 in English and German and held by 7 WorldCat member libraries worldwide
We also show how to padded-sort n independent random numbers in O(log* n) time whp with O(n) work, which matches a recent lower bound, and how to padded-sort n integers in the range 1..n in constant time whp using n processors. If the integer sorting is required to be stable, we can still solve the problem in O(log log n/log k) time whp using kn processors, for any k with 2 ≤ k ≤ log n. The integer sorting results require the nonstandard OR PRAM; alternative implementations on standard PRAM variants run in O(log log n) time whp. As an application of our padded-sorting algorithms, we can solve approximate prefix summation problems of size n with O(n) work in constant time whp on the OR PRAM, and in O(log log n) time whp on standard PRAM variants."
Fast deterministic selection on a mesh-connected processor array by Danny Krizanc (Book)
2 editions published in 1991 in English and held by 7 WorldCat member libraries worldwide
Abstract: "We present a deterministic algorithm for selecting the element of rank k among N = n² elements, 1 ≤ k ≤ N, on an n x n mesh-connected processor array in (1.44 + ε)n parallel computation steps, for any constant ε > 0, using constant sized queues. This is a considerable improvement over the best previous deterministic algorithm, which was based upon sorting and required 3n steps. Our algorithm can be generalized to solve the problem of selection on higher dimensional meshes, achieving time bounds better than the known results in each case."
Persistence, amortization and randomization by Paul Frederick Dietz (Book)
2 editions published in 1991 in English and held by 6 WorldCat member libraries worldwide
A constant update time finger search tree by Rajeev Raman (Book)
2 editions published in 1989 in English and held by 6 WorldCat member libraries worldwide
Abstract: "Levcopoulos and Overmars [LO88] described a search tree in which the time to insert or delete a key was O(1) once the position of the key to be inserted or deleted was known. Their data structure did not support fingers, pointers to points of high access or update activity in the set such that access and update operations in the vicinity of a finger are particularly efficient [GMPR77, BT80, Kos81, HM82, Tsa85]. Levcopoulos and Overmars left open the question of whether a data structure could be designed which allowed updates in constant time and supported fingers. We answer the question in the affirmative by giving an algorithm in the RAM with logarithmic word size."
The power of collision : randomized parallel algorithms for chaining and integer sorting by Rajeev Raman (Book)
3 editions published between 1990 and 1991 in English and held by 6 WorldCat member libraries worldwide
The techniques used extend to give improved randomized algorithms for the problem of chaining [11, 15], which is the following: given an array x1, ..., xn such that m of the locations contain nonzero elements, to chain together all the nonzero elements into a linked list. We give randomized algorithms that run in O(1) time using n processors, whenever m is not too close to n."
A simpler analysis of algorithm 65 : find by Rajeev Raman (Book)
1 edition published in 1994 in English and held by 4 WorldCat member libraries worldwide
Abstract: "We present a simpler analysis of the expected number of comparisons made by Hoare's selection algorithm (Comm. ACM 4 (1961), pp. 321-322)."
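The algorithm being analysed is Hoare's FIND, better known today as quickselect. A minimal textbook reconstruction (not code from the note itself) of the randomized variant whose expected comparison count the analysis concerns:

```python
import random

# Hoare's FIND (quickselect): partition around a random pivot, then
# recurse (here, iterate) into the side containing the sought rank.

def find(a, r):
    """Return the element of rank r (1-based) of list a."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    k = r - 1
    while lo < hi:
        pivot = a[random.randint(lo, hi)]   # random pivot, as in the analysis
        i, j = lo, hi
        while i <= j:                        # Hoare-style partition
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1; j -= 1
        if k <= j:      hi = j               # rank lies in the left part
        elif k >= i:    lo = i               # rank lies in the right part
        else:           break                # a[k] equals the pivot
    return a[k]

data = [9, 1, 8, 2, 7, 3, 6, 4, 5]
assert find(data, 1) == 1
assert find(data, 5) == 5    # the median
assert find(data, 9) == 9
```

Each partition pass compares every remaining element against the pivot once, which is what makes the expected total comparison count linear in n.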
Combinatorial algorithms (Book)
1 edition published in 1999 in English and held by 3 WorldCat member libraries worldwide
Very fast optimal parallel algorithms for heap construction by Paul Frederick Dietz (Book)
1 edition published in 1994 in English and held by 3 WorldCat member libraries worldwide
Abstract: "We give two algorithms for permuting n items in an array into heap order on a CRCW PRAM. The first is deterministic and runs in O(log log n) time and performs O(n) operations. This runtime is the best possible for any comparison-based algorithm using n processors. The second is randomized and runs in O(log log log n) time with high probability, performing O(n) operations. No PRAM algorithm with o(log n) runtime was previously known for this problem. In order to obtain the deterministic result we study the parallel complexity of selecting the kth smallest of n elements on the CRCW PRAM, a problem that is of independent interest. We give an algorithm that is superior to existing ones when k is small compared to n. Consequently, we show that this problem can be solved in O(log log n + log k/log log n) time and O(n) operations for all 1 ≤ k ≤ n/2. A matching time lower bound is shown for all algorithms that use n or fewer processors to solve this problem."
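The sequential baseline the parallel bounds are measured against is Floyd's bottom-up heap construction, which permutes n items into heap order in O(n) time on one processor. A textbook sketch of that baseline (not the paper's PRAM algorithms):

```python
# Floyd's bottom-up heap construction: sift down every internal node,
# from the last internal node back to the root.  Total work is O(n).

def build_min_heap(a):
    """Permute list a, in place, into min-heap order; return a."""
    n = len(a)

    def sift_down(i):
        while True:
            smallest, l, r = i, 2 * i + 1, 2 * i + 2
            if l < n and a[l] < a[smallest]: smallest = l
            if r < n and a[r] < a[smallest]: smallest = r
            if smallest == i:
                return
            a[i], a[smallest] = a[smallest], a[i]
            i = smallest

    for i in range(n // 2 - 1, -1, -1):   # leaves are already heaps
        sift_down(i)
    return a

h = build_min_heap([9, 4, 7, 1, 0, 8, 5])
# Every parent is no larger than either of its children.
assert all(h[i] <= h[c] for i in range(len(h))
           for c in (2 * i + 1, 2 * i + 2) if c < len(h))
assert h[0] == 0
```

The O(n) total follows because a node at height h costs O(h) to sift down and only n/2^(h+1) nodes have that height, a sum that converges to O(n).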
Algorithms - ESA 2002 : 10th Annual European Symposium, Rome, Italy, September 17-21, 2002 : proceedings by ESA (Symposium)
1 edition published in 2002 in English and held by 2 WorldCat member libraries worldwide
Mining sequential patterns from probabilistic data by Muhammad Muzammal
1 edition published in 2012 in English and held by 1 WorldCat member library worldwide
Sequential Pattern Mining (SPM) is an important data mining problem. Although it is assumed in classical SPM that the data to be mined is deterministic, it is now recognized that data obtained from a wide variety of data sources is inherently noisy or uncertain, such as data from sensors or data being collected from the web from different (potentially conflicting) data sources. Probabilistic databases is a popular framework for modelling uncertainty. Recently, several data mining and ranking problems have been studied in probabilistic databases. To the best of our knowledge, this is the first systematic study of mining sequential patterns from probabilistic databases. In this work, we consider the kinds of uncertainty that could arise in SPM. We propose four novel uncertainty models for SPM, namely tuple-level uncertainty, event-level uncertainty, source-level uncertainty and source-level uncertainty in deduplication, all of which fit into the probabilistic databases framework, and motivate them using potential real-life scenarios. We then define the interestingness predicate for two measures of interestingness, namely expected support and probabilistic frequentness. Next, we consider the computational complexity of evaluating the interestingness predicate, for various combinations of uncertainty models and interestingness measures, and show that different combinations have very different outcomes from a complexity-theoretic viewpoint: whilst some cases are computationally tractable, we show other cases to be computationally intractable. We give a dynamic programming algorithm to compute the source support probability and hence the expected support of a sequence in a source-level uncertain database. We then propose optimizations to speed up the support computation task. Next, we propose probabilistic SPM algorithms based on the candidate generation and pattern growth frameworks for the source-level uncertainty model and the expected support measure. We implement these algorithms and give an empirical evaluation of the probabilistic SPM algorithms, showing their scalability under different parameter settings using both real and synthetic datasets. Finally, we demonstrate the effectiveness of the probabilistic SPM framework at extracting meaningful patterns in the presence of noise.
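To make "expected support" concrete, here is a hedged sketch under a deliberately simplified event-level model, assumed here for illustration: each event in a sequence is present independently with a given probability. (The thesis's dynamic program targets the source-level model; the recurrence below only mirrors the general DP idea, and the function names are hypothetical.)

```python
# Expected support under independent event-level uncertainty.

def prob_subsequence(seq, pattern):
    """P(pattern occurs as a subsequence of the realized events).

    seq: list of (symbol, probability-of-presence) pairs, assumed
    independent.  f[j] = P(first j pattern symbols matched so far);
    when event symbol == pattern[j-1]:
        f[j] <- f[j] + p * (f[j-1] - f[j])
    (the pattern-prefix events are nested, so f[j-1] - f[j] is the
    probability that exactly the (j-1)-prefix is matched but not more).
    """
    m = len(pattern)
    f = [1.0] + [0.0] * m
    for sym, p in seq:
        # update right-to-left so f[j-1] is still the previous value
        for j in range(m, 0, -1):
            if pattern[j - 1] == sym:
                f[j] += p * (f[j - 1] - f[j])
    return f[m]

def expected_support(db, pattern):
    """Expected number of sequences in db containing the pattern."""
    return sum(prob_subsequence(seq, pattern) for seq in db)

db = [
    [('a', 0.5), ('b', 0.5)],               # contains "ab" with prob 0.25
    [('a', 1.0), ('c', 1.0), ('b', 1.0)],   # contains "ab" with certainty
]
assert abs(prob_subsequence(db[0], 'ab') - 0.25) < 1e-12
assert abs(expected_support(db, 'ab') - 1.25) < 1e-12
```

By linearity of expectation, summing the per-sequence occurrence probabilities gives the expected support directly, without enumerating the exponentially many possible worlds.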
Online algorithms for temperature-aware job scheduling problems by Martin David Birks
1 edition published in 2012 in English and held by 1 WorldCat member library worldwide
Temperature is an important consideration when designing microprocessors. When exposed to high temperatures, component reliability can be reduced, while some components fail completely above certain temperatures. We consider the design and analysis of online algorithms, in particular algorithms that use knowledge of the amount of heat a job will generate. We consider algorithms with two main objectives. The first is maximising job throughput. We show upper and lower bounds for the case where jobs are unit length, both when jobs are weighted and unweighted. Many of these bounds are matching for all cooling factors in the single and multiple machine case. We extend this to consider the single machine case where jobs have longer than unit length. When all jobs are equal length we show matching bounds for the case without preemption. We also show that both models of preemption enable at most a slight reduction in the competitive ratio of algorithms. We then consider when jobs have variable lengths. We analyse both the model of unweighted jobs and that of jobs with weights proportional to their length. We show bounds that match within constant factors, in the non-preemptive and both preemptive models. The second objective we consider is minimising flow time. We consider the objective of minimising the total flow time of a schedule. We show NP-hardness and inapproximability results for the offline case, as well as giving an approximation algorithm for the case where all release times are equal. For the online case we give some negative results for the case where maximum job heats are bounded. We also give some results for a resource augmentation model, including a 1-competitive algorithm when the extra power for the online algorithm is high enough. Finally we consider the objective of minimising the maximum flow time of any job in a schedule.
Lower bounds for set intersection queries by P. Dietz (Book)
1 edition published in 1992 in English and held by 1 WorldCat member library worldwide
Furthermore we consider the case q = O(n) with an additional space restriction: we allow only m memory locations to be used, where m ≤ n^(3/2). We show a tight bound of Θ(n²/m^(1/3)) for a sequence of O(n) operations, again ignoring factors polynomial in log n."
2006 Proceedings of the Eighth Workshop on Algorithm Engineering and Experiments (ALENEX)
in English and held by 0 WorldCat member libraries worldwide
Related Identities
Möhring, R. H. (Rolf H.), Author, Editor
Stallmann, Matthias F.
Sedgewick, Robert, 1946-
Society for Industrial and Applied Mathematics
ACM Special Interest Group for Algorithms and Computation Theory
Hagerup, Torben, Author
Dietz, Paul Frederick, 1959-, Author
LINK (Online service)
Max-Planck-Institut für Informatik
Chaudhuri, Shiva, Author
Associated Subjects
Aeronautics; Algorithms; Astronautics; Combinatorial analysis; Combinatorial optimization; Computational complexity; Computer algorithms; Computer graphics; Computer science; Computer software; Data structures (Computer science); Electronic data processing; Parallel processing (Electronic computers); Random fields; Real-time data processing; Search theory; Software engineering; Sorting (Electronic computers); United States