WorldCat Identities

Salem, Kenneth

Overview
Works: 24 works in 32 publications in 1 language and 106 library holdings
Genres: Conference papers and proceedings 
Roles: Author
Classifications: QA76.M3
Publication Timeline
Most widely held works by Kenneth Salem
Adaptive block rearrangement by Sedat Akyürek( Book )

3 editions published between 1992 and 1993 in English and held by 11 WorldCat member libraries worldwide

Abstract: "An adaptive technique for reducing disk seek times is described. The technique copies frequently-referenced blocks from their original locations to reserved space near the center of the disk. Reference frequencies need not be known in advance. Instead, they are estimated by monitoring the stream of arriving requests. Trace-driven simulations show that seek times can be cut in half by copying only a small number of blocks using this technique. It is designed to be implemented in a device driver or controller, and is independent of the file system or database manager that uses the disk."
Altruistic locking by Kenneth Salem( Book )

2 editions published between 1987 and 1990 in English and held by 8 WorldCat member libraries worldwide

Abstract: "Long lived transactions (LLTs) hold on to database resources for relatively long periods of time, significantly delaying the completion of shorter and more common transactions. To alleviate this problems [sic] we propose an extension to two-phase locking, called altruistic locking, whereby LLTs can release their locks early. Transactions that access this released data are said to run in the wake of the LLT and must follow special locking rules. Altruistic locking guarantees serializability and does not a priori specify an order in which database objects must be accessed."
Placing replicated data to reduce seek delays by Sedat Akyürek( Book )

1 edition published in 1991 in English and held by 5 WorldCat member libraries worldwide

Abstract: "In many environments, seek time is a major component of the disk access time. In this paper we introduce the idea of replicating data on a disk to reduce the average seek time. Our focus is on the problem of placing replicated data. We present several techniques for replica placement and evaluate their performance using trace-driven simulations."
Probabilistic diagnosis of hot spots by Kenneth Salem( Book )

2 editions published in 1991 in English and held by 5 WorldCat member libraries worldwide

Abstract: "Commonly, a few objects in a database account for a large share of all database accesses. These objects are called hot spots. The ability to determine which objects are hot spots opens the door to a variety of performance improvements. Data reorganization, migration, and replication techniques can take advantage of knowledge of hot spots to improve performance at low cost. In this paper we present some techniques that can be used to identify those objects in the database that account for more than a specified percentage of database accesses. Identification is accomplished by analyzing a string of database references and collecting statistics
Failure recovery in memory-resident transaction processing systems by Kenneth Salem( Book )

1 edition published in 1988 in English and held by 5 WorldCat member libraries worldwide

The testbed runs on a large-memory VAX 11/785 using services provided by the Mach operating system. Our results indicate that the selection of a checkpointing strategy is the most critical decision in designing a recovery manager. In most situations, fuzzy, or unsynchronized, checkpointing strategies outperform highly synchronized alternatives. This is true even when synchronized checkpoints are combined with efficient logical strategies, which cannot be used with fuzzy checkpoints."
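
As a rough illustration of what makes a checkpoint "fuzzy", the sketch below writes dirty pages to the backing store while transactions keep running, bracketing the pass with begin/end log records; the page and log interfaces are invented for the example, and this is not the testbed's recovery manager.

```python
class FuzzyCheckpointer:
    """Illustration of fuzzy (unsynchronized) checkpointing. Pages are
    copied out without quiescing transactions, so the checkpoint may be
    internally inconsistent; recovery relies on the surrounding log."""

    def __init__(self, pages, log):
        self.pages = pages   # page id -> in-memory page object (assumed interface)
        self.log = log       # append-only log, e.g. a list

    def checkpoint(self, backing_store):
        self.log.append(("begin_checkpoint",))
        for pid, page in self.pages.items():
            if page.dirty:
                # No global synchronization: the page is copied as-is even if
                # a transaction updates it concurrently; the log repairs any
                # inconsistency at recovery time.
                backing_store.write(pid, page.snapshot())
                page.dirty = False
        self.log.append(("end_checkpoint",))
```
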
Main memory database systems : an overview by Hector Garcia-Molina( Book )

1 edition published in 1993 in English and held by 4 WorldCat member libraries worldwide

Abstract: "Memory resident database systems store their data in main, physical memory and provide very high speed access. Conventional database systems are optimized for the particular characteristics of disk storage mechanisms. Memory resident systems, on the other hand, use different optimizations to structure and organize data, as well as to make it reliable. This paper surveys the major memory-residence optimizations and briefly discusses some of the memory resident systems that have been designed or implemented."
Adaptive block rearrangement under UNIX by Sedat Akyürek( Book )

1 edition published in 1994 in English and held by 4 WorldCat member libraries worldwide

Abstract: "An adaptive UNIX disk device driver is described. To reduce seek times, the driver copies frequently-referenced blocks from their original locations to reserved space near the center of the disk. Block reference frequencies need not be known in advance. Instead, they are estimated by monitoring the stream of arriving requests. Measurements show that the adaptive driver reduces seek times and response times substantially."
Non-deterministic queue operations by Hector Garcia-Molina( Book )

2 editions published in 1991 in English and held by 4 WorldCat member libraries worldwide

Implementing extended transaction models using transaction groups by Kenneth Salem( Book )

1 edition published in 1993 in English and held by 4 WorldCat member libraries worldwide

Transaction groups themselves can be implemented in a transaction processing system without modifying its existing transaction managers and resource managers. The proposed implementation exploits the well-documented ideas of recoverable storage, triggers, and exactly-once transaction execution. Transaction groups can also be implemented in federated systems."
Management of partially-safe buffers by Sedat Akyürek( Book )

1 edition published in 1991 in English and held by 4 WorldCat member libraries worldwide

Abstract: "Safe RAM is RAM which has been made as reliable as a disk. We consider the problem of buffer management in mixed buffers, i.e, buffers which contain both safe RAM and traditional volatile RAM. Mixed- buffer management techniques explicitly consider the safety of memory in deciding where to place recently read or written data. We present several such techniques, along with a simple model of a mixed buffer and its backing store. Using trace-driven simulations, we compare their effectiveness at reducing I/O to and from the backing store."
Space-efficient hot spot estimation by Kenneth Salem( Book )

1 edition published in 1993 in English and held by 4 WorldCat member libraries worldwide

Abstract: "This paper is concerned with the problem of identifying names which occur frequently in an ordered list of names. Such names are called hot spots. Hot spots can be identified easily by counting the occurrences of each name and then selecting those with large counts. However, this simple solution requires space proportional to the number of names that occur in the list. In this paper, we present and evaluate two hot spot estimation techniques. These techniques guess the frequently occurring names, while using less space than the simple solution. We have implemented and tested both techniques using several types of input traces. Our experiments show that very accurate guesses can be made using much less space than the simple solution would require."
System M : a transaction processing system for memory resident data by Hector Garcia-Molina( Book )

1 edition published in 1988 in English and held by 3 WorldCat member libraries worldwide

Crash recovery mechanisms for main storage database systems by Kenneth Salem( Book )

1 edition published in 1986 in English and held by 3 WorldCat member libraries worldwide

Geographically distributed database management at the cloud's edge by Cătălin-Alexandru Avram( Book )

1 edition published in 2017 in English and held by 3 WorldCat member libraries worldwide

Request latency resulting from the geographic separation between clients and remote application servers is a challenge for cloud-hosted web and mobile applications. Numerous studies have shown the importance of low latency to the end user experience. Small response time increases on the order of a few hundred milliseconds directly translate to reduced user satisfaction and loss of revenue that persist even after a low latency environment is restored. One way to address this challenge in geo-distributed settings is to push all or part of the application, along with the data it requires, to the edge of the cloud - closer to application clients. This thesis explores the idea of taking advantage of clients' proximity to the edge of the network in order to reduce request latencies. SpearDB is a prototype replicated distributed database system which operates in a star network topology, with a core site and a large number of edge sites that are close to clients. Clients access the nearest edge, which holds replicas of locally relevant portions of the database. SpearDB's edge sites coordinate through the core to provide a global transactional consistency guarantee (parallel snapshot isolation or PSI), while handling as much work locally as possible. SpearDB provides full general purpose transactional semantics with ACID guarantees. Experiments show that SpearDB is effective at reducing workload latencies for applications whose access patterns are geographically localizable. Many applications fit these criteria: bulletin boards (e.g., Craigslist, Kijiji), local commerce or services (e.g., Groupon, Uber), booking and ticketing (e.g., OpenTable, StubHub), location-based services (mapping, directions, augmented reality), local news outlets and client-centric services (e-mail, RSS feeds, gaming). SpearDB introduces protocols for executing application transactions in a geo-distributed setting under strong consistency guarantees. These protocols automatically hide the complexity as well as much of the latency introduced by geo-distribution from applications. The effectiveness of SpearDB depends on the placement of primary and secondary replicas at core and edge sites. The secondary replica placement problem is shown to be NP-hard. Several algorithms for automatic data partitioning and replication are presented to provide approximate solutions. These algorithms work in a geo-distributed core-edge setting under partial replication. Their goal is to bring data closer to clients in order to lower request latencies. Experimental comparisons of the resulting placements' latency impact show good results. Surprisingly, however, the placements produced by the simplest of the proposed algorithms are comparable in quality to those produced by more complex approaches.
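
To make the placement problem concrete, the sketch below shows a hypothetical greedy heuristic (not the thesis's algorithms): each partition's secondary replicas are placed at the edge sites that access it most, subject to a per-site storage budget.

```python
from collections import defaultdict

def greedy_secondary_placement(access_counts, site_capacity, replicas_per_partition):
    """Hypothetical greedy heuristic for secondary replica placement.
    access_counts maps (edge_site, partition) to the number of accesses
    that site issues for that partition; each site may hold at most
    site_capacity replicas."""
    per_partition = defaultdict(list)
    for (site, part), count in access_counts.items():
        per_partition[part].append((count, site))

    used = defaultdict(int)        # edge site -> replicas already placed there
    placement = defaultdict(list)  # partition -> chosen edge sites
    for part, candidates in per_partition.items():
        for count, site in sorted(candidates, reverse=True):
            if len(placement[part]) == replicas_per_partition:
                break
            if used[site] < site_capacity:
                placement[part].append(site)
                used[site] += 1
    return dict(placement)
```
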
Checkpointing memory-resident databases by Kenneth Salem( Book )

2 editions published between 1987 and 1988 in English and held by 3 WorldCat member libraries worldwide

Crash recovery for memory-resident databases by Kenneth Salem( Book )

1 edition published in 1987 in English and held by 3 WorldCat member libraries worldwide

Concurrency controls for global procedures in federated database systems by Rafael Alonso( Book )

1 edition published in 1987 in English and held by 2 WorldCat member libraries worldwide

SAGAS by Hector Garcia-Molina( Book )

2 editions published in 1987 in English and held by 2 WorldCat member libraries worldwide

Main memory database systems : an overview by Hector Garcia-Molina( )

1 edition published in 1992 in English and held by 1 WorldCat member library worldwide

 
Audience Level
Audience level: 0.71 (from 0.48 for Main memor ... to 0.77 for Checkpoint ...)

Languages
English (28)