<br><h3> Chapter One </h3> <b>OVERVIEW OF TECHNOLOGY-ENHANCED ASSESSMENTS</b> Nancy T. Tippins <p> <p> The availability of affordable, easy-to-use technology-based tools and their interconnectivity via the Internet have hastened the spread of technology into virtually all aspects of our lives. The Internet provides quick access to huge amounts of information and facilitates all kinds of relationships among all kinds of people. In the last fifty years, technology has changed how we do our work, how we spend our leisure time, and how we interact with others. The next fifty years promises more of the same. <p> Technology has permeated work in the 21st century, and the field of talent assessment is no exception. Technology has influenced what kinds of assessment tools are used as well as how they are developed and administered, enhancing some traditional practices and fundamentally changing others. For example, a multiple-choice test may now be administered via a computer that displays items, scores responses, and stores test results. Alternatively, realistic work samples that might have been too labor-intensive or too inconsistent to administer in the past can now be delivered via computer and replace more abstract forms of standardized testing. Unproctored prescreens administered via the telephone or a computer narrow the applicant pool to a more manageable size. In the development phase of a test, large numbers of items are developed and their item parameters are defined so that many equivalent forms can be constructed automatically for computer adaptive testing. <p> The objective of this volume is to enable practitioners to make better decisions, based on current scientific knowledge and best professional practices, about using technology-enhanced assessments in the workplace. This volume explores the methodological underpinnings of technology-enhanced assessment as well as the measurement concerns its use raises, and then provides examples of how technology has been employed in assessment procedures in real-world applications. The purpose of this first chapter is to set the stage by defining the scope of what will be covered in the volume and then providing a brief discussion of the opportunities and the challenges the use of technology-enhanced assessments presents. Brief sections on the future of technology-enhanced assessment are presented before the chapter concludes with an overview of the entire volume. <p> <p> <b>What Is Technology-Enhanced Assessment?</b> <p> In this book, we refer to technology-enhanced assessment as the use of any form of technology in any aspect of testing or assessment. Technology-enhanced assessments can include various technologies used for presenting or scoring items or other assessment materials. For example, computers, personal digital assistants (PDAs), telephones, interactive voice response (IVR) equipment, or video-teleconferencing equipment may be used to present testing materials. Perhaps most simply, a computer serves as a page turner that presents items and response alternatives on a screen and collects responses, or an IVR system presents items orally and records responses. At the other extreme, complex in-baskets that involve emails, voice messages, memos, telephone calls, and appointments simulate actual work and require test-takers to behave as they would in a realistic setting. <p> The range of testing formats used when technology is introduced is broad.
High-volume testing programs, particularly those focused on screening candidates, continue to use multiple-choice formats administered on computers and IVRs. Yet other item formats are increasingly used in conjunction with technological tools. Structured interviews have been adapted for delivery over computers. The work samples and simulations that are components of assessment centers or sophisticated selection batteries are often delivered via computer, and responses to them are captured electronically via a computer or by video or audio recording equipment. Some assessments make use of video-teleconferencing equipment that enables assessment center participants and assessors to work from different locations. <p> Computers can be programmed to deliver fixed forms of a test, or to determine which items or tests to present based on answers to questions on an application blank (for example, "For which jobs are you applying?") when the assessment system is integrated with an applicant tracking system (ATS), or based on responses to previous items, as in computer adaptive testing. Similarly, sophisticated electronic test databases can automatically determine who is eligible to test and when they are eligible. <p> Computers are often used to score test items by determining which responses are correct and incorrect, as well as to aggregate responses to items into test scores and test scores into battery scores, sometimes based on complex algorithms. Traditionally, computers have simply been used to execute programs that specified exactly what was right and what was wrong. Increasingly, computers can take into account patterns of responses to items in more complex scoring procedures. An emerging technology that is beginning to be used more often involves data mining techniques that evaluate complex written responses. Although many constructed responses must still be evaluated by human evaluators, video technology that records the responses allows checks of the scoring process that increase accuracy. <p> Similarly, computers can be used to store responses as simple as the number or letter of a response alternative or as complex as the summaries and spreadsheets associated with a business case, a videotape of an interactive role-play simulation, or the written responses to a structured interview that has been presented online. Typically, responses to test items that are presented electronically are also stored electronically. Even when tests are not delivered via a computer, test results may be entered into an electronic database. Increasingly, technology is used to distribute test responses and test scores. For example, work sample products are distributed to assessors electronically; test qualification status is sent to hiring managers; or test feedback is sent to candidates. Because of security concerns, many test users avoid distributing confidential information such as test results via email; instead, these results are stored in "eRooms" where authorized users may access them. <p> <p> <b>What Are the Advantages and Disadvantages of Technology-Enhanced Assessments?</b> <p> As technology has become increasingly easy to use, affordable, and widely available, industrial and organizational psychologists have learned that there are many factors to consider when making decisions about how to use technology in assessment and that few factors can be considered solely an advantage or a distinct disadvantage.
Instead, the thoughtful industrial and organizational psychologist must consider the entire set of benefits and liabilities of a specific technology-based approach in his or her specific situation and compare them to the pros and cons associated with each of the alternatives. The next section highlights the most important factors. <p> <p> <b>Cost</b> <p> An overall assessment of cost is particularly difficult to obtain because there are typically many sources of cost in a technology-enhanced assessment program. For example, there is the cost of administration as well as the cost of developing items. Moreover, there are tradeoffs between costs and the anticipated benefits. For example, an organization may spend the money to develop a computerized work sample not because it is cheaper but because the realistic assessment results in a better estimate of an individual's skills, attracts better-qualified candidates, or provides a realistic job preview. Another organization may computerize its executive assessment process in order to standardize the process globally, even in locations where face-to-face assessment is practical. <p> In many respects, the use of technology has lowered the cost of assessments. As technology has replaced live administrators, proctors, scorers, and data-entry personnel, labor costs have undoubtedly decreased dramatically. Even when an organization considers the cost of outsourced test administration services, the costs are typically reduced whenever personnel have been replaced by computers. <p> When computer-administered tests began to be used in private industry for large-scale selection programs, many industrial and organizational psychologists were concerned about the high cost of equipment. However, two important things have happened since that time to alleviate that concern. First, the cost of equipment has dropped substantially. At this point in time, it is reasonable to assume that most equipment costs are more than compensated for by reductions in labor costs. Second, many testing programs have shifted the obligation to provide equipment from the employer to the candidate through unproctored testing programs. <p> The equipment on which a test or assessment is administered is only one type of equipment usually required. Large-scale testing programs often require servers that contain the administration programs and the executable modules to be downloaded to the user's computer, as well as databases to store results, including data at the item, test, and battery levels. Increasingly, demands for reliable accessibility require redundant servers, and security concerns necessitate highly technical barriers around these servers. Some users of assessments that transmit real-time video may find that the bandwidth required is not available in some countries at any price. <p> At the same time that the costs of labor associated with administration and scoring, as well as the costs of equipment, have diminished, other sources of cost may have increased or new ones may have been introduced. For example, the number of items required for computerized tests often increases substantially because of security concerns, particularly when unproctored Internet testing (UIT) is used. Thus, more labor is required to develop and maintain a larger pool of items. Similarly, computer adaptive testing was not feasible in most situations before computers were available; yet the programs for such administration have to be written and maintained.
Likewise, complex work simulations like in-baskets may require substantial programming expense. <p> In addition to requiring large numbers of items with accurately defined item parameters, UIT used for selection purposes can introduce other costs. For example, companies that use verification should account for the costs of the UIT plus those associated with later verification testing. Moreover, the employer must also consider what effect UIT has on its applicant pool and determine whether that effect has implications for costs, such as broadening the applicant pool or reducing the number of qualified people who remain in the recruitment and selection process or who are likely to accept a job offer. A technology-enhanced testing program that is off-putting to qualified candidates may reduce testing costs while increasing recruiting costs. <p> Whether the reductions in costs exceed the increases in costs obviously depends upon many factors, including the choice of instruments, the organization, its staffing context, the resources available, and the expectations of its applicant pool. Direct comparisons of total costs for various approaches are difficult if not impossible to make. Each assessment user is advised to consider carefully all the sources of expense as well as the tradeoffs among various elements of the staffing process, and to plan accordingly. <p> <p> <b>Effect on the Quality and Quantity of the Candidate Pool</b> <p> A critical concern for organizations that use any sort of assessment for selection purposes is the impact on the quality and quantity of the candidate pool. Many assessment programs that have incorporated technology into their delivery are still administered in controlled settings with proctors. In theory at least, these technology-enhanced assessments should have no effect on the size of the candidate pool when compared to a proctored paper-and-pencil version of the same test. However, apples-to-apples comparisons are often not made. When apples are compared to oranges, some might argue that a realistic, technology-enabled assessment program used for selection may be more engaging and may help keep some candidates in the applicant pool longer than a less realistic form of evaluation that does not require technology. <p> Many technology-enhanced assessments are administered in unproctored conditions at times and places of the candidate's convenience; yet there is little consensus on the effect of this flexibility on applicant behavior during the recruiting, selection, and hiring processes. Many staffing professionals argue that the freedom to take a pre-employment assessment at any time or in any place greatly expands the number of people who actually take the test. The lack of constraints on the actual testing event may also improve the quality of applicants because the employed are able to look for other employment without taking time off from their current jobs. There are contrasting arguments, however, that suggest the number and quality of applicants may be limited when UIT is used and the applicant must supply the equipment necessary to take the test. If a digital divide exists, UIT may have no effect on applicants from higher socio-economic status brackets but severely limit representation from lower brackets.
Based on anecdotal evidence, recruiters often argue that many applicants have a low tolerance for completing lengthy applications and tests on the Internet and that only the most desperate candidates will pursue lengthy and rigorous online selection procedures. Others postulate that highly qualified candidates have higher expectations regarding their treatment as applicants and drop out of the recruiting process when the selection procedures do not acknowledge their special qualities. At the same time, one could hypothesize that some applicants appreciate the respect for their time and the recognition that some assessments do not need to be administered in a face-to-face setting. Some employers fear that the use of UIT will dissuade the honest applicant from pursuing employment because of the company's assumed acceptance of malfeasant behavior. Further, UIT may increase the amount of cheating that occurs on some types of tests and consequently result in a less qualified pool of candidates for the next step in the hiring process. <p> Perhaps the most obvious effect of increasing or decreasing the quality or quantity of the applicant pool is on recruiting costs. If UIT increases the number of candidates who apply and remain in the selection process when recruiting costs are held constant, the per-hire recruiting expense decreases. A larger proportion of more qualified candidates reduces the number of people who must be attracted to the hiring process and evaluated. Equally important but sometimes overlooked is the effect of a larger applicant pool on the capabilities of new employees. Many employers want the best of the applicant pool and not merely the acceptable. As the applicant pool increases relative to the need for new employees, an organization may raise its standards and select individuals with higher abilities. The user of a technology-enhanced assessment for selection purposes should anticipate its effect on the quality and quantity of the candidate pool and take into account the implications for recruiting costs and for the capability of the workforce. <p> <p> <b>Candidate Expectations and Reactions</b> <p> Closely related to concerns about the number of candidates and their capabilities are issues regarding candidate reactions to technology-enhanced assessments. Generally, organizations want positive candidate reactions because they are typically associated with candidates who stay in the employment process rather than drop out. In addition, many employers want to maintain positive relationships with applicants because applicants are also customers of the firm's products and services. <p> It is, of course, impossible to answer the question, "Do technology-enhanced assessments increase positive candidate reactions?" for all situations. The answer depends on what kind of assessment is used and what the candidate's expectations regarding assessment are. Some candidates will expect to see technology embedded in a testing program for some jobs in some types of companies, while others will expect a high-touch evaluation without technological intervention. For example, applicants for a manufacturing technician position in a high-tech firm might expect a highly mechanized selection process, but applicants for executive-level positions in a service-oriented business might be disappointed in the selection system unless face-to-face interviews with the firm's management were used.
<p> <i>(Continues...)</i> <p> Excerpted from <b>Technology-Enhanced Assessment of Talent</b> by <b>Nancy T. Tippins</b> and <b>Seymour Adler</b>. Copyright © 2011 by John Wiley & Sons, Ltd. Excerpted by permission of John Wiley & Sons. All rights reserved.