WorldCat Identities

Leaders for Global Operations Program

Overview
Works: 329 works in 329 publications in 1 language and 334 library holdings
Classifications: HD57.5
Most widely held works by Leaders for Global Operations Program
Enabling strategic fulfillment : a decision support tool for fulfillment network optimization by Bryan Drake

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

Dell's Third-Party (3P) Product network uses several different order fulfillment methods, though the determination of which products are fulfilled under which method is not clearly delineated. We have developed a tool to assist in the decision-making process for Dell's 3P distribution network. This tool transparently presents the results of cost modeling and forecast variance simulation while maintaining the usability needed to achieve broad adoption and influence product fulfillment method decisions. The cost model takes into account product, overhead, logistics, and capital costs and can handle volume uncertainty through simulation. This tool anchors the discussion around choosing the correct fulfillment method and is a first step toward quantifying the fulfillment method decision
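
To make the cost-model mechanics concrete, here is a minimal sketch of a fulfillment-method comparison under volume uncertainty. The cost structure, method parameters, and distributions are illustrative assumptions, not Dell's actual figures; the point is that a nonlinear term (an expedite penalty above planned capacity) makes forecast variance matter to the comparison, which is why the tool pairs cost modeling with simulation.

```python
import random

def total_cost(volume, unit, logistics, overhead, capacity, expedite):
    # Landed cost: linear product/logistics cost, fixed overhead, and a
    # nonlinear expedite penalty for units above planned capacity.
    over = max(0.0, volume - capacity)
    return volume * (unit + logistics) + overhead + over * expedite

def expected_cost(method, mean_vol, cv, runs=20_000):
    # Monte Carlo over demand volume; the expedite penalty makes cost
    # nonlinear, so forecast variance shifts the method comparison.
    total = 0.0
    for _ in range(runs):
        v = max(0.0, random.gauss(mean_vol, cv * mean_vol))
        total += total_cost(v, **method)
    return total / runs

# Hypothetical fulfillment methods -- not Dell's actual cost structure
in_house  = dict(unit=100, logistics=4, overhead=250_000, capacity=5_500, expedite=40)
drop_ship = dict(unit=106, logistics=9, overhead=20_000, capacity=12_000, expedite=40)

for name, method in [("in-house", in_house), ("drop-ship", drop_ship)]:
    print(name, round(expected_cost(method, mean_vol=5_000, cv=0.4)))
```
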
Driving cycle time reduction through an improved material flow process in the electronics assembly manufacturing cell by Paul Millerd (Book)

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

Many companies have implemented lean and six sigma programs over the past twenty years. Lean is a proven system that has eliminated waste and created value at many companies throughout the world. Raytheon IDS's lean program, "Raytheon Six Sigma," became a top priority over the past ten years at the Integrated Air Defense Center (IADC) in Andover, MA. However, as Raytheon's corporate goals state, the company wants to take this further and bring "Raytheon Six Sigma" to the next level, fully engaging customers and partners. A focus of this continuous improvement effort was the Electronics Assembly (EA) Rack manufacturing cell, which was experiencing high levels of cycle time variability. To help reduce cycle times within the cell, a continuous improvement project was undertaken to improve the material flow process. A current state analysis of the process showed an opportunity to improve process standardization and prioritization while lowering inventory levels. In addition to working with managers from EA to evaluate the material flow process, a kitting cart was developed with a cross-functional project team to serve as a tool to help improve the process. Although the improvements were not rolled out to the entire cell during the project, a successful pilot was conducted that helped improve engagement with operators and create a path for future success
Characterizing and improving the service level agreement at Amazon by Alberto Luna

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

Amazon's Service Level Agreement (SLA) is a promise to its customers that they will receive their orders on time. At the Fulfillment Center (FC) level, the SLA is based on the capability to fulfill open orders scheduled to ship at each departure time. Each center's capability depends on a complex interaction between fluctuating product demand and time-dependent processes. By lowering the SLA, Amazon could provide an enhanced customer experience, especially for same-day delivery (SDD). However, providing additional time to the customer also means that the FCs have less time available to fulfill open orders, placing those orders at an increased risk of a missed delivery. This thesis explores the cycle time reductions and throughput adjustments required to reduce the SLA at one of Amazon's Fulfillment Centers. First, a method to analyze time-dependent cycle time is used to evaluate the individual truck departure times, revealing that the current process conditions have difficulty meeting current demand. Then, using lean principles, process changes are tested to assess their ability to improve the current processes and allow for an SLA reduction. Although a 1% increase in capacity is possible by improving the processes, system constraints make the changes impractical for full implementation. Consequently, a capacity analysis method reveals that an additional capacity of up to 9.38% is needed to improve the current process conditions and meet current demand. The capacity analysis also reveals that achieving a 50% reduction in the SLA from its current state requires up to 13.79% more capacity. Through capacity adjustments, the added cost of late orders is mitigated, resulting in a reduced incidence of orders late to schedule and a reduced risk of missed deliveries. The methods utilized in this thesis are applicable to other Amazon FCs, providing a common capability and capacity analysis to aid in fulfillment operations
Diagnosing intensive care units and hyperplane cutting for design of optimal production systems by J. Adam Traina

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

This thesis provides a new framework for understanding how the conditions, people, and environments of the Intensive Care Unit (ICU) affect the likelihood that preventable harm will happen to a patient in the ICU. Two years of electronic medical records from seven adult ICUs totaling 77 beds at Beth Israel Deaconess Medical Center (BIDMC) were analyzed. Our approach is based on several new ideas. First, instead of measuring safety through frequency measurement of a few relatively rare harms, we leverage electronic databases in the hospital to measure Total Burden of Harm, an aggregated measure of a broad range of harms. We believe that this measure better reflects the true level of harm occurring in Intensive Care Units and also provides more statistical power for understanding underlying contributors to harm. Second, instead of analyzing root causes of specific harms or risk factors of individual patients, we focus on what we call Risk Drivers: conditions of the ICU system, people (staff, patients, families), and environments that affect the likelihood of harms occurring, and potentially their outcomes. The underlying premise is that a relatively small number of risk drivers are common to many harms. Moreover, our hope is that the analysis will lead to system-level interventions that do not necessarily target a specific harm but change the quality and safety of the system. Third, using two years of data that include measurements of harms and driver values for each shift in each of the seven ICUs at BIDMC, we develop an innovative statistical approach that identifies important drivers and high- and low-risk Risky States. Risky States are defined through specific combinations of Risk Driver values. They describe environmental characteristics of ICUs and shifts that are correlated with higher or lower risk levels of harm. To develop a measurable set of Risk Drivers, a survey of current ICU quality metrics was conducted and augmented with the clinical experience of senior critical care providers at BIDMC. A robust machine learning algorithm with a series of validation techniques was developed to determine the importance of, and interactions between, multiple quality metrics. We believe that the method is adaptable to different hospital environments. Sixteen statistically significant Risky States (p < .02) were identified at BIDMC. The harm rates in the Risky States range over a factor of 10, with high-risk states comprising more than 13.9% of the total operational time in the ICU and low-risk states comprising 38% of total operating shifts. The new methodology and validation technique were developed with the goal of providing a basic toolset that is adaptable to different hospitals. The algorithm described within serves as the foundation for software under development by Aptima Human Engineering and the VA Hospital network, with the goal of validation and implementation in over 150 hospitals. In the second part of this thesis, a new heuristic is developed to facilitate the optimal design of stochastic manufacturing systems. The heuristic converges to optimal or near-optimal results in all test cases in a reasonable length of time, and it allows production system designers to better understand the balance between operating costs, inventory costs, and reliability
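
A minimal sketch of the Risky State idea on synthetic data: bin shifts by combinations of driver values, then test each combination's harm rate against the unit-wide baseline. The two binary drivers, the harm probabilities, and the use of a simple binomial test are illustrative assumptions, not the thesis's actual algorithm or BIDMC data.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(2)
n_shifts = 4000
# Two invented binary drivers per shift, plus one hidden risky combination
high_census = rng.random(n_shifts) < 0.5
night_shift = rng.random(n_shifts) < 0.33
p_harm = 0.02 + 0.03 * (high_census & night_shift)
harm = rng.random(n_shifts) < p_harm

baseline = harm.mean()
for census in (False, True):
    for night in (False, True):
        mask = (high_census == census) & (night_shift == night)
        k, n = int(harm[mask].sum()), int(mask.sum())
        p = binomtest(k, n, baseline).pvalue   # flag states far from baseline
        print(f"high_census={census} night={night}: "
              f"rate={k/n:.3f} (baseline {baseline:.3f}) p={p:.2g}")
```
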
Is the pharmaceutical industry ready for value based procurement? by Mary E Anito

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

In the U.S., healthcare-related headlines frequent the covers of newspapers and make regular primetime news appearances, and the U.S. is not alone. The world is awakening to a need for more equal access to healthcare for its citizens, and governments believe this equality will be achieved through tighter regulation of huge healthcare markets such as the pharmaceutical industry. To sustain shareholder value under these changing price structures, Novartis sees a need to reassess its sourcing and procurement strategies and to assess the feasibility of value-based procurement through staged implementation. While all procurement organizations tend to focus on maximizing cost savings for a company, this approach can often alienate suppliers and leave untapped value on the table. This additional value can be captured through long-term supplier development and collaborative work that draws on a supplier's knowledge while maximizing the value proposition for both the company and the suppliers. This early research focused on packaging equipment at pharmaceutical production facilities. The goal has been to understand the types of savings that could be achieved by the purchasing organization extending its measure of success beyond price reduction to include value, such as enabling increased quality or flexibility. As a company formed from many individual companies with widely varying procurement maturity, Novartis has an extended geographic and physical footprint which requires many disjointed groups to come together to produce products. This project focused on developing a new procurement strategy to help optimize and standardize the procurement of Novartis's packaging lines at the manufacturing facilities, enabling them to work more cohesively to deliver greater benefit to the company. Additionally, following a successful pilot, the proposed sourcing practices in the packaging equipment space could be replicated in other category spaces, both in the Pharmaceutical division and throughout Novartis's six other divisions. Novartis will continue to develop and produce drugs, as its predecessor companies have excelled in doing for more than a hundred years, but it should also recognize which expertise should be core and where others outside the company excel
Application of queueing theory in bulk biotech manufacturing by Michael Donohue

1 edition published in 2011 in English and held by 1 WorldCat member library worldwide

One of the most challenging problems in Amgen's biological manufacturing facility is adhering to the daily schedule of production tasks. Delays in non-time-critical tasks have been traced to temporary workload surges that exceed the production staff's capability to handle them. To quantify this effect, a method for creating an M/M/c queueing model specific to bulk biologic manufacturing processes was developed. The model was successfully validated by comparing its predictions to the historical data for each of the five production shifts. A discussion of how to model different improvement programs is presented, along with Amgen-specific data. It was found that across-the-board task duration reductions would reduce the schedule deviation rate by up to 50%. Additionally, it is shown that implementing staff cross-training with other production areas would reduce the schedule deviation rate by between 14% and 75%. Implementation aspects of these improvement initiatives in a regulated production environment are discussed
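
For reference, the core M/M/c quantities such a model is built on can be computed directly from the Erlang C formula. The rates below are illustrative, not Amgen's; the sketch shows how a workload surge (utilization approaching 1) drives the expected queueing delay that produces schedule deviations.

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    # Probability an arriving task must wait in an M/M/c queue (Erlang C)
    a = arrival_rate / service_rate            # offered load in Erlangs
    rho = a / servers                          # utilization, must be < 1
    tail = a**servers / (math.factorial(servers) * (1 - rho))
    p0_inv = sum(a**k / math.factorial(k) for k in range(servers)) + tail
    return tail / p0_inv

def mean_wait(arrival_rate, service_rate, servers):
    # Expected time in queue, Wq = C(c, a) / (c*mu - lambda)
    pw = erlang_c(arrival_rate, service_rate, servers)
    return pw / (servers * service_rate - arrival_rate)

# Illustrative rates: 12 tasks/hour arriving, ~20 min per task, 5 operators
base = mean_wait(arrival_rate=12, service_rate=3, servers=5)
surge = mean_wait(arrival_rate=14, service_rate=3, servers=5)  # workload surge
print(f"mean queue wait: {60*base:.1f} min -> {60*surge:.1f} min during a surge")
```
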
Business case assessment of unmanned systems level of autonomy by Edward W Liu

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

The federal government has continually increased its spending on unmanned aerial vehicles (UAVs) during the past decade. Efforts to drive down UAV costs have primarily focused on the physical characteristics of the UAV, such as weight, size, and shape. With the UAV business approaching saturation in the federal sector, the civilian sector remains far less penetrated, and companies see this as an opportunity to establish themselves as the standard bearer in that sector. This thesis addresses how Boeing can establish guidelines for business strategies in UAV offerings to potential clients. The key innovation introduced is a modeling tool focused on simulation/trending and sensitivity analysis to help provide insight into what these guidelines should be. The modeling tool quantifies many of the benefits and costs of the components and features of the production and utilization of UAVs. Other notable recommendations include defining a new data recording process to obtain sets of sample data to validate the results of the modeling tool, and streamlining the complexity of additional features and enhancements that will be incorporated in future versions of the modeling tool
Analysis of a diagnostics firm's pre-analytical processes by Kevin M Thomas

1 edition published in 2016 in English and held by 1 WorldCat member library worldwide

Quest Diagnostics provides diagnostic information to clinicians, allowing them to make informed decisions on the appropriate course of treatment for their patients. Quest advertises an 8 a.m. next-day turnaround time for a subset of clinical tests, a service that provides a competitive advantage for Quest. When this 8 a.m. turnaround time goal is missed, it causes ripple effects throughout the customer support organization resulting from increased client complaints. This research approaches Quest's late-release challenges through an analysis of phlebotomy services, courier route planning, and specimen accessioning to pinpoint the source and cause of the challenges preventing Quest from achieving its turnaround time goals. Prior to this research, Quest hypothesized that its logistics network could provide a consistent inflow of patient specimens into the Marlborough, MA facility, improving the lab's likelihood of reaching its turnaround time goals. A simulation of a new demand-focused vehicle routing solution suggested that creating routes to provide a steady inflow of specimens would increase operating costs by 72%; what appeared to be an attainable, low-cost solution was found to be quite the opposite. We then provide an analysis of pre-analytical processes outside of logistics. Patient service centers (PSCs) will soon provide 47% of the total specimen volume to the Marlborough laboratory, compared to 36% currently, so PSC processes and methodologies were evaluated to identify ways to release a larger percentage of specimen volume during midday courier pickups. Recommendations for process improvements to provide couriers with more patient samples during midday pickups are provided. Specimen accessioning processes and staffing were also analyzed, revealing that 17% to 24% of the subject test results could not be resulted prior to 8 a.m. due to insufficient staffing for a second-stage accessioning task. Alterations to Quest's logistics network proved to be costly and low-impact, whereas slight alterations to phlebotomy-service processes and in-lab staffing could provide far higher value to Quest's customers with less impact on operations. By redirecting their focus to these other pre-analytical processes, Quest could concentrate on higher-impact, lower-cost options to improve operations and meet their turnaround time goals
Size curve optimization for replenishment products by Andrew J Gabris

1 edition published in 2016 in English and held by 1 WorldCat member library worldwide

Nike replenishment products (make to stock) are forecasted and planned at a style/color level and then disaggregated to a size-level forecast through the use of a size curve. This method of forecasting and planning provides many advantages, such as reduced effort expended on forecasting and the ability to quickly roll up data for capacity planning. Size curves are based on historical proportions of sales. For instance, if size small sells 10% of the volume for a given style/color, the size curve would be set to 10% for small. Not surprisingly, the size curve for a given style/color sums to 100%. In a manner similar to the forecast, size curves are used to disaggregate style/color safety stock quantities that are used to ensure target service levels (item fill rates) are met. However, this disaggregation results in lower-than-anticipated service levels for the size-level stock-keeping units (SKUs), since the style/color safety stock does not account for the increased forecast error at the size level. Additional challenges arise because the relative magnitude of the forecast error is inversely proportional to the demand level. As a consequence, fringe sizes, which account for lower volumes of sales, account for a disproportionate amount of variability within a style/color, affecting service levels. To address these issues, the project first attempted to improve the quality of the size curves by applying different statistical forecasting techniques in their formulation. We found that the status quo forecasting methodology was as good as or better than other methods, which suggests that there is a limit to the accuracy of size curves. In order to increase service levels across all sizes, several recommendations have been made. First, reducing the number of size offerings in the replenishment products eliminates many of the more challenging sizes. Next, the additional size-level error can be accounted for by right-sizing safety stock. Finally, the current manual update process for size curves leaves many facets of the process to individual planners; standardization of the size curve process will support more consistent results
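
A small worked sketch of the disaggregation pitfall described above, with an invented size curve and invented size-level forecast errors (chosen so that independent size-level errors root-sum-square to the style/color error). Pushing style/color safety stock through the curve understates what each size needs, most severely for the fringe sizes.

```python
z = 1.65                 # service-level factor (~95% cycle service level)
style_sigma = 80         # style/color forecast error (std dev, units)
size_curve = {"S": 0.10, "M": 0.35, "L": 0.35, "XL": 0.20}   # sums to 100%
size_sigma = {"S": 30, "M": 45, "L": 45, "XL": 38}  # RSS ~= 80 if independent

for size, pct in size_curve.items():
    allocated = z * style_sigma * pct   # style/color stock pushed through curve
    required = z * size_sigma[size]     # stock sized from size-level error
    print(f"{size}: share {pct:.0%}  allocated {allocated:5.1f}  "
          f"required {required:5.1f}")
```

The shortfall is largest in relative terms for size S (the fringe size), matching the observation that fringe sizes carry a disproportionate share of the variability.
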
Modeling of ICU nursing workload to inform better staffing decisions by Yiyin Ma

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

Beth Israel Deaconess Medical Center (BIDMC) has partnered with the Gordon and Betty Moore Foundation to eliminate preventable harm in the Intensive Care Unit (ICU). Many medical publications suggest nursing workload is a major contributor to patient safety. However, BIDMC was not using any tool to measure nursing workload, and as a result, nurse staffing decisions were made solely based on the ad hoc judgment of senior nurses. The objective of this thesis is to create a prospective nursing workload measurement and ultimately use it to improve staffing decisions in ICUs. To create a nursing workload measurement, a widely adopted patient-based scoring system, the Therapeutic Intervention Scoring System (TISS), was adapted to BIDMC's ICUs. With consultation from clinicians and nurses, changes were made to the TISS to reflect BIDMC's workflow, and a new nursing workload scoring system called the Nursing Intensity Score (NIS) was created. The NIS for each patient per shift was calculated over a two-year period to gain further insights to improve staffing decisions. This analysis of the current state showed no correlation between nurse staffing and overall patient workload in the unit. In addition, nurses with one patient (1:1 nurses) had significantly less workload than nurses with two patients (1:2 nurses), even though they were expected to be the same. Finally, there was one overworked nurse (150% of median nursing workload) in every three shifts in the ICU. A prospective approach to analyzing patient workload was developed by dividing patients based on clinical conditions and categorizing the results on two axes: the nominal workload level and the variability around that nominal value. This analysis suggests that a majority of patients are predictable, including a few patients with high but predictable load, while some patients are highly unpredictable. A nursing backup system was proposed to balance workload between 1:1 and 1:2 nurses. To test the proposal, a simulation was developed to model the ICU with the goal of minimizing the number of overworked nurses. The best backup system was a buddy pairing system based on a predictive model of patient conditions, with the resource nurse as the ultimate backup
Anomaly detection for natural gas regulator stations by Adam Christopher Chao

1 edition published in 2016 in English and held by 1 WorldCat member library worldwide

Natural gas regulator stations control the flow of gas across PG&E's gas transmission and distribution system. Ensuring the proper functioning of these stations is critical for the safety of the natural gas system. Currently, PG&E uses sensors linked to a Supervisory Control and Data Acquisition (SCADA) system to monitor pressure and other characteristics of select regulator stations, with new sensor systems continuing to be installed across the network. PG&E seeks to develop algorithms to detect and predict safety issues before they occur, as well as to monitor performance degradation in a regulator station. First, an analysis of historical failure events was conducted to better understand the varying causes of regulator overpressure events and their corresponding downstream pressure patterns. Then, downstream pressure time-series data was collected and processed for each regulator station. Useful features were extracted from these time series, including day-to-day changes and moving averages. Piecewise linear segmentation was also performed on the time series to extract relevant features. These features were then used to cluster stations by their operating characteristics, grouping stations with similar volatility and pressure patterns. Anomaly detection methods were then developed and calibrated for the station clusters. We use a variety of statistical process control techniques, including CUSUM and EWMA, to detect changes in the behavior of a regulator's downstream pressure time series. Detection algorithms were then evaluated, with and without clustering, using ROC curves on simulated pressure anomalies. Ultimately, we show that modified CUSUM and adaptive sliding window techniques can detect pressure anomalies in natural gas regulators with reasonable false positive rates. We also show how improvements to data handling and sharing at PG&E can facilitate better algorithms for regulator anomaly detection
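
As one concrete example of the statistical process control techniques mentioned, here is a minimal two-sided tabular CUSUM run on a simulated downstream pressure series. The target, slack, and threshold values are illustrative, not PG&E's calibrated settings.

```python
import numpy as np

def cusum(series, target, k, h):
    # Two-sided tabular CUSUM; returns sample indices where a shift signals.
    # k is the slack (allowance), h the decision threshold, in series units.
    s_hi = s_lo = 0.0
    alarms = []
    for i, x in enumerate(series):
        s_hi = max(0.0, s_hi + (x - target - k))
        s_lo = max(0.0, s_lo + (target - x - k))
        if s_hi > h or s_lo > h:
            alarms.append(i)
            s_hi = s_lo = 0.0          # restart after each signal
    return alarms

# Simulated downstream pressure: stable near 60 psi, drifting up after t=200
rng = np.random.default_rng(0)
pressure = np.concatenate([rng.normal(60.0, 0.5, 200),
                           rng.normal(61.5, 0.5, 100)])
print("first alarms at t =", cusum(pressure, target=60.0, k=0.25, h=2.5)[:3])
```
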
Improving outpatient non-oncology infusion through centralization and scheduling heuristics by Adam Ryan Marshall

1 edition published in 2016 in English and held by 1 WorldCat member library worldwide

The use of highly effective intravenously infused specialty drugs has increased significantly over the past two decades as they have led to dramatic improvements in patients' quality of life. At Massachusetts General Hospital, these drugs are administered in ten independent outpatient clinics. While some clinics only need to offer sporadic treatments and have low utilization of resources, other clinics find patient access severely limited due to high utilization, poor scheduling practices, and inadequate staffing. This thesis describes methods to increase patient access to infusion while improving resource utilization. Underlying this improvement is a specially developed scheduling algorithm that smooths chair utilization while permitting flexible, multi-day scheduling. By employing the new scheduling algorithm, the recommended centralized infusion unit will be able to provide more expedient care, offer emergent appointments, avoid unnecessary hospital infusion admissions, and make more efficient use of clinical resources. Adding only two days of flexibility to appointments reduces resource requirements by up to 57% and stabilizes the day-to-day variability in patient volume. Finally, the centralization of administrative resources ensures efficient prior authorization processing, leading to significant financial savings
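
The thesis's actual scheduling algorithm is not reproduced here, but a minimal greedy sketch shows why even a small flexibility window smooths chair utilization: each appointment is placed on the least-loaded feasible day within its window. The request data and parameters are invented for illustration.

```python
from collections import defaultdict

def level_schedule(requests, horizon, flex_days):
    # Place each infusion on the least-loaded feasible day within its window
    load = defaultdict(int)      # chair-slots booked per day
    plan = {}
    for patient, earliest in requests:
        window = range(earliest, min(earliest + flex_days + 1, horizon))
        day = min(window, key=lambda d: load[d])
        plan[patient] = day
        load[day] += 1
    return plan, load

# Invented bursty demand: 40 requests clustered on days 0-4 of a 10-day horizon
requests = [(f"patient{i}", i % 5) for i in range(40)]
for flex in (0, 2):
    _, load = level_schedule(requests, horizon=10, flex_days=flex)
    print(f"flex={flex} days -> peak daily load {max(load.values())}")
```
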
Using analytics to improve delivery performance by Tacy J Napolillo

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Delivery Precision is a key performance indicator that measures Nike's ability to deliver product to the customer in full and on time. The objective of the six-month internship was to quantify the areas in the supply chain where the most opportunity resides for improving delivery precision. The Nike supply chain starts when a new product is conceived and ends when the consumer buys the product at retail. In between conception and selling, there are six critical process steps. The project provides a method to evaluate the entire supply chain and determine the area that has the most opportunity for improvement and therefore needs the most focus. The first step in quantifying the areas with the most opportunity was to define a framework of the supply chain. The framework includes the target dates that must be met in order to supply product to the customer on schedule and the actual dates that were achieved. By comparing the target dates to the actual dates, the area of the supply process that caused a delay can be identified. Next, a data model was created that automatically compares target dates to actual dates for a large, specified set of purchase orders. The model uses the framework and compiles all orders to quantify the areas in the supply chain with the most opportunity. The model was piloted on the North America geography, Women's Training category, Apparel product engine, and Spring 2013 season, for orders shipped to the Distribution Center (DC). The pilot showed that the most opportunity lies in the upstream process (prior to the product reaching the consolidator); in particular, for the sample set the area with the most opportunity was the PO create process. This conclusion was also confirmed with the Running category. The method developed during the internship provides Nike with a way to measure the entire supply chain. By quantifying the areas in the process, Nike can focus and prioritize its efforts on the areas that need the most improvement. In addition, the model can be scaled to any region, category, or product engine to ultimately improve delivery precision across the entire company
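
A minimal sketch of the target-versus-actual comparison, assuming pandas: each purchase order's delay is attributed to the first process step whose actual date slipped past its target. Step names and dates are invented, and only two of the six steps are shown.

```python
import pandas as pd

orders = pd.DataFrame({
    "po": [101, 102, 103],
    "po_create_target": pd.to_datetime(["2013-01-05"] * 3),
    "po_create_actual": pd.to_datetime(["2013-01-09", "2013-01-05", "2013-01-05"]),
    "ship_target": pd.to_datetime(["2013-02-01"] * 3),
    "ship_actual": pd.to_datetime(["2013-02-06", "2013-02-01", "2013-02-10"]),
})

STEPS = ["po_create", "ship"]   # the real framework tracks six steps

def first_slip(row):
    # Attribute the delay to the first step whose actual date missed target
    for step in STEPS:
        if row[f"{step}_actual"] > row[f"{step}_target"]:
            return step
    return "on time"

orders["delay_source"] = orders.apply(first_slip, axis=1)
print(orders["delay_source"].value_counts())
```
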
Production leveling and cycle time reduction in satellite manufacturing by Karl Gantner

1 edition published in 2016 in English and held by 1 WorldCat member library worldwide

Reducing cycle time for geostationary communication satellites represents a major competitive advantage for manufacturers. Cycle time can be reduced through the application of lean manufacturing techniques such as production leveling, or heijunka. However, applying lean manufacturing to the manufacture of satellites is not straightforward, as payloads vary considerably. To show the effect of production leveling on satellite manufacturing, we analyze production data recorded over a 5-year time frame from a major satellite manufacturer to propose and simulate a method for leveling production. Statistical analysis of historical cycle times was performed to identify the critical path and bottleneck in the satellite development process. Production through the bottleneck was then leveled at the maximum consistent throughput. The effect of leveling was estimated using a Monte Carlo simulation to predict the total cycle time and delivery date for each satellite. Analysis showed that the critical path ran through the development of the communications payload and that the bottleneck was the payload unit build process. The bottleneck was leveled to operate at a takt time of 2 months with 4 payloads in WIP at a time. Simulating a leveled bottleneck estimated that the total cycle time of each satellite would decrease, on average, by 1.9 months with a standard deviation of 14 days. Cycle time in the payload unit manufacturing process fell from 13 months to 8 months, with standard deviations of 64 and 12 days, respectively. Over the 5-year period investigated, all satellites through the factory would have met their delivery dates while being produced to a 2-month takt. These results demonstrate that production leveling can be applied to the high-mix, low-volume manufacturing of geostationary communications satellites to increase efficiency and reduce cycle time. Leveling manufacturing should be a top priority for all satellite manufacturers looking to become more competitive
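
A minimal Monte Carlo sketch in the spirit of the simulation described: payloads are released one per takt into a fixed pool of parallel build positions with stochastic build times, and the resulting payload cycle time is averaged over many runs. The lognormal parameters are illustrative assumptions, not the manufacturer's data; only the takt (2 months) and WIP level (4 payloads) come from the abstract.

```python
import heapq, random

def simulate_leveled_bottleneck(n_payloads, takt, stations, runs=2000):
    # Release one payload per takt into `stations` parallel build positions;
    # build times are sampled from an illustrative lognormal (months).
    means = []
    for _ in range(runs):
        free_at = [0.0] * stations          # when each position next frees up
        heapq.heapify(free_at)
        cycle = []
        for i in range(n_payloads):
            release = i * takt
            start = max(release, heapq.heappop(free_at))
            finish = start + random.lognormvariate(2.0, 0.2)
            heapq.heappush(free_at, finish)
            cycle.append(finish - release)
        means.append(sum(cycle) / n_payloads)
    return sum(means) / runs

# Illustrative: 2-month takt with 4 payload positions, as in the leveled plan
print(f"mean payload cycle time: "
      f"{simulate_leveled_bottleneck(30, takt=2, stations=4):.1f} months")
```
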
Modeling neuroscience patient flow and inpatient bed management by Jonas Hiltrop

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Massachusetts General Hospital (MGH) experiences consistently high demand for its more than 900 inpatient beds. On an average weekday, the hospital admits about 220 patients, with the emergency department (ED) and the operating rooms (OR) being the main sources of admissions. Given MGH's high occupancy rates, a comparable number of discharges have to occur daily, and the intraday time distributions of admissions and discharges have to be aligned in order to avoid long wait times for beds. The situation is complicated by the specialization of beds and the medical needs of patients, which place constraints on the possible bed-patient assignments. The hospital currently manages these processes using fairly manual and static approaches, and without clear prioritization rules. The timing of discharges is not aligned with the timing of new admissions, with discharges generally occurring later in the day. For this reason MGH experiences consistent bed capacity constraints, which may cause long wait times for patients, throughput limitations, disruptions in the ED and in the perioperative environment, and adverse clinical outcomes. This project develops a detailed patient flow simulation based on historical data from MGH. The model is focused on the neuroscience clinical specialties as a microcosm of the larger hospital since the neuroscience units (22 ICU beds and 64 floor beds) are directly affected by the hospital's important capacity issues (e.g., patient overflows into other units, ICU-to-floor transfer delays). We use the model to test the effectiveness of the following three interventions: 1. Assigning available inpatient beds to newly admitted patients adaptively on a just-in-time basis; 2. Discharging patients earlier in the day; 3. Reserving beds at inpatient rehabilitation facilities, thereby reducing the MGH length of stay by one or more days for patients who need these services after discharge from the hospital. Intervention effectiveness is measured using several performance metrics, including patient wait times for beds, bed utilization, and delays unrelated to bed availability, which capture the efficiency of bed usage. We find that the simulation model captures the current state of the neuroscience services in terms of intraday wait times, and that all modeled interventions lead to significant wait time reductions for patients in the ED and in the perioperative environment. Just-in-time bed assignments reduce average wait times for patients transferring to the neuroscience floor and ICU beds by up to 35% and 48%, respectively, at current throughput levels. Discharges earlier in the day and multi-day length of stay reductions (i.e., interventions 2 and 3) lead to smaller wait time reductions. However, multi-day length of stay reductions decrease bed utilization by up to 4% under our assumptions, and create capacity for throughput increases. Considering the expected cost of implementing these interventions and the reductions in patient wait times, we recommend adopting just-in-time bed assignments to address some of the existing capacity issues. Our simulation shows that this intervention can be combined effectively with earlier discharges and multi-day length of stay reductions at a later point in order to reduce wait times even further
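
A toy version of such a patient-flow simulation, assuming the SimPy discrete-event library is available: admissions compete for a fixed pool of floor beds, and bed wait times are recorded. The arrival rate and length of stay are illustrative, not MGH's calibrated model; only the 64-bed floor count comes from the abstract.

```python
import random
import simpy

WAITS = []  # realized bed waits, in days

def patient(env, beds):
    arrived = env.now
    with beds.request() as req:
        yield req                                       # wait for a free bed
        WAITS.append(env.now - arrived)
        yield env.timeout(random.expovariate(1 / 4.0))  # LOS ~4 days (illustrative)

def admissions(env, beds, rate_per_day):
    while True:
        yield env.timeout(random.expovariate(rate_per_day))
        env.process(patient(env, beds))

env = simpy.Environment()
floor_beds = simpy.Resource(env, capacity=64)           # neuroscience floor beds
env.process(admissions(env, floor_beds, rate_per_day=15))
env.run(until=365)                                      # simulate one year
print(f"mean bed wait: {24 * sum(WAITS) / len(WAITS):.1f} hours")
```
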
Predictive storm damage modeling and optimizing crew response to improve storm response operations by Sean David Whipple

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

Utility infrastructure is constantly damaged by naturally occurring weather. Such damage results in customer service interruptions, and repairs are necessary to return the system to normal operation. In most cases these events are few and far between, but major storm events (e.g., Hurricane Sandy) cause damage on a significantly larger scale: large numbers of customers have service interrupted, and repair costs run into the millions of dollars. The ability to predict damage before the event and optimize the response can significantly cut costs. The first task was to develop a model to predict outages on the network. Weather data from the past six storms, outage data from those events, asset information (framing, pole age, etc.), and environmental information were used to understand the interactions that lead to outages (forested areas are more likely to have outages than underground assets, for example). Utilizing data mining and machine learning techniques, we developed a model that gathers the data and applies a classification tree to predict outages caused by weather. Next, we developed an optimization model to allocate repair crews across Atlantic Electric staging locations in response to the predicted damage, to ensure the earliest possible restoration time. Regulators impose constraints such as cost and return-to-service time on utility firms, and these constraints largely drive the distribution of repair crews. While the model starts with predicted results, the use of robust optimization will allow Atlantic Electric to optimize its response despite the uncertainty in why outages occur, leading to more effective response planning and execution across a variety of weather-related outages. Using these models, Atlantic Electric will have a data-driven capability not only to predict how much damage an incoming storm will produce, but also to plan how to allocate its repair crews. These tools will ensure Atlantic Electric can properly plan for storm events, and as more storms occur the tools will increase in efficacy
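
A minimal sketch of the outage-prediction step using a classification tree (here scikit-learn's DecisionTreeClassifier on synthetic data). The features, the synthetic outage mechanism, and the forecast inputs are all invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Invented features per asset: wind gust (mph), pole age (years), tree density (0-1)
X = rng.uniform([20, 0, 0], [90, 60, 1], size=(5000, 3))
# Synthetic ground truth: outages more likely with wind, older poles, more trees
logit = 0.08 * X[:, 0] + 0.03 * X[:, 1] + 2.0 * X[:, 2] - 9.0
y = rng.random(5000) < 1 / (1 + np.exp(-logit))

model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
forecast = [[75, 45, 0.8]]   # incoming storm gust, asset age, forested area
print("predicted outage probability:", model.predict_proba(forecast)[0, 1])
```
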
Long range planning of biologics process development and clinical trial material supply process by Emily Edwards

1 edition published in 2011 in English and held by 1 WorldCat member library worldwide

This thesis investigates the feasibility of using a Monte Carlo simulation model to forecast the financial, personnel, and manufacturing capacity resources needed for biologic drug development. Accurate forecasting is integral to making strong long-term, strategic decisions across industries, and it is an area many companies struggle with. The resources required for the development of a biologic drug are especially hard to estimate due to the variability in the duration and probability of success of each development phase. However, in the pharmaceutical industry, getting products to market faster allows the company more time to recoup its substantial development investments before the patent expires and also potentially has a large impact on the company's market share. For these reasons, Novartis Biologics wanted to develop a simulation model to provide an objective opinion and assist in its long-range planning. This thesis describes the design, development, and functionality of the resulting model. During validation runs, the model demonstrated accuracy greater than 90% when compared against historical data for headcount, number of campaigns, costs, and projects per year. In addition, the model contains Monte Carlo simulation capabilities that allow users to forecast variability and test the sensitivity of the results. This demonstrates the model can be confidently used by project management, operations, and finance to predict their respective future resource needs
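
A minimal sketch of the simulation idea: projects progress through development phases with uncertain durations and probabilities of success, and expected resource demand is accumulated per calendar year. Phase names, durations, success rates, and FTE loads are invented placeholders, not Novartis figures.

```python
import random

PHASES = [                     # (name, mean duration yrs, P(success), FTEs)
    ("preclinical", 1.5, 0.6, 4),
    ("phase1",      1.0, 0.7, 6),
    ("phase2",      2.0, 0.5, 9),
]

def simulate_portfolio(n_projects, horizon_years, runs=5000):
    # Expected FTE demand per year, averaged over Monte Carlo runs; each
    # phase occupies whole calendar years it overlaps (a coarse sketch).
    demand = [0.0] * horizon_years
    for _ in range(runs):
        for _ in range(n_projects):
            t = 0.0
            for name, dur, p_success, ftes in PHASES:
                d = random.expovariate(1 / dur)       # uncertain duration
                for yr in range(int(t), min(int(t + d) + 1, horizon_years)):
                    demand[yr] += ftes / runs
                t += d
                if random.random() > p_success or t >= horizon_years:
                    break                             # attrition or horizon
    return [round(d, 1) for d in demand]

print(simulate_portfolio(n_projects=8, horizon_years=6))  # FTEs by year
```
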
Investigation of integrally-heated tooling and thermal modeling methodologies for the rapid cure of aerospace composites by Harrison Scott Bromley

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

Carbon Fiber Reinforced Polymer (CFRP) composite manufacturing requires the CFRP part, on its associated tool, to be heated, cured, and cooled via a prescribed thermal profile. Current methods use large fixed structures such as ovens and autoclaves to perform this process step; however, heating these large structures takes significant amounts of energy and time. Further, these methods cannot control for different thermal requirements across a more complex or integrated composite structure. This project focused on two objectives: gathering baseline energy and performance data on ovens and autoclaves to compare with estimates for new technologies, and determining the feasibility, applicability, and preliminary thermal performance of proposed heated tooling technologies on certain part families via heat transfer analyses. The project yielded the following results and conclusions: it proved the capability of the modeling software to mimic an oven cure with less than 3% error in maximum exothermic temperature prediction; it provided guidelines on when to use 1D, 2D, and 3D heat transfer analyses based on part thickness; it concluded which sizes and shapes of parts would work best for the single-sided integral heating technologies; and it calculated the energy intensity of incumbent technologies for comparison with future experiments on integrally heated tooling. Overall, this project helped steer the team into the next phase of its research into the technology and its applications, providing recommendations on what types of parts the technology can be used for as well as quantifying the energy intensity of incumbents for comparison
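
To illustrate the kind of 1D analysis the thickness guidelines refer to, here is a minimal explicit finite-difference model of through-thickness heating of a thin laminate on a heated tool. The material properties and boundary conditions are illustrative, and the cure exotherm is deliberately not modeled.

```python
import numpy as np

alpha = 4e-7        # thermal diffusivity (m^2/s), CFRP through-thickness (illustrative)
L, n = 0.005, 51    # 5 mm laminate, number of grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha          # explicit stability requires dt <= dx^2/(2*alpha)
T = np.full(n, 20.0)              # initial part temperature (C)

for step in range(int(600 / dt)): # simulate 10 minutes of heating
    T[0] = 180.0                  # tool-side face held at cure temperature
    T[-1] = T[-2]                 # insulated bag-side face (zero heat flux)
    # FTCS update of the 1D heat equation on interior nodes
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"bag-side temperature after 10 min: {T[-1]:.1f} C")
```
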
Waveless picking : managing the system and making the case for adoption and change by G. Todd Bishop

1 edition published in 2010 in English and held by 1 WorldCat member library worldwide

Wave-based picking systems have been the standard for warehouse order fulfillment for many years. Waveless picking has emerged in recent years as an alternative pick scheduling system, with proponents touting the productivity and throughput gains within such a system. This paper analyzes in more depth the differences between these two types of systems and offers insight into the comparative advantages and disadvantages of each. While a select few pieces of literature perform some analyses of wave versus waveless picking, this paper uses a case study of a waveless picking system in an Amazon.com fulfillment center as a model for how to manage a waveless system once it has been adopted. Optimization methods for decreasing chute-dwell time and increasing throughput through tote prioritization are also developed using discrete-simulation modeling. The analysis concludes that managing waveless picking warehouse flow by controlling the allowable quantity of partially picked orders to match downstream chute capacity can lead to improved control over cycle times and the customer experience. Suggestions are also made for possible future research on how to optimally implement a cycle-time-controlled system
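
A minimal sketch of tote prioritization, one of the levers analyzed: totes are processed in order of the age of the order they feed, so chutes holding the oldest partially picked orders clear first. The data structures are invented for illustration, not Amazon's system.

```python
import heapq

# Invented state: items still needed per open order, and each order's age
open_orders = {"A": 2, "B": 1, "C": 3}           # items outstanding per order
order_age = {"A": 30, "B": 12, "C": 45}          # minutes since order release
totes = [("tote1", "C"), ("tote2", "A"), ("tote3", "B")]  # tote -> order it feeds

# heapq is a min-heap, so negate age to pop the oldest order's tote first
queue = [(-order_age[order], tote, order) for tote, order in totes]
heapq.heapify(queue)
while queue:
    _, tote, order = heapq.heappop(queue)
    open_orders[order] -= 1
    print(f"route {tote} -> order {order} ({open_orders[order]} items still open)")
```
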
 

Alternative Names

controlled identity: Leaders for Manufacturing Program

LGO

Massachusetts Institute of Technology. Leaders for Global Operations Program

MIT LGO

Sloan School of Management. Leaders for Global Operations Program

Languages
English (20)