Article

Analysis and Optimization of Two Film-Coated Tablet Production Processes by Computer Simulation: A Case Study

1 Department of Pharmacy, Saarland University, 66123 Saarbrücken, Germany
2 SW Pharma GmbH, 66578 Schiffweiler, Germany
3 Helmholtz Institute for Pharmaceutical Research Saarland (HIPS), 66123 Saarbrücken, Germany
4 Free Consultant and Qualified Person According to German Law, 79410 Badenweiler, Germany
* Author to whom correspondence should be addressed.
Submission received: 30 November 2020 / Revised: 22 December 2020 / Accepted: 26 December 2020 / Published: 30 December 2020

Abstract

Increasing regulatory demands are forcing the pharmaceutical industry to invest its available resources carefully. This is especially challenging for small- and middle-sized companies. Computer simulation software like FlexSim allows one to explore variations in production processes without the need to interrupt the running process. Here, we applied a discrete-event simulation to two approved film-coated tablet production processes. The simulations were performed with FlexSim (FlexSim Deutschland—Ingenieurbüro für Simulationsdienstleistung Ralf Gruber, Kirchlengern, Germany). Process visualization was done using Cmap Tools (Florida Institute for Human and Machine Cognition, Pensacola, FL, USA), and statistical analysis used Minitab® (Minitab GmbH, Munich, Germany). The most critical elements identified during model building were the model logic, operating schedule, and processing times. These factors were graphically and statistically verified. To optimize the utilization of employees, three different shift systems were simulated, thereby revealing the advantages of two-shift and one-and-a-half-shift systems compared to a one-shift system. Without the need to interrupt any currently running production processes, we found that changing the shift system could save 50–53% of the campaign duration and 9–14% of the labor costs. In summary, we demonstrated that FlexSim, which is mainly used in logistics, can also be advantageously implemented for modeling and optimizing pharmaceutical production processes.

1. Introduction

The pharmaceutical industry is known to be prosperous but inflexible. Regulatory authorities expect increasing standards for medicinal products, e.g., during clinical trials or production [1,2,3], which makes the industry less prosperous and even more inflexible. Multinational pharmaceutical companies react with various strategies, such as outsourcing or mergers and acquisitions [4], while small- and middle-sized companies need to compensate for their losses differently. For both, addressing production costs is a promising strategy, because equipment utilization is generally low [5] and production costs are high (up to 30% of the overall costs) [6]. Instead of real-world experiments and tests, computer simulations enable one to test different scenarios without any interruptions or threats to daily business.
The chosen processes for this case study involve the production of two film-coated tablets for the treatment of tuberculosis. As of 2020, about one-quarter of the world’s population is infected with latent tuberculosis. Ending this epidemic by 2030 is one of the health targets of the United Nations Sustainable Development Goals, yet the incidence of the disease is currently decreasing by only about 2% each year. This disease is mainly caused by acid-fast rod-like Mycobacterium tuberculosis. Most patients suffer from a curable lung infection that is treated with a combination of different agents [7]. According to the World Health Organization, isoniazid, rifampicin, pyrazinamide, and ethambutol are the most essential first-line anti-tuberculosis drugs [8]. The main active pharmaceutical ingredients of the two investigated products are isoniazid and ethambutol; thus, the products are abbreviated as PINA and PEMB.
Background information about simulations can be found in Banks (2005), who defined them as the “imitation of the operation of a real-world process or system over time” [9], while a model can be described as a “representation of a […] process intended to enhance our ability to understand, predict, or control its behavior” [10]. The link between the two is that “the exercise or use of a model to produce a result” is a simulation [11]. Simulating different scenarios, i.e., changing the parameters in a model and running it, enables one to test and evaluate the effects of certain parameters. Hence, diverse fields of application are possible in a pharmaceutical context. Such applications can support decision making in pipeline management [12,13,14] and optimize supply chain management [15,16,17]. Simulations of production processes are also of interest. They can be divided into simulations of single process steps, such as computational fluid dynamics for mixing steps [18], and simulations of multiple production steps [19] or even of entire continuous production processes [20]. The models of Sundaramoorthy et al. [16] and Matsunami et al. [21] were used to investigate a mixture of the abovementioned factors. Their prospective models targeted the planning of production capacities when a product is still in its developmental stage. Matsunami et al. compared batch with continuous production considering various uncertainties, prices, and market demands for one product. Sundaramoorthy et al., however, addressed the production capacities of multiple products. Habibifar et al. [5] recently published a study on the optimization of an existing production line, including a sensitivity analysis, the design of multiple scenarios, and a data envelopment analysis. Other comparable work was also examined intensively: 11 references from 2007 to 2019 were investigated and compared [5]. The applied techniques (simulations, mathematical modeling, and statistical techniques) differed, as did the focuses of the studies. Some papers concentrated on the optimization of specific process steps [22], while others pursued a more holistic approach [23]. The high variability in this small population demonstrates that there are many different approaches and even more available software solutions for optimizing pharmaceutical production. One described approach is discrete-event simulation, in which the variables of a model change at defined events. Recent work on discrete-event simulation addressed room occupancies in a hospital [24], a flow shop [25], and manufacturing scheduling [26].
In contrast to the above-mentioned studies (i.e., prospective simulations for future medicinal products), this study addressed two already existing production processes. The motivation of this work was to optimize the validated and approved production processes of PINA and PEMB via discrete-event simulations without the need to interrupt or interfere with the ongoing production processes themselves. Initially, it was investigated whether a discrete-event simulation approach can be established with limited resources and simulation know-how. In the process, the three most critical steps in model building (implementing the model logic, operating schedule, and processing times) were determined, verified, and partly validated. Based on the generated as-is models of the PINA and PEMB production processes, bottlenecks in the production process were identified, and different production scenarios were designed to find the optimal one under existing conditions. The optimizations focused on process efficiency, not on pharmaceutical or validation questions. The market authorization of these products limited the possible changes to organizational ones. Therefore, the shift systems of the created as-is models for PINA and PEMB were changed from the existing one-shift system to a one-and-a-half-shift system and a two-shift system to optimize employee utilization. The campaign duration, labor costs, and used resources were calculated to generate comparable outcomes.

2. Materials and Methods

For the discrete-event simulation of these pharmaceutical production processes, the software package FlexSim was chosen, since it is easy to use and already widely implemented in various industrial sectors for logistics or production, as well as in national institutions. To the best of our knowledge, this is the first time that FlexSim has been used to optimize pharmaceutical bulk production in its entirety. This report suggests a possible implementation path for a batch production: it starts in a semi-automated facility to obtain data, continues with the creation of a representative simulation model, and ends with a case study optimizing the capacity utilization of the investigated batch production. Where the available FlexSim software was not sufficient, additional software, such as Cmap Tools (Florida Institute for Human and Machine Cognition, Pensacola, FL, USA), Microsoft® Excel (Microsoft, Seattle, WA, USA), and Minitab® (Minitab GmbH, Munich, Germany), was used.
Since our literature research yielded few standards for creating a discrete-event simulation in the context of pharmaceutical processes, an intuitive approach was pursued and implemented. At the beginning, basic decisions about the model were made (model description). Afterwards, information about the investigated production processes was collected. All information was clustered into numerical and logical information. The numerical information covered the collection of historical processing times, their analysis, and the selection of the most representative distribution for each process step. The logical information was used for model building. The most meaningful information sources were historical batch data, official validation and qualification data of the process owner, instruction manuals, on-site observations, and work experience. The gathered process information was first depicted in a flow chart and afterwards transferred to a simulation model. These elements of the methodological approach were addressed separately but simultaneously. Together, they resulted in an as-is model of the production process, which was later verified and partly validated. It furthermore served as a process analysis and optimization tool. This methodological approach is depicted in Figure 1.

2.1. Employed Software

Different software was used during this case study on a Windows (Microsoft, Seattle, WA, USA)-based laptop equipped with 12.0 GB RAM and a 64-bit processor. Initially, the historical processing times and their deviations, as well as the new data for model verification and application, were collected in Microsoft® Excel. Moreover, basic statistical analyses, such as calculations of standard deviations and minimum and maximum values, were performed there.
An extensive flow chart was created in Cmap Tools, which is freely available from the Florida Institute for Human and Machine Cognition (Pensacola, FL, USA), to understand and take stock of the entire process. This program is ideal for creating flow charts or visualizing logical relations and features easy drag and drop handling. The knowledge gained from the program facilitates the steps for later model building in FlexSim [27].
Further statistical analyses were performed using Minitab® (Minitab GmbH, Munich, Germany, version 18.0), a commercial statistics package that is widely used in Lean Six Sigma projects [28]. Some basic Lean Six Sigma approaches were integrated into the data handling of this case study. Data analysis, such as hypothesis tests, and graphical evaluations, such as boxplots or individual moving range charts, can be easily performed in Minitab®.
Discrete-event simulations were conducted in FlexSim (FlexSim Deutschland—Ingenieurbüro für Simulationsdienstleistung Ralf Gruber, Kirchlengern, Germany), a commercially available 3D simulation software designed for modeling production and logistic processes. FlexSim provides discrete-event simulations that are object-oriented, meaning that all components are implemented as objects and assigned specific attributes and methods that characterize them and manipulate the overall system. In addition to a graphical 3D click-and-drag simulation environment, programming in C++ is offered. The user can choose between different views and methods for representing data [9]. The applied, non-configured student version was FlexSim 19.0.0. Here, dynamic process flows can be captured on a functional level, plainly visualized, and extensively analyzed. This program gives decision makers the opportunity to forecast the outcomes of possible changes in their processes, such as changes in product flow, resource utilization (staff, money, and machinery), or plant design.
The production processes were captured by implementing and connecting all single components of the system by their procedural functions and attributes. Additional critical parameters (set-up times, staff) and logic (priorities, random events) made the models as close to reality as possible. The 3D visualization of the process provided an intuitive understanding of the current state of the system and future possibilities. The results of the simulations were analyzed via performance and output statistics. Capacity utilization, transport time, and state statistics are examples of the metrics of interest.
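To illustrate the object-oriented discrete-event concept described above outside of FlexSim, the following minimal sketch uses the open-source Python library simpy instead of FlexSim; the resource capacities and the lognormal processing time are invented placeholders, not parameters of the actual models.

```python
# Minimal sketch (not the authors' FlexSim model): each batch seizes an
# operator and a processor, waits a sampled processing time, and releases
# both resources again. All numbers are illustrative placeholders.
import random
import simpy

def batch(env, name, operators, mixer):
    with operators.request() as op:      # wait for a free operator
        yield op
        with mixer.request() as proc:    # wait for the processor
            yield proc
            # sampled processing time in minutes, placeholder distribution
            yield env.timeout(random.lognormvariate(2.7, 0.4))
    print(f"{name} finished mixing at t = {env.now:.1f} min")

random.seed(1)
env = simpy.Environment()
operators = simpy.Resource(env, capacity=4)  # headcount of the case study
mixer = simpy.Resource(env, capacity=1)      # a single mixing processor
for i in range(4):                           # a four-batch campaign, as for PINA
    env.process(batch(env, f"batch {i + 1}", operators, mixer))
env.run()
```

Because the mixer has a capacity of one, the four batches queue for it automatically, which is exactly the kind of resource contention the state charts discussed later make visible.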

2.2. Production Processes

Two approved coated tablet production processes were selected for this case study because they are similar overall but differ in their ingredients and product properties, such as tablet size. Their different punches require different process and cleaning times, but the remaining equipment and operations are similar. Since both processes are comparable, comparable results were expected; such results would enable the transfer of optimizations to other processes and thereby increase the impact of this study. The processes consisted of 47 sub-steps that were merged into the following 13 super-ordered steps:
  • Setting up the scales
  • Weighing the granule and granulation liquid
  • Dissolution of the solid components to finish the granulation liquid
  • Compulsory mixing
  • Fluid bed granulation
  • In-process controls
  • Sieving
  • Tumble blending
  • Compaction
  • Weighing the coating
  • Dissolution of the solid components to finish the coating
  • Coating
  • Bulk packaging
Most machines (in FlexSim referred to as “processors”) are about 25 years old and largely still operated manually. The two products, PINA and PEMB, are produced batch-wise in variable campaign sizes from three to 18 batches. The simulations were chosen to represent the average campaign sizes to identify bottlenecks and increase productivity. Therefore, simulations for PINA covered four batches and those for PEMB covered ten batches.
Since the intention of this work was to describe standard production campaigns of these products, deviations such as machine breakdowns, personnel shortages, and other human failures were excluded. During data preparation, we assessed whether the observed deviations influenced any of the historical processing times. In addition, outlier tests were performed to exclude deviating historical batch data, so the resulting data pool solely represented standard processing times. Product-dependent cleaning times and set-up times were also excluded. The cleaning and set-up efforts before a product campaign vary widely, since the production of analogous products containing the same APIs at different doses lowers the necessary effort dramatically. Thus, this study only includes product-independent daily routine cleaning times, as well as the daily times for setting up the machines.
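The actual outlier tests were run in Minitab®, and the specific test is not named here; as one possible stand-in, the following Python sketch applies Tukey's IQR fences to a fabricated set of processing times:

```python
# Illustrative outlier screen (our choice, not necessarily the Minitab test
# used in the study): Tukey's IQR fences on one step's historical times.
# The data values below are fabricated placeholders, not case-study data.
import numpy as np

times = np.array([11.0, 12.5, 13.0, 12.0, 34.0, 11.5, 12.8, 13.2])  # min
q1, q3 = np.percentile(times, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
standard = times[(times >= lower) & (times <= upper)]  # keep standard times
print(f"fences: [{lower:.1f}, {upper:.1f}] min -> kept {standard.size} of {times.size}")
```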

2.3. Statistical Data Processing

The processing times are the most important data in this study, since they determine the first endpoint, the campaign duration, which in turn influences the second endpoint, the labor costs. Therefore, the collection and handling of processing times are of great importance. We distinguished between the initial historical batch data and the FlexSim-generated verification data. The collection of historical batch data strongly depends on the production equipment. While data are easily available in automated production lines, most semi-automated production plants do not have automated tracking and data generation. For these two production processes, no digitally workable data were available. Hence, the processing times of each process step were collected manually and batch-wise before they were transferred into digital files. The overall statistical data process is depicted in Figure 2.
During data preparation, we investigated in Minitab® whether the data were under statistical control using individual moving range charts (I-MR charts). Even though I-MR charts are, strictly speaking, only to be used for normally distributed data, they nevertheless indicate shifts, trends, and process variations; this provided first impressions of process stability. Secondly, we tested whether the data were normally distributed using probability plots to guide further data handling. Additional statistical tests for outliers, data pooling possibilities, and visualization were also performed in Minitab®.
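For readers without Minitab®, the I-MR control limits and a normality check can be reproduced in a few lines. The sketch below uses the standard SPC factors 2.66 and 3.267 (derived from d2 = 1.128 for moving ranges of span 2) and a Shapiro–Wilk test on placeholder data; the authors used probability plots, so the test choice here is ours.

```python
# Sketch of I-MR chart limits plus a normality test on placeholder times.
import numpy as np
from scipy import stats

x = np.array([12.0, 11.5, 13.0, 12.2, 12.8, 11.9, 12.4, 13.1])  # min
mr = np.abs(np.diff(x))                  # moving ranges of span 2
x_bar, mr_bar = x.mean(), mr.mean()
i_lcl, i_ucl = x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar
mr_ucl = 3.267 * mr_bar
print(f"I chart:  CL = {x_bar:.2f}, limits = {i_lcl:.2f}..{i_ucl:.2f}")
print(f"MR chart: CL = {mr_bar:.2f}, UCL = {mr_ucl:.2f}")

stat, p = stats.shapiro(x)               # Shapiro-Wilk normality test
print(f"Shapiro-Wilk p = {p:.3f} ({'normal' if p > 0.05 else 'not normal'} at alpha = 0.05)")
```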
FlexSim contains a tool named ExpertFit, which helps to find the best-fitting distribution based on raw data. FlexSim distinguishes two main types of probability distributions: discrete and continuous distributions. Continuous distributions are subdivided into non-negative, unbounded, and bounded distributions. Historical batch data were entered as raw data in ExpertFit during model building. The results were extracted, and the suitability of all distributions was evaluated. Afterwards, several graphical comparisons between different distributions were performed to select the best-fitting distribution. The last step in ExpertFit was to transfer the selected distribution and its parameters into the corresponding process step in the FlexSim model. If none of the available distributions provided a satisfying evaluation, the usage of an empirical table was recommended and implemented. The results of this analysis can be found in the Supplementary Materials (Table S1). Interestingly, even though some data were earlier shown to be normally distributed, ExpertFit never evaluated a normal distribution as the best choice.
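ExpertFit itself is proprietary, but its fit-and-rank idea can be approximated for illustration: fit several candidate distributions and rank them by a goodness-of-fit statistic. The sketch below does this with scipy and the Kolmogorov–Smirnov statistic on placeholder data; ExpertFit's actual evaluation criteria may differ.

```python
# Rough analogue of ExpertFit's ranking (not its actual algorithm): fit a
# few scipy distributions to one step's times and rank by the KS statistic.
import numpy as np
from scipy import stats

times = np.array([11.2, 12.5, 13.8, 12.1, 14.9, 11.7, 12.9, 13.4, 15.2, 12.3])
candidates = [stats.lognorm, stats.gamma, stats.weibull_min, stats.norm]

results = []
for dist in candidates:
    params = dist.fit(times)                          # maximum-likelihood fit
    ks = stats.kstest(times, dist.name, args=params)  # goodness of fit
    results.append((ks.statistic, dist.name))

for ks_stat, name in sorted(results):                 # best fit first
    print(f"{name:12s} KS = {ks_stat:.3f}")
```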
During model verification, the FlexSim-generated data were compared to the historical batch data. Since the probability tests during data preparation proved that most processing times were not normally distributed, Mann–Whitney tests were performed in Minitab®.
Choosing the best way of model validation was challenging. Initially, a face validity check was performed: the head of production examined the models, their behavior, and their logic. For a historical data validation, the data pool of historical processing times was statistically too small. Therefore, an additional predictive validation was attempted, in which FlexSim-generated data were compared to the processing times of new campaigns.

3. Results

3.1. Model Development: Design and Building

Model design started with the creation of a detailed flow chart in Cmap Tools, resulting in a conceptual model. The flow chart includes all production steps and sub-steps, the operators, the average historical processing times, and the relevant machines. Connections and cross references in the overall process become evident here. Information on merging multiple process steps, the elimination or integration of process steps, and the required model logic was collected. The correctness of the flow chart was approved by the process owner, the head of production. Altogether, this step of model design ensured deep process knowledge and identified all the dependencies and necessary components for the subsequent FlexSim models. Since the manufacturing process is confidential, this process flow chart cannot be shown. However, the complexity of such production processes makes it highly recommendable to start with such a depiction and its approval by the head of production.
After the basic information was determined, model building in FlexSim began (Figure 3) following the common guiding idea to include only crucial attributes in the model and to keep the model as simple as possible [29]. Additionally, in the models, we assumed that no machine breakdowns or other major deviations would occur.
Initially, the floor plan (step 1) and the machines (step 2) were transferred from official documents into FlexSim (Figure 4). The machines are represented as so-called processors. There are three processor types: one to process an item, one to combine multiple items, and one to separate an item into multiple items.
The employees (step 3) are called operators. The headcount in the model, symbolized by the four male figures on the left, corresponds to the headcount of the original process.
Inserting the model logic (step 4) is one of the most complex parts of building the simulation model. Its basis is the batch documentation, which gives the order of the process steps. Some steps can be connected easily. More complex sequences occur when certain events condition other steps; e.g., the weighing of a new batch cannot start until certain steps of the previous batch are finished, even if the scales and operators are not occupied. Such conditions are considered in the model by triggers like opening and closing the processor ports, sending messages, or placing certain information on the items. It must also be determined which and how many operators perform each process step.
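The following sketch shows how one such condition can be expressed as an event trigger, again in simpy rather than FlexSim's port triggers; the batch count and durations are placeholders:

```python
# Sketch of one trigger from the model logic: weighing of batch n + 1 may
# only start once granulation of batch n has finished, even if scales and
# operators are free. Durations are illustrative placeholders.
import simpy

def batch(env, idx, prev_done, done):
    if prev_done is not None:
        yield prev_done                    # wait for the previous batch
    yield env.timeout(30)                  # weighing, min
    print(f"batch {idx} weighed in at t = {env.now} min")
    yield env.timeout(120)                 # granulation, min
    done.succeed()                         # trigger: release the next batch

env = simpy.Environment()
prev = None
for i in range(1, 5):                      # four-batch campaign
    done = env.event()
    env.process(batch(env, i, prev, done))
    prev = done
env.run()
```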
Next, the operating schedule (step 5) and the times off are entered. Working hours are from 7:00 a.m. to 3:45 p.m., with a breakfast break from 9:15 to 9:30 a.m. and a lunch break from 12:15 to 12:45 p.m. For some processors, it must be guaranteed that the entire process can be finished at once. Therefore, the operating schedules of the processors must be considered as well.
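The schedule arithmetic can be made concrete with a short sketch using the working hours quoted above; the finish-within-shift check ignores break windows for brevity and is our simplification, not the FlexSim implementation:

```python
# Net capacity of the one-shift day and a check whether a step that must
# run uninterrupted still fits before the end of the shift.
def minutes(h, m):
    return 60 * h + m

SHIFT = (minutes(7, 0), minutes(15, 45))
BREAKS = [(minutes(9, 15), minutes(9, 30)), (minutes(12, 15), minutes(12, 45))]

net = (SHIFT[1] - SHIFT[0]) - sum(end - start for start, end in BREAKS)
print(f"net working time: {net} min")  # 525 - 45 = 480 min = 8 h

def fits(now, duration):
    """Can an unstoppable step of `duration` min finish within this shift?
    (Break windows are ignored here for brevity.)"""
    return now + duration <= SHIFT[1]

print(fits(minutes(14, 0), 120))  # False: would run past 3:45 p.m.
print(fits(minutes(14, 0), 90))   # True: finishes by 3:30 p.m.
```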
Processing times (step 6) are then established as described in the statistical data processing. If a pairwise comparison revealed no statistical significance (Mann–Whitney test), the times for the different process steps of the two products, PINA and PEMB, were pooled (see Table S1 in Supplementary Materials: Weighing granulation liquid on the table and on the floor scale, dissolution of granulation liquid, mixing in a tumble blender, and weighing the coating on the table and on a floor scale).
Some working steps and times are not explicitly part of the batch documentation, which only provides the daily routine. These times are instead based on employees’ experiences and were also implemented in the model. As these times do not vary depending on the product, the implemented times were identical in all models (Table S2 in the Supplementary Material).
Not all process steps have the same importance: some process steps can be stopped, while others cannot, and only some require the attendance of an operator for the entire processing time. Therefore, the priorities (step 7) have to be defined.
Additionally, the scope (step 8) of the production must be implemented. The prevailing condition is campaign production with a certain number of batches. FlexSim also offers the implementation of actual dates.
In this case study, the most important model parameters to verify were the processing times, model logic, and operating schedules. Model verification is unique and strongly depends on the model itself; thus, finished components are rarely available. Sometimes, additional checkpoints and workarounds, such as labeling the so-called flow items with informational stickers, had to be implemented. FlexSim offers different statistical analysis modules that must be adapted to enable the verification of these parameters. Some of these analysis modules can be added via drag and drop and do not need further changes. To track the processed flow items and operators, each must be individually programmed in C++. The implementation of these elements for model verification is the last step of model building.
Testing of the models (step 9) can now be performed in single runs that are started and stopped manually. Alternatively, it is possible to use the FlexSim module Experimenter. Experimenter offers the opportunity to predefine the number of replications (runs), the statistics for evaluation, and the variables to compare, as well as to subsequently perform the necessary replications.
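What Experimenter automates can be emulated in plain Python for illustration: run a model function for a predefined number of seeded replications and summarize the endpoint. The function run_campaign below is a hypothetical stand-in, not the real model.

```python
# Sketch of the Experimenter idea: repeat a stochastic model run with
# different seeds and summarize the endpoint across replications.
import random
import statistics

def run_campaign(seed):
    """Hypothetical stand-in for the model: campaign duration in minutes
    as the sum of sampled step times (13 steps x 4 batches, placeholder)."""
    rng = random.Random(seed)
    return sum(rng.lognormvariate(3.0, 0.3) for _ in range(13 * 4))

durations = [run_campaign(seed) for seed in range(25)]  # 25 runs, as for PINA
mean = statistics.mean(durations)
sd = statistics.stdev(durations)
print(f"campaign duration: {mean:.0f} +/- {sd:.0f} min over {len(durations)} replications")
```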

3.2. Model Verification

The blue colored boxes (steps 4–6) in Figure 3 highlight the most important parameters to verify: simple but productive model logic, an accurate operating schedule for operators and processors, and the correct processing times for each process step. To establish a sound foundation for the FlexSim-generated data, the Experimenter module was used. The number of model replications was set to match the number of historical batch records (PINA: 25 runs; PEMB: 45 runs). While using the Experimenter, FlexSim accesses the deposited statistical distribution of each process step and thereby generates different processing times for each run. Afterwards, an interactive report and a performance measure report can be exported. The interactive report includes data on the model logic in item-trace Gantt charts and all data on the working schedules in state charts. These charts are produced replication-wise. The performance measure report includes a statistical summary, a replication plot, a frequency histogram, and the single values of all replications for each process step.

3.2.1. Model Logic

Verification of the model logic is complex to implement. The goal of this step is to prove that all process steps run in the correct order and that the conditions and dependencies between different process steps are correctly implemented.
Therefore, all items running through the model are tagged, and triggers are programmed for all processors to leave information on these tags. This type of information is best captured using item-trace Gantt charts. Figure 5 shows a schematic item-trace Gantt chart for one batch. One batch consists of six items, with two each for the granule, the granulation liquid, and the coating. Each item is pictured as one bar, and each process step is represented as one colored square of the bar. Following from top to bottom and left to right, the process order and dependencies of the different process steps become evident. As an example, consider the processing step for coating (light blue squares). This process requires the previously formed tablets (green square) and the newly dissolved coating (light yellow squares). Coating then combines these two items on one bar for the coated tablets, which are packed in the subsequent step (light pink). Hence, the process order and the processor type of the coater, a combiner, are verified. During model building, a trigger was set to withhold weighing the coating until compaction is almost finished to prevent long holding times for the liquid parts of the coating. This chart verifies the implementation of this trigger, since weighing (purple) starts shortly before the end of compaction (green). Overall, the item-trace Gantt charts were able to verify the overall model logic. Additionally, the overall processing time became evident, which is important for the subsequent comparison of different optimization scenarios.
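An item-trace Gantt chart of this kind can be rebuilt from any event log of the form (item, step, start, end); the sketch below uses matplotlib with a fabricated log, since the real FlexSim export cannot be shown:

```python
# Sketch of an item-trace Gantt chart from an event log. The log rows are
# fabricated placeholders; FlexSim exports the equivalent data in its
# interactive report.
import matplotlib.pyplot as plt

log = [  # (item, step, start_min, end_min)
    ("granule", "weighing",     0,  30), ("granule", "granulation",  30, 150),
    ("liquid",  "weighing",    10,  25), ("liquid",  "dissolution",  25,  70),
    ("tablets", "compaction", 160, 420),
    ("coating", "weighing",   400, 415), ("coating", "dissolution", 415, 460),
    ("tablets", "coating",    460, 580), ("tablets", "packaging",   580, 640),
]

items = sorted({row[0] for row in log})
fig, ax = plt.subplots()
for y, item in enumerate(items):
    spans = [(start, end - start) for it, _, start, end in log if it == item]
    ax.broken_barh(spans, (y - 0.4, 0.8))  # one bar per item
ax.set_yticks(range(len(items)))
ax.set_yticklabels(items)
ax.set_xlabel("time / min")
plt.show()
```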

3.2.2. Operating Schedules

The operating schedules are pictured in the state charts for the operators and processors. Each operator and each processor is represented by one bar, which is divided into several states, such as utilize, idle, or scheduled down. The x-axis shows the duration in days.
Figure 6 shows a schematic operator state chart (top) and a schematic processor state chart (bottom). These charts enable the modeler to verify break times (duration and fixed moments). Moreover, the capacity utilization of each operator and processor becomes evident in these charts. The process steps are arranged in chronological order. This way, one can follow a batch by starting with the utilized part of the first process step and watching it move downstream through the processors. The purple parts indicate processing times at which the process becomes stuck because of too few operators, while the light-yellow sections indicate blocked processors. FlexSim can export processing times and states in table form, making the export for further data analysis easy to handle. An evaluation of the state charts verified the break times and process order.

3.2.3. Processing Times

The last parameter to verify is the processing time. For this purpose, the performance measure report was used. Single FlexSim-generated values were transferred into Minitab®, where they were compared to the processing times of the historical batch data. Mann–Whitney tests at a confidence level of 0.95 were performed, as most data are not normally distributed. None of the generated processing times were significantly different from the historical processing times (Table 1).
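The corresponding test is straightforward to reproduce; the sketch below runs a two-sided Mann–Whitney test with scipy on two placeholder samples standing in for the historical and FlexSim-generated times of one process step:

```python
# Sketch of the verification test: compare simulated against historical
# times of one process step with a two-sided Mann-Whitney test at
# alpha = 0.05. Both samples below are fabricated placeholders.
from scipy import stats

historical = [11.5, 12.0, 13.2, 12.8, 11.9, 14.1, 12.4, 13.0]
simulated  = [12.1, 11.8, 13.5, 12.6, 12.2, 13.9, 12.0, 12.9]

res = stats.mannwhitneyu(historical, simulated, alternative="two-sided")
verdict = "not significantly different" if res.pvalue > 0.05 else "significantly different"
print(f"U = {res.statistic:.1f}, p = {res.pvalue:.3f}: {verdict}")
```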

3.3. Model Validation

To prove the reproducibility of the simulation models, a model validation was conducted. There are different options for validating computer models. Initially, it was chosen to prove the correctness and reasonability of our models by face validity. In addition, a predictive validation was performed, for which future production campaigns of PINA and PEMB were picked and specifications about the process flow, processing times, and campaign durations were defined. The campaigns, covering four batches for PINA and ten for PEMB, were run under normal conditions. Non-standard conditions, changes, and deviations were additionally monitored and documented. Afterwards, the relevant data were collected from the batch documentation, transferred into Minitab®, and compared to the FlexSim-generated data. Analyzing the new data showed a valid process flow. The other parameters, the processing times and the campaign duration, however, did not meet the specifications because of severe deviations during both campaigns. Due to confidentiality issues, detailed explanations are limited. Some of the deviations, such as machine breakdowns, were intentionally excluded during model description and therefore not considered in the FlexSim-generated data. Another very influential deviation was personnel shortage: on none of the production days was the full headcount available. On top of that, urgent non-campaign work cut the already reduced work capacities. Besides these issues, the predictive model validation gathered additional important information for the process owner and validated the model logic. The failure to meet the specifications was at least partly caused by a lack of the necessary personnel, which at least indirectly validates the models.

3.4. Model Application: Optimization and Evaluation of Fictive Shift Systems

After the successful verification and partial validation of the as-is model, realistic and meaningful changes in the real production processes were discussed with the head of production. As Figure 6 (bottom) illustrates, processors have long idle times with little utilization. This raised the question of whether the system could run more profitably under different shift systems. Profitability was measured as the total duration of one campaign and the labor costs. Labor costs included not only the salaries of the employees but also the costs for the machines' run times.
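As we read it from the text, the cost endpoint combines operator salaries and machine run-time costs over the campaign duration; the following sketch expresses that relation with invented rates, since the real rates are confidential:

```python
# Sketch of the cost endpoint as described in the text. All rates and
# durations are invented placeholders, not the confidential case-study data.
def labor_costs(duration_h, headcount, wage_per_h=40.0,
                machine_rate_per_h=25.0, machine_share=0.6):
    operator_cost = duration_h * headcount * wage_per_h
    machine_cost = duration_h * machine_share * machine_rate_per_h
    return operator_cost + machine_cost

one_shift = labor_costs(duration_h=96, headcount=4)  # fictitious OS campaign
two_shift = labor_costs(duration_h=48, headcount=6)  # fictitious TS campaign
print(f"OS: {one_shift:.0f} EUR, TS: {two_shift:.0f} EUR "
      f"({100 * (1 - two_shift / one_shift):.0f}% cheaper)")
```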

3.4.1. Establishment of Models with Different Shift Systems

The shift systems of interest were one-shift (OS), one-and-a-half-shift (OHS), and two-shift (TS). The related operating schedules can be found in Table 2.
The parameters varied between the scenarios were the operating schedule, the product type, and the number of operators. These variations were implemented manually without any optimization algorithm. In addition to simply changing the schedules and headcounts, other aspects must be considered. The campaign duration strongly depends on the weighing operations of the different batches. Therefore, it is important to identify the best times after the start of the campaign for weighing in each batch. This is done by supervising and evaluating the interactive report of the Experimenter module. These times are influenced by the shift system and the number of operators. The impact of this parameter on the overall campaign duration is visualized in Figure 7. The campaign of PINA includes four batches (left side), and the campaign of PEMB includes ten batches (right side). The last batch of PINA can be weighed in after 8 h under a TS system compared to 48 h using an OS system. For PEMB, the time could be reduced from 9 d to only 3 d after campaign start. Hence, a significant reduction in campaign duration was already expected when building the optimization scenarios.

3.4.2. Results of the Shift Systems

The implemented shift systems were evaluated by the number of successful replications, the utilization degree of the operators, and the campaign duration. A well-established model with suitable logic and processing times can complete a replication. Therefore, the presence of several successful replications indicates a harmonious model that can be used to evaluate model optimization. The predefined number of replications aligns with the replications used during model verification and thus with the number of available historical batch documents. As already mentioned, the schematic operator state charts (Figure 6, top) visualize the utilization degree of the operators. The mean campaign duration and the headcounts were used as the basis for the labor cost calculations.
Originally, the overall headcount of the case study was always four operators. Hence, in the first step, the optimization scenarios for both products with OS, OHS, and TS systems were built. The four operators worked simultaneously in the OS system and at staggered intervals in the OHS and TS systems. For some combinations, some replications did not finish (Table 3). It was also impossible to build a running TS model with four operators for PEMB, since only two operators were available for monitoring up to four simultaneously running processors. Obviously, such real-life limits of this process were also reflected by the computational models. The operator and processor state charts also indicated that too many processors needed an operator at the same time. As a result, alternative models were built featuring an increased headcount of six, with three operators always working simultaneously. A summary of the results for PINA and PEMB can be found in Table 3.
The production processes of PINA and PEMB are comparable; however, the number of batches per campaign for PEMB is 2.5 times larger than that for PINA. This makes the PEMB models both more susceptible and more relevant for choosing the optimal shift system.
The combination of six operators working in a TS yielded the shortest duration (PINA: −50%; PEMB: −53%) and the lowest labor costs (PINA: −9%; PEMB: −14%) for both products compared to the initial scenario. This is not surprising at first glance, since duration and headcount correlate and duration is one parameter of the labor cost calculations. As shown in Table 3, the durations of the OHS with four and six operators differ slightly for PINA due to different capacity utilizations. There was also no difference in the duration of the TS with four operators. Ultimately, having six operators in a TS decreased the duration by about 30% compared to having four operators in a TS or six operators in an OHS. The TS with six operators thereby decreased labor costs by 4% compared to the OHS with four operators and by 13% compared to the OHS with six operators. This noteworthy difference in labor costs highlighted the OHS with four operators as the second-best option for PINA.
For PEMB, the TS with six operators was found to be 28% faster and 10% cheaper than the second-best option. This second-best OHS used six operators, as its campaign duration differed significantly (15%) from that of the OHS with four operators. The extra production day produced only minimally higher labor costs (2%).
As previously mentioned, the obtained results indicate that production with six operators in a TS is superior to all types of production with four operators for both products. Here, however, only four operators were fully qualified. Therefore, the best option with a headcount of four is of great importance. For PINA and PEMB, the OHS is the best such option due to its faster production (PINA: 25%; PEMB: 23%) and lower labor costs (PINA: 5%; PEMB: 3%) compared to the prevailing OS system. The head of production confirmed the superiority of the OHS compared to the OS post hoc based on his own experience.

4. Discussion

Computer simulations enable testing and evaluating different production scenarios by changing relevant parameters in the corresponding model and running it. In this way, the best possible scenarios were found in this case study. The prevailing conditions of a small headcount and limited resources led to a user-friendly, practice-oriented simulation approach for optimizing two approved pharmaceutical production processes. The majority of the computer simulation studies in pharmaceutical supply chains and manufacturing reported in the introduction [5,16,19,24], which address the complex issue of planning or modeling a whole production process, were performed by experts in modeling and process design. In contrast, this study was conducted by non-computer experts who are experts in pharmaceutical technology and production. We deliberately modeled an already established process to show that there is still a lot of untapped optimization potential. Our determined potential savings of 50% of the campaign duration and of up to 14% of the labor costs highlight the significance of this approach. Less complex optimization attempts, such as that published by Bähner et al., who evaluated machine utilization, benefit from the fact that time-consuming modeling is not required [30]. However, only the processor utilization is mapped there, and the operator utilization is disregarded. With modeling, a formal proof of validity is difficult, especially when only a small amount of historical data is available; on the other hand, once implemented, a model offers more application possibilities. With still feasible effort, our simulations are intended to help close the gap in published industrial case studies.

4.1. Case Study Limitations

While a case study allows one to obtain detailed and usually well-protected information on specific production processes, it is also limited to them. The deliberate exclusion of extraordinary events, such as breakdowns or process times with deviations, at the very beginning defined the simulations as representing standard processes without any incident. In addition, data collection and analysis highlighted two challenges in this case study. First, time recording was performed manually and minute-wise, which is disadvantageous for short processing times: setting up the scales takes 8 min on average, which is only 3% of the compaction time, yet the time resolution for both process steps is the same. Second, each process step was started, and sometimes stopped, manually. Therefore, the processing times strongly depend on the availability of an operator. This produces high relative standard deviations for short processing times (PINA: 5–115%; PEMB: 6–75%), although the production process remains within the necessary specifications. As an example, the process step of compulsory mixing takes 11–20 min in the historical batch documentation for PINA. This time must be split into the actual mixing time (10 min, fixed) and the manual setting-up/starting/stopping time (1–10 min, operator dependent). Here, a waiting time of 10 min has a much stronger impact on the relative standard deviation than the same waiting time has on a compaction process with a mean duration of 269 min. The impact of these limitations on the significance of this case study is nevertheless acceptable. Even though FlexSim offers the integration of breakdowns, a substantiated analysis of past quality issues would have been necessary and was beyond the scope of this work. Nevertheless, relevant assumptions can be made. The investigated solid production processes have a linear structure; a total breakdown of one processor stops the production of all subsequent batches. Moreover, the start of some batches will additionally be delayed, since intermediate products have limited shelf lives in the validated processes. Such factors prolong the campaign duration but do not influence the processing times of single process steps. The effects of the imprecise time recording and low time resolution on the overall model are also not critical, as short processing times have only a small impact on the duration of the entire campaign. The dependency on available operators likewise has a small impact on the campaign duration but a great impact on the model logic. This limitation is negligible, since all employees are well trained to prioritize between different steps and since those priorities are implemented in the models as well.

4.2. Case Study Outcomes

In addition to the main aim of this paper (the optimization of existing processes by discrete-event simulations), the implementation of this intuitive process can be evaluated. Initially, the steps of data collection up to model verification were very time-intensive but gave precious insight into the production processes. The as-is states of the actual production processes were scrutinized more intensively than those during daily routines. The obtained results challenged the workflow and the dependencies between different working steps. During data analysis, process steps with significant economic potential were identified; thus, analyses were performed, and possible improvements were developed. Even without actually running any discrete-event simulations, significant knowledge was gained. Therefore, it can be assumed that process owners already profit from such case studies regardless of the simulation outcomes.
Knowledge of the relevant processes and regulations (e.g., GMP, quality management, galenics, and marketing authorization) is important for successful model development in a pharmaceutical production setting. Model implementation is manageable and worth the effort, as the present case study demonstrates. Most elements of model building are performed using simple drag and drop options, apart from the implementation of complex logic. Creating possibilities to track processing times and process flow requires more complex logic and, therefore, low-level programming. It is, however, possible to overcome these obstacles, especially after gaining some experience. Consequently, the necessary efforts during model building and verification decrease strongly for any other comparable production process. The next step, model validation, has already been described as a crucial point for model-based debottlenecking approaches. Irregular or unsteady production steps are known to complicate or even preclude a successful model validation [30]. Unfortunately, unfavorable circumstances also led to a failed predictive model validation here. It is therefore of great importance to choose the best validation strategy. The efforts for a historical data validation or a predictive validation are easily manageable, since the data handling is identical to that used during model verification. Establishing optimization scenarios is less time-intensive and challenging, even though the optimization module in FlexSim is not part of the applied student version. This means that all optimization scenarios had to be developed and implemented from scratch. While the results are clear on the superiority of six compared to four operators, more factors need to be considered. The examined work only covers bulk production, which is only one part of the entire production process and strongly depends on other departments, such as warehousing and quality control. As long as warehousing and quality control work in an OS system, sufficient cooperation with the bulk production working in the OS or OHS system is ensured. Modifications and adjustments must only be made when a TS system is established. The establishment of a TS system in bulk production can also produce a more rigid structure, decrease spontaneity, and increase the headcount. The necessary financial and work inputs needed to qualify two more operators for the production of about 20 products must also be taken into consideration. The impact of these disadvantages could be tested with a pilot run of a TS system. It could also be further explored whether an increased headcount in other departments would be mandatory and whether the pressure on the involved staff is bearable.
Besides analysis of the effects of different shift systems, further simulations could examine more extensive questions, such as a change of the process layout. The simulated processes are based on a long-established production site. Hence, the layout and consequently the equipment localization depend on the floor plan and the structural conditions, such as media supply systems and electrical equipment. As shown in Figure 4, the current floor plan does not allow a lean production flow; an according spaghetti diagram would reveal inefficient transportation and employee movements. A re-layout including a rebuilding of the manufacturing premises of the site would enable an efficient and continuous workflow with significantly reduced non-value-added time (NVA), such as work in process inventory (WIP) or repeating pathways. Simulations could test the benefits of a re-layout and thereby also substantiate such far-reaching considerations.

5. Conclusions

Due to stiff regulations and a large operational complexity in multi-product, batch-operated sites, pharmaceutical companies sometimes operate under suboptimal conditions. Optimizing such pharmaceutical production processes via computer simulations not only appears to be practical but is also highly recommendable, even for relatively small companies. This paper demonstrates that discrete-event simulations of pharmaceutical productions (i) are feasible with FlexSim, (ii) can be conducted by non-computer experts, and (iii) may significantly improve the production performance. Modeling the entire production process initially requires some effort. However, by considering the entire process, this model can be applied with more flexibility compared to other bottleneck identification methods focusing only on the technical equipment. In this case study, the entire process simulation was conducted by one person, and it was possible to reduce the campaign duration by 50% for both products. If more resources were available, the benefits would likely improve and be determined more quickly. From a user perspective, more easily applicable tools and possibilities to effortlessly standardize modeling are desirable, especially for data acquisition and model verification. However, developing such tools is only compelling for software developers if the demand is high enough. A widespread use of discrete-event simulations in pharmaceutical companies may, therefore, potentiate the present possibilities of such simulations.

Supplementary Materials

The following are available online at https://0-www-mdpi-com.brum.beds.ac.uk/2227-9717/9/1/67/s1, Table S1: Listing of all integrated process steps with the best fitting statistical distributions according to ExpertFit; Table S2: Listing of additional work steps that are product independent.

Author Contributions

Conceptualization, S.H., F.S., and C.-M.L.; methodology, S.H., N.S., T.M.B., F.S., and C.-M.L.; software, S.H.; validation, S.H.; formal analysis, S.H.; investigation, S.H.; resources, T.M.B. and T.R.; data curation, S.H.; writing—original draft preparation, S.H.; writing—review and editing, S.H., N.S., T.M.B., B.L., T.R., F.S., and C.-M.L.; visualization, S.H.; supervision, N.S., T.M.B., B.L., F.S., and C.-M.L.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable

Informed Consent Statement

Not applicable

Data Availability Statement

Patent protection. Restrictions apply to the availability of these data. Data was obtained from protected company information. Data sharing is not applicable to this article.

Acknowledgments

The authors are thankful to Ralf Gruber, Ingenieurbüro für Simulationsdienstleistung, Kirchlengern, Germany, for providing support in the programming and handling of FlexSim. We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) and Saarland University within the funding program Open Access Publishing.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kvesic, D.Z. Product lifecycle management: Marketing strategies for the pharmaceutical industry. J. Med. Mark. 2008, 8, 293–301.
  2. Federsel, H. Process R&D under the magnifying glass: Organization, business model, challenges, and scientific context. Bioorg. Med. Chem. 2010, 18, 5775–5794.
  3. Brandes, R. Der Entwurf zum neuen Annex 1. Pharm. Ind. 2018, 5, 671–680.
  4. Kleemann, A. Metamorphosis of the Pharmaceutical Industry. Pharm. Ind. 2013, 75, 562–574.
  5. Habibifar, N.; Hamid, M.; Bastan, M.; Azar, A.T. Performance optimisation of a pharmaceutical production line by integrated simulation and data envelopment analysis. Int. J. Simul. Process Model. 2019, 14, 360–376.
  6. Behr, A.; Brehme, V.; Ewers, C.; Grön, H.; Kimmel, T.; Küppers, S.; Symietz, I. New Developments in Chemical Engineering for the Production of Drug Substances. Eng. Life Sci. 2004, 4, 15–24.
  7. WHO. Tuberculosis. Available online: https://www.who.int/en/news-room/fact-sheets/detail/tuberculosis (accessed on 22 March 2020).
  8. WHO. Guidelines for Treatment of Drug-Susceptible Tuberculosis and Patient Care, 2017 Update; Annex 6; World Health Organization: Geneva, Switzerland, 2017.
  9. Banks, J.; Carson, J.S.; Nelson, B.L.; Nicol, D.M. Discrete-Event System Simulation, 4th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2005.
  10. Neelamkavil, F. Computer Simulation and Modelling, 1st ed.; Wiley: Chichester, UK, 1987.
  11. Oberkampf, W.L.; Roy, C.J. Verification and Validation in Scientific Computing; Cambridge University Press: Cambridge, UK, 2010.
  12. Rotstein, G.; Papageorgiou, L.; Shah, N.; Murphy, D.; Mustafa, R. A product portfolio approach in the pharmaceutical industry. Comput. Chem. Eng. 1999, 23, S883–S886.
  13. Laínez, J.M.; Schaefer, E.; Reklaitis, G.V. Challenges and opportunities in enterprise-wide optimization in the pharmaceutical industry. Comput. Chem. Eng. 2012, 47, 19–28.
  14. Blau, G.E.; Pekny, J.F.; Varma, V.A.; Bunch, P.R. Managing a Portfolio of Interdependent New Product Candidates in the Pharmaceutical Industry. J. Prod. Innov. Manag. 2004, 21, 227–245.
  15. Shah, N. Pharmaceutical supply chains: Key issues and strategies for optimisation. Comput. Chem. Eng. 2004, 28, 929–941.
  16. Sundaramoorthy, A.; Karimi, I.A. Planning in Pharmaceutical Supply Chains with Outsourcing and New Product Introductions. Ind. Eng. Chem. Res. 2004, 43, 8293.
  17. Chen, Y.; Mockus, L.; Orcun, S.; Reklaitis, G.V. Simulation-optimization approach to clinical trial supply chain management with demand scenario forecast. Comput. Chem. Eng. 2012, 40, 82–96.
  18. Parker, J.; Lamarche, K.; Chen, W.; Williams, K.; Stamato, H.; Thibault, S. CFD simulations for prediction of scaling effects in pharmaceutical fluidized bed processors at three scales. Powder Technol. 2013, 235, 115–120.
  19. Sen, M.; Chaudhury, A.; Singh, R.; John, J.; Ramachandran, R. Multi-scale flowsheet simulation of an integrated continuous purification-downstream pharmaceutical manufacturing process. Int. J. Pharm. 2013, 445, 29–38.
  20. Benyahia, B.; Lakerveld, R.; Barton, P.I. A Plant-Wide Dynamic Model of a Continuous Pharmaceutical Process. Ind. Eng. Chem. Res. 2012, 51, 15393–15412.
  21. Matsunami, K.; Sternal, F.; Yaginuma, K.; Tanabe, S.; Nakagawa, H.; Sugiyama, H. Superstructure-based process synthesis and economic assessment under uncertainty for solid drug product manufacturing. BMC Chem. Eng. 2020, 2, 1–16.
  22. Makrydaki, F.; Georgakis, C.; Saranteas, K. Dynamic Optimization of a Batch Pharmaceutical Reaction using the Design of Dynamic Experiments (DoDE): The Case of an Asymmetric Catalytic Hydrogenation Reaction. IFAC Proc. Vol. 2010, 43, 260–265.
  23. Kaylani, H.; Atieh, A.M. Simulation Approach to Enhance Production Scheduling Procedures at a Pharmaceutical Company with Large Product Mix. Procedia CIRP 2016, 41, 411–416.
  24. Saadouli, H.; Jerbi, B.; Dammak, A.; Masmoudi, L.; Bouaziz, A. A stochastic optimization and simulation approach for scheduling operating rooms and recovery beds in an orthopedic surgery department. Comput. Ind. Eng. 2015, 80, 72–79.
  25. Lin, J.T.; Chen, C.-M. Simulation optimization approach for hybrid flow shop scheduling problem in semiconductor back-end manufacturing. Simul. Model. Pract. Theory 2015, 51, 100–114.
  26. Caggiano, A.; Bruno, G.; Teti, R. Integrating Optimisation and Simulation to Solve Manufacturing Scheduling Problems. Procedia CIRP 2015, 28, 131–136.
  27. Florida Institute for Human and Machine Cognition. CmapTools. Available online: https://cmap.ihmc.us/cmaptools/ (accessed on 22 March 2020).
  28. Minitab. Minitab Solutions: Lean Six Sigma. Available online: https://www.minitab.com/en-us/products/companion/solutions/lean-six-sigma/ (accessed on 24 March 2020).
  29. Stachowiak, H. Allgemeine Modelltheorie; Springer: Wien, Austria; New York, NY, USA, 1973.
  30. Bähner, F.D.; Huusom, J.K. A Debottlenecking Study of an Industrial Pharmaceutical Batch Plant. Ind. Eng. Chem. Res. 2019, 58, 20003–20013.
Figure 1. Methodological approach of this case study: Seven steps (blue boxes) were implemented from model description until its application as an optimizing tool. The according sub-steps are listed underneath and include further information, such as applied software and cross references to other figures and tables.
Figure 2. Chronological workflow of statistical data processing: Minitab® was used during data preparation and model verification to analyze the historical batch data and later to compare them to the FlexSim-generated data. The ExpertFit tool of FlexSim automatically fitted the historical batch data to over 20 distributions to identify the best representation of each process step during model building.
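To illustrate the distribution-fitting step for readers without access to ExpertFit, the following minimal Python sketch ranks a few candidate distributions by their Kolmogorov–Smirnov statistic. The sample data, the candidate list, and all variable names are illustrative assumptions, not the study's data or tooling.

```python
# Illustrative sketch (not the study's actual data or tooling): fit several
# candidate distributions to a sample of processing times and rank them by
# the Kolmogorov-Smirnov statistic, similar in spirit to FlexSim's ExpertFit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical processing times [min] for one process step, e.g., coating.
times = rng.lognormal(mean=4.0, sigma=0.15, size=60)

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(times)                    # maximum-likelihood fit
    ks = stats.kstest(times, dist.cdf, args=params)
    results.append((ks.statistic, name))

for ks_stat, name in sorted(results):
    print(f"{name:10s} KS statistic = {ks_stat:.3f}")
# The distribution with the smallest KS statistic is the best candidate to
# feed back into the simulation model as that step's processing time.
```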
Figure 3. Steps for building a process model using FlexSim: in the first third, simple facts establish the foundation of the model. The subsequent steps, inserting the model logic, the operating schedule, and the processing times, are more complex and contain the most critical attributes of the model to be verified (colored blue).
Figure 4. Simplified floor plan in FlexSim: the production area is depicted, including processors (blue boxes; the numbers represent the chronological order of the process steps) and further details (grey boxes). Brown bars represent areas that are not used for the production of the investigated products.
Figure 5. Example of the FlexSim-generated item-trace Gantt chart for model verification: the chart represents the production of one batch, including the most important process steps. Granules, granulation liquid, and coating are necessary to produce film-coated tablets. Floor and table scales are used for each of these components, as different mass ranges are weighed. Each bar symbolizes one of these components, depending on the initially used scale; hence, one batch consists of six bars. The different-colored subdivisions of the bars show the finished process steps of each component. This type of item-trace Gantt chart allows one to (i) check the correct logical order of the process steps; (ii) check whether the dependencies are correct (e.g., the weighing of the coating can only start shortly before the end of compaction to avoid long holding times); and (iii) determine the total campaign duration.
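As an illustration of how such an item-trace Gantt chart can be drawn outside of FlexSim, the following sketch uses matplotlib; the step names and times are invented placeholders, not the actual batch data.

```python
# Minimal sketch of an item-trace Gantt chart as used for model verification.
# The process steps and times below are invented placeholders; FlexSim
# generates the real chart automatically.
import matplotlib.pyplot as plt

# (step name, start [h], duration [h]) for one hypothetical batch component
trace = [
    ("Weighing",    0.0, 0.5),
    ("Granulation", 0.5, 2.0),
    ("Sieving",     2.5, 0.5),
    ("Blending",    3.0, 1.0),
    ("Compaction",  4.0, 2.5),
    ("Coating",     6.5, 3.0),
]

fig, ax = plt.subplots(figsize=(8, 2.5))
colors = plt.cm.tab10.colors
for i, (name, start, dur) in enumerate(trace):
    # One colored segment per finished process step, all on the same bar.
    ax.broken_barh([(start, dur)], (0, 0.8),
                   facecolors=colors[i % 10], label=name)

ax.set_xlabel("Time [h]")
ax.set_yticks([])
ax.legend(loc="upper left", bbox_to_anchor=(1.01, 1.0), fontsize=8)
ax.set_title("Item trace for one batch (illustrative)")
plt.tight_layout()
plt.show()
# Checking that the segments appear in the expected order and do not overlap
# corresponds to verification points (i) and (ii) in the caption above.
```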
Figure 6. Example of the FlexSim-generated state operator chart (top) and state processor chart (bottom): two work days of production are depicted, including the most important states of the six operators and of the last process steps. Each operator is symbolized by one bar, whose different-colored subdivisions represent the operator's states over time. The processors are likewise represented by bars divided into different-colored states. The major outcomes of both charts are the verification of break, lunch, and after-work hours as well as of capacity utilization (idle/utilize). This enables one to (i) test the model (chronological order) and (ii) identify bottlenecks (capacity utilization of operators and processors).
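The idle/utilize split visible in the state charts can also be quantified from an exported state log. The following sketch assumes a simple (state, duration) record format, which may differ from FlexSim's actual export.

```python
# Sketch of how the idle/utilize split in the state charts can be quantified
# from an exported state log. The log format and numbers are assumptions.
from collections import defaultdict

# Hypothetical state log for one operator: (state, duration in minutes)
state_log = [
    ("utilize", 95), ("idle", 25), ("utilize", 110),
    ("break", 15), ("utilize", 80), ("idle", 40), ("lunch", 30),
]

totals = defaultdict(float)
for state, minutes in state_log:
    totals[state] += minutes

total_time = sum(totals.values())
for state, minutes in sorted(totals.items()):
    print(f"{state:8s} {minutes:6.0f} min ({100 * minutes / total_time:5.1f}%)")
# A high idle share flags over-staffing (the down arrows in Table 3);
# a utilize share near 100% flags potential work overload.
```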
Figure 7. Summary of the optimal starts for weighing the granules of all batches after campaign start for PINA (left) and PEMB (right): the cylinders represent batches, and the numbers indicate the batch number within the campaign; the campaign of PINA therefore comprises four numbered cylinders and that of PEMB ten. The optimal start depends on the applied shift system and, only for the one-shift system of PINA, also on the number of operators; in all other cases, the processors limit the weighing strategies. The start times have a significant influence on the overall campaign duration.
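Conceptually, such optimal start times can be found by grid-searching the start offsets and keeping the schedule with the shortest simulated campaign. In the sketch below, run_campaign is a hypothetical stand-in for a call into the simulation model (it uses a toy single-bottleneck duration model), and all numbers are illustrative.

```python
# Conceptual sketch: grid-search the weighing start offset of each batch and
# keep the schedule with the shortest campaign. `run_campaign` is a toy
# stand-in for one FlexSim run; offsets and times are pure illustration.
from itertools import product

def run_campaign(offsets_h):
    """Toy stand-in for one simulation run: a single bottleneck processor
    handles batches serially, and a batch cannot start before its offset."""
    base = 8.0  # hypothetical processing time per batch [h]
    end = 0.0
    for off in offsets_h:
        end = max(end, off) + base
    return end

candidate_offsets = [0, 2, 4, 6, 8]   # allowed start times [h]
batches = 4                           # e.g., a PINA campaign

best = min(
    (run_campaign(offs), offs)
    for offs in product(candidate_offsets, repeat=batches)
    if all(a <= b for a, b in zip(offs, offs[1:]))  # keep batch order
)
print(f"shortest campaign: {best[0]:.1f} h with starts {best[1]}")
```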
Table 1. Statistical analysis: results of the probability plots for each process step and of the Mann–Whitney tests comparing the historical batch data to the FlexSim-generated data for the products PINA and PEMB. At a significance level of 0.05, there was no significant difference between the historical batch data and the FlexSim-generated data (all p-values exceed 0.05).

Process Step | Normally Distributed? (PINA / PEMB) | p-Value (PINA / PEMB)
Dissolution of granulation liquid | Yes / No | 0.760 / 0.243
Compulsory mixer | Yes / No | 0.594 / 0.479
Fluid bed granulation | No / No | 0.993 / 0.127
IPC moisture analysis | No / No | 0.806 / 0.527
Sieving | No / Yes | 0.767 / 0.602
Tumble blender | No / No | 0.319 / 0.110
Compaction | Yes / No | 0.331 / 0.107
Dissolution of coating | Yes / No | 0.399 / 0.561
Coating | Yes / Yes | 0.679 / 0.246
Packaging | No / Yes | 0.886 / 0.086
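The study performed these tests in Minitab®; the following sketch shows equivalent checks with scipy.stats on invented data, using the Shapiro–Wilk test as a programmatic alternative to probability plots.

```python
# Sketch of the statistical checks behind Table 1, on invented data:
# (1) a normality test per sample, (2) a Mann-Whitney U test comparing
# historical batch times against FlexSim-generated times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
historical = rng.lognormal(mean=3.9, sigma=0.2, size=30)  # placeholder [min]
simulated = rng.lognormal(mean=3.9, sigma=0.2, size=30)   # placeholder [min]

# Normality check (Shapiro-Wilk as one common programmatic choice):
for name, sample in [("historical", historical), ("simulated", simulated)]:
    w, p = stats.shapiro(sample)
    print(f"{name}: normal? {'yes' if p >= 0.05 else 'no'} (p = {p:.3f})")

# Nonparametric comparison, appropriate since normality cannot be assumed
# for every process step:
u, p = stats.mannwhitneyu(historical, simulated, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
# p >= 0.05 means no significant difference is detected, i.e., the simulated
# processing times are statistically indistinguishable from the batch records.
```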
Table 2. Operating schedules: operating schedules of the optimization scenarios, classified into the shift models of the one-shift (OS), one-and-a-half-shift (OHS), and two-shift (TS) systems. The working hours of the one-shift system were adjusted according to an in-company agreement. * Different weekly hours applied for some operators in the past.

 | One-Shift (OS) * | One-and-a-Half-Shift (OHS) | Two-Shift (TS)
Operating schedule | 07:00 a.m.–03:15/03:45 p.m. | 06:00 a.m.–02:15 p.m. and 09:15 a.m.–05:30 p.m. | 06:00 a.m.–02:15 p.m. and 02:00 p.m.–10:15 p.m.
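To make the schedules concrete, the sketch below encodes the three shift systems as daily time windows and compares their production coverage. The omission of breaks and the choice of the 03:45 p.m. end for the one-shift system are simplifying assumptions.

```python
# Sketch: encoding the operating schedules of Table 2 as time windows and
# computing the daily production window of each shift system. Breaks and the
# exact FlexSim schedule format are deliberately simplified.
from datetime import datetime

def window(start, end):
    fmt = "%H:%M"
    return (datetime.strptime(start, fmt), datetime.strptime(end, fmt))

shift_systems = {
    "one-shift (OS)": [window("07:00", "15:45")],
    "one-and-a-half-shift (OHS)": [window("06:00", "14:15"),
                                   window("09:15", "17:30")],
    "two-shift (TS)": [window("06:00", "14:15"),
                       window("14:00", "22:15")],
}

for name, shifts in shift_systems.items():
    earliest = min(s for s, _ in shifts)
    latest = max(e for _, e in shifts)
    hours = (latest - earliest).total_seconds() / 3600
    print(f"{name:28s} {earliest:%H:%M}-{latest:%H:%M} ({hours:.2f} h/day)")
# The TS system nearly doubles the daily production window of the OS system,
# which is consistent with the roughly halved campaign durations in Table 3.
```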
Table 3. Results of the different shift models for PINA and PEMB: the campaigns of PINA consisted of four batches and those of PEMB of ten batches, which defines the model scope. The number of successful replications indicates whether the model is stable. The arrows symbolize the utilization degree of the operators (↓ = some idleness, ↓↓ = much idleness, ↓↓↓ = operator is barely working, ↑↑↑ = work overload, ✔ = appropriate workload); one utilization entry is reported per shift system. Additional metrics of interest are the campaign duration (mean ± standard error of the mean) and the labor costs. For both products, the two-shift system with six operators yielded the shortest campaign duration and the lowest labor costs.

PINA | OS (4 Op) | OS (6 Op) | OHS (4 Op) | OHS (6 Op) | TS (4 Op) | TS (6 Op)
Replications | 30/30 | 30/30 | 30/30 | 30/30 | 28/30 | 30/30
Operator utilization | OS: Op5 + Op6 ↓↓ | OHS: Op6 ↓↓ | TS: ↑↑↑
Duration mean [d] | 3.2 ± 0.00 | 3.2 ± 0.00 | 2.4 ± 0.02 | 2.3 ± 0.00 | 2.3 ± 0.02 | 1.6 ± 0.02
Labor costs [€] | 7488 | 8928 | 7128 | 7866 | 8798 | 6840

PEMB | OS (4 Op) | OS (6 Op) | OHS (4 Op) | OHS (6 Op) | TS (4 Op) | TS (6 Op)
Replications | 41/45 | 41/45 | 45/45 | 44/45 | --/45 | 42/45
Operator utilization | OS: Op4 − Op6 ↓↓ | OHS: Op1 − Op6 ↓ | TS: ↑↑↑
Duration mean [d] | 9.4 ± 0.01 | 9.4 ± 0.01 | 7.2 ± 0.01 | 6.1 ± 0.02 | -- | 4.4 ± 0.01
Labor costs [€] | 21,996 | 26,226 | 21,384 | 20,862 | -- | 18,810
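The relative savings of the best scenario can be derived directly from the table. The sketch below assumes the one-shift system with four operators as the baseline and the two-shift system with six operators as the best case.

```python
# Sketch: deriving the relative savings of the best two-shift scenario from
# the Table 3 figures. Baseline assumption: one-shift system, four operators.
scenarios = {
    "PINA": {"baseline": (3.2, 7488), "best": (1.6, 6840)},    # (days, EUR)
    "PEMB": {"baseline": (9.4, 21996), "best": (4.4, 18810)},
}

for product_name, s in scenarios.items():
    (d0, c0), (d1, c1) = s["baseline"], s["best"]
    print(f"{product_name}: campaign duration -{100 * (d0 - d1) / d0:.0f}%, "
          f"labor costs -{100 * (c0 - c1) / c0:.0f}%")
# Expected output:
# PINA: campaign duration -50%, labor costs -9%
# PEMB: campaign duration -53%, labor costs -14%
```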