Self Teaching Guide
The self teaching guide is designed to walk you through a series of interesting lessons and exercises to help you become proficient with ProcessModel.
Install the Software
A model is made up of multiple files including additional temporary files which are created during simulation. Creating a model “package” is the method ProcessModel uses to bundle the essential files together in a compressed (.spg) format. This facilitates moving models to a new location (either on your computer or on another computer), or creating multiple versions of a model as you build your application. To open or install a model package, open ProcessModel and click the Open option from the File menu. Then browse for the package file. Don’t use WinZip (or other zipping program) to create or install package files.
When opening a model file you will be opening a .SPG file. To open a model you must start the ProcessModel application and click the Open option from the File menu, then browse for your model file. Do not simply double click on a model file from My Computer or Windows Explorer. Doing so may not open the application properly.
To aid in your learning process, each lesson will have questions to check your comprehension of the concepts being covered. Your answers can be verified by checking the Appendix at the end of this guide.
Self Teaching Guide Conventions
The following icons will be used throughout this manual to indicate sections to take note of.
|Action to take|
|Important information to be aware of|
|Tip or something to think about|
|Questions to answer in order to confirm comprehension of information covered|
|Watch tutorial video|
|Bold||Menu items to click, text to be typed, or other actions to take|
Continue with Lesson 1: Your First Model
Lesson 1: Your First Model
See Overview of Modeling with ProcessModel video tutorial.
See Learn How to Use the Interface video tutorial.
Operating Parameters and Requirements
In our first lesson we will model a small call center. The objective of this lesson is to find the appropriate staffing level for a given call volume. We will be modifying the model itself, as well as changing some parameters in order to reach the objective.
Help desk staff (Level 1 resource) make $8/hr while support staff (Level 2 resource) make $12/hr.
75% of the calls taken can be resolved over the phone within the first 2 minutes. Once the question is answered, the call exits the system. The remaining 25% of calls are more difficult and require additional research. The research function takes 20 minutes to complete. Following the research, the call must be returned to the customer. This call back takes 3 minutes.
New calls come into the call center every 5 minutes. Management has determined that customers with a difficult issue should not wait, on average, more than 60 minutes to receive a call back.
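Before simulating, it is worth doing a quick workload check on the Level 2 resource using only the parameters above. The following is simple arithmetic in Python, not ProcessModel output, and it anticipates what the simulation will reveal:

```python
# Back-of-the-envelope check of the Level 2 workload, using only the
# operating parameters stated above. This is plain arithmetic, not a
# simulation result.

calls_per_hour = 60 / 5            # one new call every 5 minutes
difficult_share = 0.25             # 25% of calls need research
difficult_per_hour = calls_per_hour * difficult_share

level2_minutes_per_call = 20 + 3   # research (20 min) + call back (3 min)
offered_load = difficult_per_hour * level2_minutes_per_call

print(f"Difficult calls per hour: {difficult_per_hour}")
print(f"Level 2 work generated per hour: {offered_load} minutes")
# 69 minutes of Level 2 work arrive every 60 minutes, so a single
# Level 2 resource cannot keep up and a queue will grow over time.
```

Keep this number in mind as you simulate: 69 minutes of work arriving per 60-minute hour means the single Level 2 resource is overloaded, which explains the long cycle times you are about to see.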
Open the Model
Start ProcessModel and open the Lesson 1 – Call Center model package by selecting the Open option from the File menu.
This will install the model files from the package but keep the package intact. That way you can change the model as much as needed. Then, if you ever need to go back to the original version of the model, you can just install the package again. Just make sure you use File / Save As to change the name of your model before installing the package again. Otherwise you will overwrite your current model of the same name.
You should now see the model shown below.
Answer the following questions without simulating the model. Supposing 200 calls arrived:
1. How many calls exit without requiring research?
2. How long does each simple call take to complete?
3. How many calls require research?
4. How long does it take to service the difficult calls?
Simulating the Model
To simulate the model, click the Simulation menu and then select the Save & Simulate option.
The upper left region of the simulation window is called the Scoreboard. It shows some key statistics for entities which have completed the process and left the model. The upper right section of the window shows the simulation clock. 30 hours and 23 minutes into the simulation, the 63 completed difficult calls have taken an average of 681.24 minutes to make it through the process. That may be a surprise since the total activity time for the model is only 25 minutes. One of the benefits of simulation is to demonstrate what will actually happen, rather than what is supposed to happen, or what you think will happen.
You can speed up the simulation or slow it down by dragging the scroll bar at the top of the window to the right or left.
The indicator lights above the resources show their activity. Red means the resource is on break or off shift. Blue means the resource is on shift but waiting for work. Green means the resource is currently working on an entity.
The number above and to the left of an activity represents the contents of its input queue. In this case there are 14 calls waiting to be returned because the activity is currently busy. The number below an activity shows the current contents of a multi-capacity location. The Perform Research activity has a capacity of 10, but since there is only 1 Level 2 resource, only 1 entity can be processed at a time at that location.
View the Output Report
Following the simulation you will be asked if you want to see the results. Click No. After returning to ProcessModel, click the output report button.
Scroll down to the Resources section. You’ll notice that the Level 2 resource was busy 96.7% of the time while the Level 1 resource was only busy 40% of the time.
Scroll down to the Entity Summary section. There you see that 70 Difficult calls left the model and took an average of 669.69 minutes to do it.
Scroll to the top to view the Activities section. Notice the input queue for the Return Call activity had an Average Contents of 21.76 and an average wait time of 497.37 minutes. That’s over 8 hours in that activity alone before the call was returned. Our manager is not happy!
Solution 1 – Hire More Staff
Close the output report by clicking File and selecting Exit. Then double click on the Level 2 resource and change the Quantity field to 2.
Save this as a new file by clicking the Save As option on the File menu. Save the file using the name Lesson 1 – Solution 1. Now simulate your model and find the average cycle time for the Difficult calls.
5. How much improvement was there in the cycle time?
6. How did the resource utilization change?
7. Is that utilization change good or bad?
Solution 2 – Change Entity Handling
While there was a dramatic improvement in cycle time, Solution 1 will increase cost. Let's try another approach, where the Level 2 resource stays with the entity from the time research begins until after the call is returned.
Open the original call center model by clicking File and then Open. Select the Lesson 1 – Call Center model. Change the Get and Free connector from Perform Research to Level 2 to a GET.
Also change the Get and Free connector from Level 2 to Return Call to a Free.
Now, instead of the Level 2 resource being freed immediately after the Perform Research work is complete, the resource will travel with the Difficult call through the Return Call activity until the call is complete and has been returned.
Save this new model as Lesson 1 – Solution 2. Simulate the model.
The average cycle time is now 212.27 minutes which is not as good as in Solution 1. However, there is no increase in cost. The Perform Research input queue has an average contents of 8.28 and an average time of 179.23 minutes.
8. Where is the queue buildup now?
9. What happened to the input queue buildup at Return Call?
Solution 3 – Change Responsibilities
Since the Level 1 resource has some free time, let’s see what happens if we have them return calls instead of the Level 2 resource.
Close any models you have open. Open the Lesson 1 – Call Center model. Click the dashed line connecting the Level 2 resource to the Return Call activity. Press the Delete key on your keyboard. Hover your mouse over the Level 1 resource. Then click and drag your mouse to Return Call.
Rename your model to Lesson 1 – Solution 3 by clicking the Save As option on the File menu.
After simulating this new model, you’ll notice the cycle time for Difficult calls has dropped to 105.35 minutes.
10. What was the Average Contents value for the Perform Research input queue?
11. How long did the average Difficult call wait in the input queue of the Perform Research activity?
12. What was the utilization for each resource?
Solution 4 – Alternate Resource
Another alternative is to cross train Level 1 workers so they can help doing research when they aren’t busy taking calls. Remember though, taking calls is their first priority. So if they are doing research and a call comes in, they need to stop doing their research in order to take the call.
Open the Lesson 1 – Call Center model. Hover the mouse over the Level 1 resource. Click and drag your mouse to the line connecting Level 2 to Perform Research.
To place priority on taking calls, click the connector between Level 1 and Take Call. Then click the Respond Immediately check box.
Save this file as Lesson 1 – Solution 4. Then simulate the model.
The average cycle time for Difficult calls is now 53.95 minutes. Remember, management wanted that time to be under 60 minutes.
13. Several alternatives were tried in this lesson. Given your knowledge of business, which solution do you think makes the most sense?
14. What other solutions would you suggest? Do you see how ProcessModel can help you determine the best alternative?
15. Go back to the original call center model. Review the operating parameters and requirements outlined at the beginning of this lesson. Double click each object in the model to locate each of those parameters.
Continue with Lesson 2: Replications and Distributions
Lesson 2: Replications and Distributions
Distributions (which we will discuss later in this lesson) are a method used in ProcessModel to introduce randomness or variability. For example, you may have an activity where the processing time is not always constant due to the type of entity being processed, or a variety of other reasons. To understand the importance of running replications in a simulation, suppose you have 3 consecutive activities, each having a distribution in their time fields. Because the values for those times are chosen randomly each time an entity enters the process, it is possible that the majority of the times chosen will be at the low end of the range of possible values. That, of course, would give a lower than expected overall cycle time for your process. Now let's say you delete one of the activities. You would expect the average cycle time to drop because there is one less activity. However, because of the randomness of the distributions, the two remaining activities could return many time values at the high end of the range. In that case, the average cycle time would actually go up instead of down. Even though this is an extreme example, it demonstrates the nature of randomness. Replications allow you to run a simulation multiple times, generating different random values for each replication. Then, by taking an average of those replications, your results are much more realistic and repeatable.
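The idea above can be illustrated with a small Python sketch (this is a toy model, not ProcessModel itself): each replication draws its own random times, so a single replication can mislead, while the average across many replications is stable.

```python
import random
import statistics

# Toy illustration of why replications matter. Three consecutive
# activities each draw a random service time per entity; one replication
# alone can land high or low, but averaging 30 replications is stable.

def one_replication(rng, entities=50):
    times = []
    for _ in range(entities):
        # three activities, each with a triangular time (min 2, mode 10, max 30)
        cycle = sum(rng.triangular(2, 30, 10) for _ in range(3))
        times.append(cycle)
    return statistics.mean(times)

rng = random.Random(42)
replications = [one_replication(rng) for _ in range(30)]

print(f"Single replication average: {replications[0]:.2f} minutes")
print(f"Average of 30 replications: {statistics.mean(replications):.2f} minutes")
```

Running this repeatedly with different seeds, the single-replication number bounces around far more than the 30-replication average, which is exactly why ProcessModel reports the average and standard deviation across replications.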
When using replications, you can view the minimum and maximum results for each section in the output detail report. Run your replications, then on the output report's filter tab, click Statistics and check the Min – Max checkbox.
The min and max values will be displayed in each of the report sections similar to that shown below.
Open the last model used in Lesson 1, Lesson 1 – Solution 4. Click on the Simulation menu and select Options.
You’ll remember that the simulations in Lesson 1 ran for 40 hours. The Options menu is where the simulation run length is determined. Warmup length is the amount of time you want to run your model before collecting statistics (used, for example, to preload a production line). Replications determines the number of times a simulation is repeated before displaying the results.
Turning off the animation in the Options window will make the simulation run much faster.
Let’s see what happens when you set Replications to 30 and turn off the animation and the scoreboard. Click Close, simulate the model, click No when asked to view the results, then click the output report button on the toolbar to view the output report.
Go to the Entity Summary tab on the General Report.
The results of each replication are shown in the first 30 rows. The last two rows show the Average and Standard Deviation for all replications. Note that the average cycle time across the replications is 68.95 minutes, which is above the required 60 minute maximum specified by management.
A distribution could be defined as a set of values which specify the relative frequency with which an event has occurred or is likely to occur. In the real world, events tend to occur randomly, according to certain statistical patterns or distributions. Distributions allow you to add randomness or variability to your model in order to make it more accurately reflect reality.
Distributions are of two major types: discrete and continuous. Discrete distributions contain a finite number of possible outcomes. Two examples would be an entity randomly routing to one of several locations, or a randomly generated quantity of arriving entities. Continuous distributions contain an infinite number of possible outcomes. Examples include generating service times or the time between arrivals.
The following represent only a few of the commonly used distributions in ProcessModel. For a complete list of available distributions you can open the Stat::Fit program from ProcessModel’s Tools menu.
|Triangular||T(a, b, c, s)||a = minimum, b = mode, c = maximum. Example: T(2, 10, 15)|
|Uniform||U(a, b, s)||a = mean, b = half range|
|User-defined (or discrete)||Dn(%1, x1, . . . , %n, xn)||% = percentage of time the value is returned, x = value to be returned, n = number of return values (between 2 and 5). Example: D3(20, 35, 30, 45, 50, 37.5)|
The following diagrams show what the Triangular and Uniform distributions explained above would look like.
Triangular – Unload time for trucks could be as little as 7 minutes, is typically 18, but could be as long as 45 minutes. To model this example, you would enter T(7, 18, 45) in the Time field at the Loading Dock activity.
Uniform – Every day you have anywhere between 50 and 150 customers come to your store. To model this example, you would enter U(100, 50) (a mean of 100 with a half range of 50) in the Quantity field of your daily pattern arrival.
User-defined – One of two types of prep need to occur on each part entering your process. 34% of the time the prep takes 8 minutes. 66% of the time it takes 14 minutes. To model this example, you would enter D2(34, 8, 66, 14) in the Time field of the Prep activity.
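The three worked examples above can be mimicked in Python to see what kinds of values each distribution produces. ProcessModel evaluates these expressions internally; the sketch below is only an illustration, and note that Python's `random.triangular` takes its arguments in the order (low, high, mode) rather than (min, mode, max):

```python
import random

rng = random.Random(1)

# T(7, 18, 45): truck unload time, minimum 7, mode 18, maximum 45 minutes.
# random.triangular takes (low, high, mode), so the mode goes last.
unload = rng.triangular(7, 45, 18)

# U(100, 50): ProcessModel's uniform is (mean, half range),
# so values fall between 100 - 50 = 50 and 100 + 50 = 150 customers.
customers = rng.uniform(100 - 50, 100 + 50)

# D2(34, 8, 66, 14): 34% of the time return 8 minutes, 66% return 14.
prep = rng.choices([8, 14], weights=[34, 66])[0]

print(f"Unload time: {unload:.1f} min, customers: {customers:.0f}, prep: {prep} min")
```

The argument-order difference is worth remembering: in ProcessModel the mode is the middle parameter of T(a, b, c), which is not how every tool orders triangular parameters.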
Close any currently open model and open the Lesson 1 – Call Center model. Double click the Perform Research activity and enter T(10,20,30) in the Time field.
Adding the time distribution will cause the time to vary between 10 and 30 minutes, with most times being around 20 minutes, for each entity that enters this activity.
Double click the arrival route (line between the Call entity and the Take Call activity). Enter D2(90,1,10,2) in the Quantity field. Then enter T(2,5,12) in the Repeat Every field.
90% of the time, a quantity of 1 call will arrive. 10% of the time, 2 calls will arrive together. Calls will arrive between 2 and 12 minutes apart, with the majority of arrivals occurring around 5 minutes apart.
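To see what this modified arrival stream looks like, here is a hedged Python sketch of twenty arrival events (ProcessModel generates these itself; this just approximates the same logic):

```python
import random

rng = random.Random(7)

# Sketch of the modified arrival: interarrival times from T(2, 5, 12)
# (min 2, mode 5, max 12 minutes), quantity from D2(90, 1, 10, 2)
# (90% of arrivals bring 1 call, 10% bring 2).

clock = 0.0
arrivals = []  # list of (arrival time in minutes, quantity) pairs
for _ in range(20):
    clock += rng.triangular(2, 12, 5)               # (low, high, mode)
    qty = rng.choices([1, 2], weights=[90, 10])[0]  # D2(90,1,10,2)
    arrivals.append((round(clock, 1), qty))

print(arrivals[:5])
```

Notice that the average gap is no longer a constant 5 minutes, and occasionally two calls land at once, which is exactly the variability the distributions are meant to introduce.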
Set this model to run 10 replications by clicking Simulation and selecting Options. Save this model as Lesson 2, then simulate it.
1. What is the average and standard deviation for the Level 2 resource?
2. What is the average and standard deviation for the Difficult calls?
3. Though you don’t see a typed distribution, where do you find randomness or variability in the Lesson 1 – Call Center model?
4. Think of a real world situation where you would use a triangular distribution. What questions would you ask to obtain the parameter values you need?
Though you can use estimates of what your randomized data values will be at any given time, it is always best to use real, measured data to generate reliable results. That is where Stat::Fit comes in (available on the Tools menu). Stat::Fit is a statistical tool which allows you to create statistical distributions based on actual measured data.
Let’s look at one example of why it is important to use Stat::Fit. One distribution which is fairly easy to understand is the Triangular distribution. It has 3 parameters: a minimum value, a mode (most likely) value, and a maximum value. For example, suppose we have an activity that takes a minimum of .25 minutes, usually takes 2 minutes, but could take as long as 100 minutes. That is easily written as T(.25, 2, 100). However, if the data should actually be represented by a Lognormal distribution, the graph below shows where much of the time you could be getting wrong values if you used a Triangular distribution instead.
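A quick numeric comparison makes the point. The lognormal parameters below are assumed for illustration only (they are not from the guide's data set); the triangular parameters are the T(.25, 2, 100) from the example above:

```python
import random
import statistics

rng = random.Random(11)

# Illustrative comparison with ASSUMED lognormal parameters: if the real
# process is lognormal (most values near 2 minutes, a long thin tail),
# the "easy" triangular T(.25, 2, 100) grossly inflates typical values,
# because a triangular distribution spreads probability evenly toward
# its maximum.

lognormal  = [rng.lognormvariate(0.7, 0.8) for _ in range(10_000)]
triangular = [rng.triangular(0.25, 100, 2) for _ in range(10_000)]  # (low, high, mode)

print(f"Lognormal mean:  {statistics.mean(lognormal):.1f} minutes")
print(f"Triangular mean: {statistics.mean(triangular):.1f} minutes")
```

Under these assumptions the triangular approximation produces a mean of roughly 34 minutes against a lognormal mean of under 3, so every downstream statistic in the model would be distorted.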
So taking the easy route (or using the easy distribution) may cost you in reliable results. Using collected data and Stat::Fit gives you confidence that you can depend on your output accuracy when introducing variability into your model.
For the next portion of our lesson, suppose you have collected data for the Perform Research activity, but you don’t know what distribution to use or how to get that data into your model.
Close your current model and open Lesson 1 – Call Center. Click the Tools menu and select Stat::Fit. In the Document1: Data Table window, click in the column to the right of the number 1.
Enter the following numbers in the Data Table (you can select, copy and paste this data): 25 18 19 12 25 26 20 15 21 20 22 23 15 14 25 26 19 23 15 17 19 22 18 24 20
If you already have your data in another application like Excel, you can copy and paste your data points rather than typing them manually.
After entering your data, click the Auto::Fit button, select the unbounded option, and click OK.
To view the graph of any of the distributions simply click the distribution name on the left.
Load the Weibull distribution into the Perform Research activity by clicking Stat::Fit’s Export button, then click OK.
Exit Stat::Fit and double click the Perform Research activity. Highlight the 20 in the Time field. Right click your mouse and select Paste. This will paste the Weibull distribution you created in Stat::Fit into your model.
Continue with Lesson 3: Entity Arrivals
Lesson 3: Entity Arrivals
Entities are the things that move through your process. They include things like phone calls, patients, customer orders, raw materials, service requests, parts, etc. Entities come into your model through Arrival routes (drawn with double headed arrows).
Close any open models and open Lesson 1 – Call Center. Double click on the arrival route connecting the Call entity to the Take Call activity.
A single Call arrives every 5 minutes. Using a Periodic arrival with a First time value of 0, the first call arrives immediately as the simulation begins.
Continuous – Entities will enter the model as soon as there is capacity to accept them at the first location (activity or storage) connected to the arrival. If the first activity has an input queue, it is ignored.
Daily Pattern – Entities arrive in a weekly (168 hours) repeating pattern, at times and quantities you specify on a given day of the week. If the start time and end time are different, the entities will arrive in a randomly distributed fashion during that block of time. The Quantity field may be a constant, distribution, or scenario parameter.
Ordered – Entities arrive according to the parameters established in the Order Signal route connected to it. This means a request must be made by an event downstream from the ordered arrival rather than from the arrival itself.
Periodic – Entities arrive at the interval indicated in the Repeat every field, in the quantity specified. Since all simulations start at midnight on Monday morning, that is when the first arrival occurs, unless you specify a value in the First time field in which case the arrival will be delayed by that amount of time. Each field may be a constant, distribution, or scenario parameter.
Scheduled – Entities arrive according to a schedule you define using a numbered day (e.g. 1st Monday, 3rd Thursday). Each arrival is a one-time occurrence which the system does not repeat. The Quantity field may be a constant, distribution, or scenario parameter.
Daily Pattern Example
Let’s create a daily pattern arrival in which a quantity of 48 entities arrives between 8am and noon, then another 48 arrive between noon and 4pm.
Double click on the arrival route connecting the Call entity to the Take Call activity. Change the arrival Type to Daily Pattern and click Define Pattern. Then click New. In the Start time field enter 8:00 am. In the End time field enter 12:00 pm. In the Quantity field enter 48.
Click New and repeat the process for 12:00 pm to 4:00 pm.
Now copy the Monday pattern to Tuesday through Friday by clicking the Copy day button. Click Tuesday and then click Paste Monday. Repeat for Wednesday through Friday. Then click Close.
Each day, Monday through Friday, is divided into two blocks of 8am to noon, and noon to 4pm. In each of those 10 blocks a quantity of 48 calls will arrive, randomly distributed through each block.
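The phrase "randomly distributed through each block" can be pictured with a short Python sketch of a single 8am-to-noon block (ProcessModel does this placement itself; the sketch only shows the idea):

```python
import random

rng = random.Random(3)

# One daily-pattern block: 48 calls placed at uniformly random times
# between 8:00 am (minute 480 of the day) and 12:00 pm (minute 720).

block_start, block_end = 8 * 60, 12 * 60
arrival_times = sorted(rng.uniform(block_start, block_end) for _ in range(48))

print(f"{len(arrival_times)} arrivals, first at minute {arrival_times[0]:.1f}, "
      f"last at minute {arrival_times[-1]:.1f}")
```

Because the 48 arrivals are scattered rather than evenly spaced, some stretches of the morning will be busier than others, which is more realistic than a call arriving exactly every 5 minutes.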
Daily pattern arrivals (as well as scheduled arrivals and shifts) use a 24 hour clock. Up to now we have only been concerned with simulating straight working hours. As you will see shortly, this will impact how you set up your models.
Save this model as Lesson 3 and then simulate it.
1. What is the average cycle time for Difficult calls?
2. What is the utilization of the Level 2 resource?
3. Why has the utilization decreased?
4. Why did the animation stop during the simulation?
5. How many days were simulated?
6. How do you make the simulation run for an entire week?
7. If you change the arrival so all 96 entities arrive between 8:00am and 8:01 am, Monday through Friday, how does this change the results and why?
Continue to Lesson 4: Routings
Lesson 4: Routings
Routings are lines that connect the locations (activities, storages, etc.) in the model. Ten different kinds of routings allow for great flexibility in how entities move through a model. These routing types include Percentage, Create, Attach, Conditional, Alternate, Detach, Else, Ordered, Pickup, and Renege.
See The Logic of the Flow video tutorial on routing.
In the Lesson 1 – Call Center model there is only 1 routing type, the 3 percentage routes.
Note: The numbers 25% and 75% shown on the routes above are simply text labels used for clarity and have no effect on the actual percentages used.
These routes cause a specified percentage of the entities leaving a location to select that route when moving to the next location. With the exception of Create routes, no other routing type may be combined with percentage routes leaving a location. The sum of all percentage routes at a given location must equal 100%.
This is the time it takes the entity to move across the route to the new location. Any expression is valid.
The percentage of entities that should use this route. Only constants are allowed in this field.
This optional field allows you to rename an entity. If you have placed an icon in your model for this new entity, its name will appear in the drop down list and the name should be selected from that list rather than typed in. If no new icon exists, you may type a new name and the previous entity’s icon will continue to be used.
Unique statistics are kept for each entity named in the model. Thus, in the call center example, ProcessModel shows different statistics for Call and Difficult. When renaming an entity, all attributes and statistics are inherited from the previously named entity.
Since cycle time, the time an entity spends in the model, can only be accurately reported if the entity has left the model, stats are not collected until that time.
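The percentage routing described above can be sketched in Python as a weighted random choice. Route names here follow the call center example, and ProcessModel performs this selection internally; the sketch only illustrates the mechanism:

```python
import random

rng = random.Random(5)

# Each entity leaving Take Call independently picks a route:
# 75% resolve on the phone, 25% go on to Perform Research.
# "Resolved" is a hypothetical label for the exit route.

routes = ["Resolved", "Perform Research"]
counts = {r: 0 for r in routes}
for _ in range(200):
    chosen = rng.choices(routes, weights=[75, 25])[0]
    counts[chosen] += 1

print(counts)
# With 200 calls you would expect roughly 150 resolved and
# roughly 50 needing research, varying from run to run.
```

This also explains why results differ slightly between replications even in the original model: the percentage split is random per entity, not an exact quota.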
This route is used when you want to create additional entities to be processed. Often this route is used for parallel processing such as a customer placing an order which generates additional tasks such as data entry, credit approval, etc. In the example below the Truck entity would be considered the parent entity and the Pallet entity would be the child.
Create “after activity” or “before activity”
The new entity can be created before processing at the activity from which it originates, or after activity processing is complete.
This is the time it takes the entity to move across the route to the new location. Any expression is valid.
This is the number of entities to be created (0 – 9999). Any expression is valid.
Created Entity Name
The new entity must have a name given to it. If you have placed an icon in your model for this new entity, its name will appear in the drop down list and the name should be selected from that list rather than typed in. If no new icon exists, you may type a new name and the previous entity’s icon will continue to be used.
Attach routes are typically used in conjunction with Create routes. This route will attach one or more entities, known as child entities, to another single entity, known as a parent entity, which is waiting at the attach location. Once attached, the combined group of entities continues through the model as a single unit, with the parent entity’s graphic and attributes. Attributes of the child entity are masked by the parent and not accessible unless the entities are later detached.
It is best to either hold the attaching child entities in a storage, or make sure the child activity has an output queue of sufficient size to hold all waiting entities. That way, if the parent and children somehow get out of sequence, the attach route won’t keep other waiting child entities from getting past the entities first in line waiting for their parent.
This is the time it takes the entity to move across the route to the new location. Any expression is valid.
This indicates the number of child entities to attach to the parent entity. Any expression is valid.
The word All can be a useful attach quantity. Let’s talk about two examples.
1. You have created a variable number of child entities using a Create route. You then want to attach all the children which that parent created.
2. You have a varying number of test samples waiting to be picked up by a technician who does a collection run every 2 hours.
The child entity may be attached to any entity or to the entity that created it.
Install the model package Lesson 4 – Create and Attach model. Double click on the create route between Lab and Document Results.
A patient comes into the hospital for a lab test. After taking the test, he (the parent entity) takes the 100% route to wait for his results at Release Patient. The test results are created and move to Document Results across the Create route, getting their new name of Results. The results then attach back to the entity that created them (the Patient). After all, we don’t want the patient getting the wrong test results. Through the attach route, the patient receives his results and he is released.
As soon as the results have been documented, another create occurs, creating the Notification entity which goes to the doctor.
Double click the attach route between Document Results and Release Patient and review the settings for that route.
Conditional routes determine which route is selected based on a Boolean expression in the route’s Condition field.
Conditions can contain any Boolean expression and typically include at least one entity attribute. User defined variables may also be used. In the example above, the value of the attribute a_Priority is checked. If the value is 1, then the route is selected.
Conditional routes are tested in the order in which they were drawn. When the first true condition is found, that route is selected. The condition may be blank if you wish. But always make sure the route with a blank condition is the last one drawn since a blank condition is always true and no other routes will be tested.
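The evaluation order described above can be sketched in Python. The route names are hypothetical; only the attribute a_Priority comes from the example in the text:

```python
# Sketch of conditional route evaluation: routes are tested in the order
# drawn, the first true condition wins, and a blank condition (always
# true) should be drawn last so it acts as a catch-all.
# "Expedite", "Standard", and "Default" are made-up route names.

def select_route(a_Priority):
    routes = [
        ("Expedite", lambda: a_Priority == 1),  # drawn first
        ("Standard", lambda: a_Priority == 2),  # drawn second
        ("Default",  lambda: True),             # blank condition, drawn last
    ]
    for name, condition in routes:
        if condition():
            return name

print(select_route(1))  # Expedite
print(select_route(5))  # Default
```

If the blank-condition route were drawn first instead of last, every entity would take it and the other conditions would never be tested, which is the pitfall the paragraph above warns about.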
This route is used to select an alternate location when the primary location doesn’t have capacity to accept the incoming entity. The route is drawn by clicking on the Connector Line Tool (from the Toolbox on the left edge of the modeling window), then drawing a line from the primary route to the alternate location.
A detach route causes entities that have been previously attached to the current entity to be detached and routed to the connected activity.
The Else route is used in conjunction with an Attach or Pickup route and is selected if the attach or pickup cannot be executed because the associated parent or child entity cannot be found.
The Ordered route causes entities to wait until an Order Signal is received. The order signal contains two pieces of information: the size of the order to be released, and the reorder level. When the contents of the activity or storage from where the signal originates falls below the reorder level specified, the order is placed, releasing the designated quantity of entities. An order signal is displayed as a dashed line and is connected on the arrow end to the ordered route.
The Pickup route functions in a similar way to the Attach route. However, the Pickup route causes the parent entity to wait for the child entity to arrive at the attaching location, whereas the Attach route causes the child entities to wait. Also, the Pickup route attaches only a single child entity to the parent, since no Quantity field is included.
The Renege route causes entities which have been waiting in an activity’s input queue longer than an allotted time to leave the input queue without being processed at that activity and follow the renege route. The renege route will not pull an entity out of the activity itself; it only works from the input queue.
All Routes Sample
Install the model package Lesson 4 – All Routes model. Simulate it and view each of the routing types and how they work. The red routes are all 100% routes.
The Document entity is created by the Create route leaving the Process1 activity. Entities following the Renege route on the upper right side of the model are renamed to Item2. The attribute Att1, used in the Conditional routes, is assigned its value using action logic found on the Action tab of the Process11 activity.
Continue to Lesson 5: Attributes, Variables and Action Logic
Lesson 5: Attributes, Variables and Action Logic
Attributes are placeholders associated with an individual entity; an attribute’s value cannot be seen or changed by any other entity. An attribute describes an entity, such as its name, size, color, type, condition, etc. Attributes can be either numeric values or character descriptors (single word descriptions). Attributes follow an entity as it moves through a model and can’t be changed globally by events that occur elsewhere in the model. All entities have 5 system defined attributes: Name, Cost, ID, VATime, and CycleStart. You may also create user-defined attributes. Attributes are often used to control action logic or to make routing decisions in the model. User-defined attributes are not displayed in the output reports. It is recommended that you begin all attribute names with “a_”. For example, a_Type, a_Color, a_Weight. That way they are easily recognized when reading your action logic.
Variables are placeholders for either numeric values or character descriptors (single word descriptions). They may be used to describe or track activities and states in the system such as the number of entities that have completed a particular activity, day of the week, hour of the day, etc. Variables are global in nature. That means any entity can view them or assign them a value in the Action tab of the properties dialog at any location. Variables are shown in the output reports and may be used to export modeling results to Excel. It is recommended that you begin all user-defined variable names with “v_”. For example, v_Count, v_Completed, v_BatchSize. That way they are easily recognized when reading your action logic.
ProcessModel allows you to design custom behavior in your model by writing simple but powerful logic statements. Action logic allows you to go beyond the normal property fields to determine how an entity is processed. Examples would include assigning values to attributes or variables, or performing a test using an If…Then statement.
Creating Attributes and Variables
You create an attribute by clicking the Insert menu and selecting Attributes & Variables. Then click New and enter the attribute name.
To create a variable, click the Insert menu and select Attributes & Variables. Click the Global Variables tab, click New, and enter the variable name.
Customizing with Action Logic
Let’s begin customizing our model behavior by adding a variable which will track the number of entities in our model at any given time.
Open the Lesson 1 – Call Center model. Create a variable by clicking the Insert menu and select Attributes & Variables. Click the Global Variables tab. Click New and enter the name v_Count in the Name field.
Next we need to write some action logic to use the v_Count variable.
Double click the arrival route (between the Call entity and the Take Call activity). Click the Action tab. In the window on the left, type Inc. That is a system statement which means increment. Then click the Filter drop down list on the right and select the Variable option. This will display the names of all variables you have created using the Insert / Attributes & Variables option. With v_Count highlighted, click the Paste button.
Even though you can just type v_Count rather than selecting it from the filter list, selecting from the list eliminates typing errors, and can save time when you have defined many variables and attributes but have trouble remembering exactly how you spelled each one.
Inc v_Count is equivalent to writing v_Count = v_Count + 1, just shorter.
Now that we are increasing v_Count each time an entity enters the model, we need to decrease or decrement v_Count when each entity leaves.
Double click the 75% route leaving Take Call, then click the Action tab. Repeat the steps you used to create the action logic on the arrival route, except use Dec (for decrement) instead of Inc.
Since entities also leave the model after the Return Call activity is complete, we need to add an exit route and add the needed action logic.
To save a little typing time, click and drag your mouse over the Dec v_Count action logic. Then hold down the Ctrl key on your keyboard and press C to copy the text. Now add an exit route to the Return Call activity by hovering your mouse over the Return Call activity. Then drag your mouse to the right. Click the Action tab. Click your mouse in the Action window. Hold down the Ctrl key on your keyboard and press V to paste the action logic from the Windows clipboard.
To get a longer look at the process, let's increase the simulation run length to 100 hours.
Click the Simulation menu and select Options. Change the Run Length to 100 hours.
Click Close. Now save your model as Lesson 5 – In-Process Count. Simulate the model and view the results. In order to create a Time Series plot of the counter variable, press the icon (Generate a time series plot) in the report toolbar.
In the Value window, double click the v_Count Value History item to add it to the Selected Values window on the right. Then click OK.
We created a variable to track the number of entities in the system at any given time. We then added action logic to increment that variable each time an entity entered the model, and used similar action logic to decrement the variable at each location where entities left the model. By using the Time Series Plot option in the output report, we now can see the trend of in-process entities. The increasing slope of the plot shows us that more entities are arriving in the system than it is able to process.
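Conceptually, Inc and Dec maintain a running census of entities in the system. The following Python sketch is a stand-in for the ProcessModel logic, not its actual syntax; the arrival and exit probabilities are made up purely for illustration. It shows how incrementing on entry and decrementing on exit produces a value history like the data behind the Time Series plot:

```python
import random

random.seed(1)
v_count = 0        # mirrors the global variable v_Count
history = []       # value history, like the data behind the Time Series plot

# Toy event stream: arrivals are a bit more likely than departures each
# minute, so the in-process count tends to climb, as in the lesson.
for minute in range(100):
    if random.random() < 0.6:                  # an entity enters: Inc v_Count
        v_count += 1
    if v_count > 0 and random.random() < 0.4:  # an entity exits: Dec v_Count
        v_count -= 1
    history.append(v_count)

print(history[-1])  # the in-process count at the end of the run
```

An upward drift in `history` corresponds to the increasing slope seen in the Time Series plot: entities are arriving faster than the system can process them.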
To learn about all the functions and statements allowed in ProcessModel, see Section 3.11 of the ProcessModel User’s Guide.
Let’s try adding a second Level 2 resource to see what happens to the plot.
Double click the Level 2 resource. In the Quantity field, enter 2. Save the model as Lesson 5 – In-Process Count 2. Then simulate the model and generate the time series plot as you did before.
Output Summary Report
A business decision is seldom made on a single factor such as in-process inventory. Consider some of the other things you need to evaluate before making a decision about the success of your change. Let’s look at another output report, the Output Summary.
Return to the modeling window by closing the Output Detail report. Then click the icon in the ProcessModel toolbar, or click the View menu and select Output Summary. Click the Total Cost option on the left. Review each of the reports to see how the summary information is presented.
The Output Detail report (standard report shown following a simulation) uses an ABC costing method which only shows direct value added costs. To see the total cost of the simulation you must view the Output Summary report which includes all overhead costs such as unused resource costs. That total cost is then distributed across all entities leaving the process. Therefore, the average entity cost shown in the Output Detail report may be different than the cost shown in the Output Summary report.
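To see why the two reports can disagree, here is a small arithmetic sketch. The figures below are hypothetical, chosen only to illustrate the two costing views, not taken from the lesson model:

```python
# Hypothetical numbers, chosen only to illustrate the two costing views.
entities_exited = 100
va_cost_per_entity = 4.00        # direct value-added cost (Output Detail view)
unused_resource_cost = 150.00    # overhead, e.g. idle resource time

# The Output Summary view spreads ALL costs over the exiting entities.
total_cost = entities_exited * va_cost_per_entity + unused_resource_cost
avg_cost_summary = total_cost / entities_exited

print(avg_cost_summary)  # 5.5 -- higher than the 4.00 direct cost per entity
```

The gap between 4.00 and 5.50 is exactly the overhead (such as unused resource cost) that the ABC method in the Output Detail report leaves out.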
1. Is it better from an in-process inventory perspective to have a 2nd Level 2 resource?
2. What are the factors involved in deciding if it was cost effective to add a 2nd resource?
3. Do you feel it is cost effective to add a 2nd resource?
4. Are there better alternatives?
There are many functions and statements that can be used in ProcessModel to achieve a high degree of customization. Let’s just look at a couple of other examples.
Install the model package Lesson 5 – Route Distribution.
The Process2 activity utilizes 2 entity attributes in action logic to distinguish between the two entity types and route them accordingly. NAME is a system-defined attribute given to all entities. In this model, a_Route is a user-defined attribute used in conjunction with the Conditional routes exiting Process2. All Item2 entities are to be routed to Process3 and are assigned an a_Route value of 1. The condition of the route leading to Process3 requires that a_Route must equal 1, therefore all Item2 entities will select that route. If the entity name is not Item2, a user-defined distribution is used to assign the a_Route value: 40% of those entities are routed to Process4, and 60% to Process5.
Remember that a Percentage route must use a numeric constant in the Percent field. Distributions, attributes, and variables may not be used. However, if your routes are selected based on a percentage which may vary, Conditional routes can be used instead. The model above is only one example of using randomness in a percentage route selection.
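The routing decision above can be sketched in Python. This is a conceptual stand-in for the Process2 action logic, not ProcessModel syntax; the route codes (1, 2, 3) are arbitrary labels standing for the conditions on the routes to Process3, Process4, and Process5:

```python
import random

def assign_route(entity_name, rng=random):
    """Mimics the Process2 action logic: Item2 always gets route 1;
    all other entities draw from a user-defined 40/60 distribution."""
    if entity_name == "Item2":
        return 1                          # a_Route = 1 -> route to Process3
    # user-defined distribution: 40% -> Process4 (2), 60% -> Process5 (3)
    return 2 if rng.random() < 0.40 else 3

rng = random.Random(7)
routes = [assign_route("Item1", rng) for _ in range(10000)]
share_process4 = routes.count(2) / len(routes)
print(round(share_process4, 2))  # close to 0.40 for a large sample
```

Because the distribution sets an attribute that the Conditional routes test, the effective split can be driven by any expression, which is exactly what a fixed Percentage route cannot do.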
Rework is a common part of just about every industry. This next example demonstrates the concept that each time an entity goes through rework, the odds of it needing additional rework diminishes.
Install the model package Lesson 5 – Rework Loop.
When a Card arrives, the a_LoopCount attribute is initialized to a value of 0. At Inspect, that attribute is incremented in order to track how many times the entity has been inspected. The 1st time, there is a 50% chance it will require rework and the a_OK attribute will be set to 0. If the Card is ready to ship, a_OK is set to 1. The 2nd time through the loop, 80% of the time it is ready for shipping. The 3rd time, 90% of the time it is ok. Cards are always ok the 4th time through.
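The rework loop can be mimicked in Python to see the diminishing-odds behavior. This is a conceptual sketch, not ProcessModel syntax; only the 50/80/90/always-OK probabilities come from the model description:

```python
import random

# Chance a card is OK, keyed by inspection pass number: 50%, 80%, 90%,
# and always OK from the 4th pass on.
ok_probability = {1: 0.50, 2: 0.80, 3: 0.90}

def inspect_card(rng):
    """Returns how many times one card was inspected before shipping."""
    a_loop_count = 0                                  # initialized on arrival
    while True:
        a_loop_count += 1                             # Inc at Inspect
        p_ok = ok_probability.get(a_loop_count, 1.0)  # 4th pass: always OK
        if rng.random() < p_ok:                       # a_OK = 1: ship it
            return a_loop_count
        # otherwise a_OK = 0 and the card loops back through rework

rng = random.Random(42)
passes = [inspect_card(rng) for _ in range(10000)]
print(sum(passes) / len(passes))  # average inspections per card
```

Working through the probabilities by hand, the expected number of inspections is 1(0.5) + 2(0.4) + 3(0.09) + 4(0.01) = 1.61, which a large sample should approach.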
If you need to execute multiple lines of action logic based on the result of an If…Then statement, you must enclose those statements within a Begin…End block. For example (the counter variable here is just an illustration):
If a_PatientType = Critical Then
Begin
Get Nurse And Doctor
Inc v_CriticalCount
End
5. In the Route Distribution model, if you changed the condition on the route leading to Process3 to “Name = Item2”, how would you change the action logic in Process2 to make the model give the same results as it does now?
6. Since user-defined attributes are not shown in the output report, how can you report on the number of times cards were reworked?
Continue to Lesson 6: Shifts
Lesson 6: Shifts
How They Work
By assigning a shift to a resource or activity, you create a fixed block of time when that resource or activity is available to work on entities. Shift files are created using a shift editor which can be called from the Shift tab of the Properties Dialog box for resources and activities. You can also create a default shift file by clicking the Simulation menu, selecting Options, then clicking the Files tab. Individual shift files assigned to a resource or activity will override a default shift file. Though you can assign a shift to either a resource or an activity or both, scheduling of activities is typically controlled by the shift of the resource working at that activity.
Once you assign a shift file in your model (or use Daily Pattern arrivals), you need to remember your simulation run length is now simulating a 24 hour clock. For example, a 40 hour run length no longer represents a week’s worth of working hours. 40 hours into a simulation now represents Tuesday at 4pm since all simulations start Monday morning at midnight.
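The hour-to-calendar arithmetic is easy to get wrong, so here is a small Python sketch of the conversion described above (the function name is made up for illustration):

```python
# All simulations start Monday at midnight, so elapsed simulation hours
# map directly onto a weekly calendar.
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def sim_clock(elapsed_hours):
    """Convert elapsed simulation hours into (day of week, hour of day)."""
    day = DAYS[(elapsed_hours // 24) % 7]
    hour = elapsed_hours % 24
    return day, hour

print(sim_clock(40))   # ('Tuesday', 16) -> Tuesday at 4pm, as in the text
print(sim_clock(113))  # ('Friday', 17)  -> 5pm Friday, the end of a work week
```

This is why a shift-based model of a Monday-to-Friday week needs a run length of roughly 113-120 hours rather than 40.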
By default, a resource on a shift file will work overtime if an entity is currently being worked on when the shift ends. If you want the resource to stop working and go off shift, you must check the option on the resource’s Shift tab to Interrupt current activity to go off shift or on break.
Let’s load our first model and begin working with shifts. Remember that the simulation run length is currently 40 hours. That represents five 8 hour days in straight work time. When we add shifts to the model we will need to adjust the simulation time to a 24 hour day.
Creating a Shift File
Open Lesson 1 – Call Center. Double click the Level 2 resource, then click the Shift tab. Open the shift editor by clicking the Create shift file button.
Create an 8 to 5 shift by clicking and dragging your mouse on the Mon line from 8am to 5pm. Don’t worry if you didn’t get it exactly right. Just click on the bar you drew, then adjust the starting and end times using the Begin Time and End Time adjustments at the top of the window.
Make sure the Monday shift is not selected by clicking the gray area around the time grid. Now define a lunch break from noon to 1pm. Click the Define Break icon in the tool bar. Then click and drag your mouse from noon to 1pm on the Monday shift you just created.
Click the 9 to 5 button next to the coffee cup icon to switch back to Define Shift mode. Click the Monday shift to select it. Click the Duplicate Day button to copy the Monday shift. Hold down the Shift key on your keyboard and click anywhere in the Tue timeline to paste the Monday shift. Repeat for Wed, Thu, and Fri.
Save your shift file as 8 to 5 and close the shift editor. Then click the Browse button and select the shift file.
Save your model as Lesson 6 – Shifts. Adjust the simulation run length to at least 120 hours (5pm on Friday). Assign a shift file to the Level 1 resource (either the same 8 to 5 shift or create a new one). Create a Daily Pattern arrival. Simulate your model.
1. Does your model have capacity to handle the calls the way you have set them up?
2. Think of ways you could adjust your model to handle the load put on it.
Now is a good time to discuss model packages, since you have now specified a path to your shift files. As we have mentioned already, your model is made up of several files, now including your shift file. Moving your model to a new location on your hard drive, or sending it to someone else, is easy because of the package creation function.
Click File and select Save.
Note the files listed. These are the files required to run your model. You could send your model by e-mail to another ProcessModel user. However, when they open the model, the shift file will be pointing to your hard drive location and they will get an error saying it can’t find the shift file.
By creating a package of your model and sending the package file rather than the individual files, the other user can install the package rather than just opening the .igx file. When they click File and select Install Package, the system will automatically remap the location of the shift file to the location selected by the package installer. The same applies when simply moving your model to a new location on your own hard drive: when installing the package, a new shift file is copied and its location is remapped for you.
A model package is also a good way to backup your model for future retrieval. Just make sure you save the package with an appropriate name, uniquely identifying it. Helpful things to include in the name are the date, model builder, function of the model, etc.
Important! Be aware of one important issue when naming your files. With version 5.x and earlier, you cannot use periods in a model filename (e.g. My Model 1.5.3). You can use spaces, hyphens, or other delimiting characters, but not periods.
3. Model packages are in a .zip format. Is it ok to use WinZip to extract model files rather than installing them from the File menu?
Continue to Lesson 7: A More Complex Call Center
Lesson 7: A More Complex Call Center
Our call center is quite simple. Though our next example is still fairly simple, let's consider some additional issues such as the number of phone lines available, how long a customer is willing to wait on hold, alternate routings, and additional action logic.
Another Call Center
Install the Lesson 7 – Call Center package.
Let’s look at what attributes and variables are being used in this model.
Click the Insert menu and select Attributes & Variables.
Complexity – A descriptive attribute describing how difficult the call is
Tolerance – How long a customer is willing to wait on hold before hanging up (reneging)
Start_Hold_Time – The simulation clock time when the customer was first put on hold
In_Process – The number of calls pending a resolution
On_Hold – The number of callers currently on hold
Abandons – The number of callers who hung up before being helped
Hold_Time – The hold time of each call
Research_Cycle_Time – The total amount of cycle time a Research call was in the system
1. How many trunk lines are there?
2. What is the purpose of the route coming out of the bottom of the Return Call activity?
3. How many TSR1 operators are there?
4. How many TSR2 operators are there?
5. Is TSR1 helping TSR2, or is TSR2 helping TSR1?
6. Where do the Renege calls go?
Action Logic Window
The action logic window is large, providing sufficient screen space to read and review your logic. To open it, select the arrival route, then click the Action tab in the Properties dialog.
Double click the arrival route. Click the Action tab.
The number of in-process calls is incremented each time a new entity arrives. The call complexity is set using a user-defined distribution to one of three settings: easy, moderate, or difficult. The caller’s tolerance to stay on hold is set using a triangular distribution.
Double click the route between Take Call and Research Problem. Click the Action tab.
The Trunc_Line resource is freed since the caller has hung up and research will be starting in order to resolve the issue.
At any given location in your model, you can GET and/or FREE resources using connector lines or using action logic, but not both. For example, you may use connector lines at one activity and action logic at another activity. But you can’t use both methods at the same activity.
Double click the Research Problem activity. Click the Action tab.
All moderate calls will take a time determined by the Uniform distribution U(20,10). All other calls (difficult) will use a time assigned using the T(1.0,1.5,3.0) distribution. Remember the time on the General tab is 0 since the time is specified in the action logic.
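The two distributions can be sampled in Python to see what times they produce. Note the argument conventions are assumptions worth verifying against the ProcessModel User's Guide: U(mean, half-range) is read here as "uniform between mean − half-range and mean + half-range", and T(minimum, mode, maximum) as a triangular distribution. Python's own `random.triangular` takes (low, high, mode), so the argument order differs:

```python
import random

rng = random.Random(3)

def u(mean, half_range, rng):
    """U(20, 10) read as uniform between 10 and 30 minutes (assumed convention)."""
    return rng.uniform(mean - half_range, mean + half_range)

def t(minimum, mode, maximum, rng):
    """T(min, mode, max); Python's triangular takes (low, high, mode)."""
    return rng.triangular(minimum, maximum, mode)

moderate = [u(20, 10, rng) for _ in range(10000)]    # moderate-call times
difficult = [t(1.0, 1.5, 3.0, rng) for _ in range(10000)]  # difficult-call times

print(sum(moderate) / len(moderate))    # near 20
print(sum(difficult) / len(difficult))  # near (1.0 + 1.5 + 3.0) / 3
```

Sampling like this is a quick sanity check that a distribution's parameters mean what you think they mean before you bury them in action logic.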
7. With so many distributions in this model, how can you ensure that you have a good statistical representation of realistic results?
Continue to Lesson 8: Model Building Techniques
Lesson 8: Model Building Techniques
There is never only one right way to build a model. Most problems can be approached from many different angles and use different solutions. One of the benefits to simulation is that you can try several approaches to see which gives the best results before actually implementing the changes in your actual process. However, there are some basic steps you should take in order to have the greatest success with the least amount of rework and troubleshooting.
Some people try to build their entire model with all the detail they can right from the start. Even if you have a good understanding of how your process works, this approach can cause lots of headaches and rework. Even if your real process uses multiple entities, build your initial model with just one entity. It's much easier to "watch" your simulation run and see trouble spots when there is only one entity to track. Don't even add resources until you are confident that your flow is correct and entities are routing to the proper locations. Don't worry about times, capacities or quantities at first. Just get the flow working. Then start adding details.
Test as you go
You can save yourself a lot of grief by building a small section of your model, then testing it to make sure you are getting the expected results. If you build large sections, add functionality like resources and action logic, and then test when you’re finished building, you may find yourself with a troubleshooting nightmare. By testing small changes, you will always have a good idea where to look for problems.
Start adding times, capacities, arrival information, etc. If you need multiple entities in your model, start adding them before adding resources and action logic. Then add resources and action logic, testing as you go.
Remember, ProcessModel is an estimating tool that allows you to see trends and the impact of changes to a process. It isn’t intended to be a scheduling, order management, or MRP system. It’s important to make your model reflect reality as much as possible. But if your costs don’t match the real process penny for penny, that doesn’t mean the model is broken or can’t have significant impact on business decisions which can save millions of dollars.
Quick Model Building
To place a shape in the layout window simply left click on the shape you would like to use. You don’t need to drag the shape. Just move your mouse to the location in the layout window where you would like to place the shape and left click your mouse again.
Close any model you may have open. Click the green ball. Then click on the left side of your layout window to place the ball in your model.
By default, the green ball is an entity. But you may change it to another object type by clicking the Object type field drop down list.
If you would like to change the name of the object, just click it and start typing the new name. Whatever you type will replace the default name. When you’re done, either press the Esc key on your keyboard or click your mouse away from the graphic.
Notice there is a Name field on every Properties Dialog box. The text you type on the graphic will automatically be written to the Name field. The text displayed in the Name field will be the name displayed in the output reports. If you would like to have different text on the layout than what shows in the output report, you can edit the Name field in the Properties Dialog box. In general though, it is best to edit object names by clicking on the graphic itself and typing a new name. You may also edit the text on the graphic by right clicking on it and selecting Edit Text or by pressing the F2 key on your keyboard.
Every object name field (also including attributes and variables) must be unique and must start with an alpha character.
You may customize the appearance of the graphics in your model by right clicking on them and selecting from the available options such as Format, Font, and Text Layout.
You can place shapes on your layout and then manually connect them by drawing each route. However, it is much faster if you use the automatic connection method.
With the green ball already on your layout, click the rectangle process shape in the shape palette. Then hover your mouse over the green ball in your layout. Note how the cursor changes to indicate you will be connecting shapes. Left click your mouse and drag to the right of the green ball.
The system knows that since you were connecting to an entity, the route should be a double headed arrival rather than a single headed routing line.
Left click your mouse away from the Process graphic just placed so the mouse cursor changes back to a pencil pointing to a box. Hover your mouse over Process, click and drag to the right again to place another Process step, automatically connected to the first.
Note how the shape names are automatically given unique names by adding a number to the end of the default name.
You may also draw exit routes, or routing lines from one object to another, by simply hovering your mouse over the first object, then clicking and dragging to the desired location.
You may also place automatically connected resources in the same way.
Click the resource icon in the shape palette. Hover your mouse over the object where the resource is to work. Then click and drag away from the object.
You may also draw your own connector lines by hovering your mouse over the resource, then clicking and dragging to the desired location.
Continue to Case Study 1: Analysis
Case Study 1: Analysis
Finding the Problem
Although the model used in the case study wouldn’t be recommended for starting your own business, it is valuable for the purposes of demonstrating several things. We will look at activity and resource utilization, cycle time, and overall resource usage.
Install the model package called Case Study 1 – Analysis. Then double click the dashed resource connector between Gather Customer Information and the Tech 1 resource.
When you draw a resource connector, it defaults to a Get and Free connector. However, in this model, note that the connector between Tech 1 and the first activity is a Get connector. That means the resource is gotten at the first activity, but not released until later in the process. So he travels with the entity until a Free occurs after each oil change is complete.
Your 1st Assignment
The first thing we need to do is watch the simulation run to see what problems we can find before even looking at the output results. Then we will look at details of the output report in order to make a comparison of some simple changes we will be making to the model.
Simulate the model by clicking Simulation then select the Save and Simulate option.
The first thing you will notice is that the second oil change activity is never used. Why is that, when there are plenty of cars backing up waiting to be serviced? The reason is how the first resource (Tech 1) is being used. He takes the customer information for the first car, then goes with the car all the way through the process until the first oil change is complete. Then he is released and goes back to get the next customer's information. By then, of course, there are no cars in the oil change bay. So when the 2nd car comes in, it just goes to the first bay again because it's empty. That means completely unutilized equipment and work space.
The second problem you will see is that the average cycle time (shown in the scoreboard) is 57.90 minutes for a 12 minute oil change. That means unhappy customers.
The third problem is that even though our last car arrived no later than 7pm, it doesn’t exit the model until close to 8pm. That means employees have to work overtime.
Now let’s look at the output report.
When asked if you want to see the results after the simulation, click No. Then, in the ProcessModel main window, click the icon to launch the Output Report. Click on the Activities tab.
Note the Average Minutes Per Entry column. We see that customers waited (in the first input queue) to have their information taken for an average of 42.65 minutes.
Click on the Resources tab.
Our resource utilization is 45.52% for Tech 1, and 38.33% for Tech 2. So even though we have people waiting for an oil change, the resources are actually working less than half the day.
Click the Hot Spot Evaluator. With the Hot Spot Evaluator option selected, click the Chart tab.
This chart shows that approximately 80% of the cycle time for customers is spent waiting in the Gather Customer Information activity.
Your 2nd Assignment
That was a pretty dismal performance. Let’s see if we can make a simple change to the model to get some better results.
Close the output report and double click on the first resource connector again. Change the type to Get and Free. Then also change the two connectors going from Tech 1 to each of the Oil Change activities to Get and Free and simulate the model again.
As the simulation runs, note that both change bays are now being used, and the last car exits the process before 7:20pm.
View the output report and compare the results to the first simulation.
The average wait time in the first input queue has dropped from 42.65 minutes to just 8.29 minutes. The utilization of Tech 1 has also dropped.
Again, display the Hot Spot Evaluator’s Chart view.
The non-value added wait time in the first activity has dropped from 90% of the overall cycle time to less than 30%.
Wait time typically increases processing costs, increases production inventories, and lowers customer satisfaction. So reducing that non-value added time is typically a significant concern in process improvement projects. Low resource utilization in and of itself isn’t necessarily a good thing. But what this result is telling us is that the process is much more efficient than the first one was.
Continue to Case Study 2: Replications
Case Study 2: Replications
If a picture is worth a thousand words, how much is a dynamic, moving picture worth? Let’s find out. Rather than just talk about the theory behind the statistical reasons for replications, let’s watch some results in real time.
Install the model package called Case Study 2 – Replications.
This model uses a non-repeating periodic arrival with a quantity of 100 entities, a 102-minute run length, a 50% route leading to Process2, a 30% route leading to Process3, and a 20% route leading to Process4. Just for visual effect, the Item entity is renamed on the percentage routes.
This is a very simple model but it has a couple of new tools to make our “proof” a little easier to visualize.
We could just view the output report and do a simple calculation to determine if the number of entities entering each of the end processes matches the percentage on the routes leading to those activities. But with just a little extra work, the system can do all the work for us using variables and labels.
Double click the exit route leaving Process2. Click the Action tab to view the Action Logic window.
These action logic statements increment the variable v_TotalCount by a value of 1 each time an entity uses this route, counting the total number of entities leaving the model. The variable v_Count2 is also incremented, increasing the count of entities going through Process2. A calculation is then done for each of the 3 entity types, using the latest total count variable, in order to accurately calculate the percentages of each entity type. The same action logic is repeated on each exit route, replacing Inc v_Count2 with the appropriate variable name.
Double click the red label titled Item2 Percentage.
The function of a label is to display the value of a variable on the screen during simulation. In this case, the variable selected from the Name drop down list is v_Pct2, which was calculated on the 50% route and shows the actual percentage of total entities entering the Process2 activity.
As the model simulates, you will see that the percentage of entities crossing each percentage route varies, because the system uses random functions to determine which of the percentage routes to select. For example, when the first entity arrives, it must choose which of the three routes to take. Regardless of which one it takes, the percentage breakdown will not be correct because one entity can't be divided 50/30/20. The same is true of the second entity; with only two entities, a 50/30/20 breakdown is still impossible. A large enough sampling of entities must enter the model for those percentages to start to emerge. The larger the sample of entities, the greater the chance for the breakdown to match the 50/30/20 percentages. But even with a large sample, randomness is still in effect, so the numbers will not be exact.
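The sample-size effect is easy to demonstrate outside ProcessModel. The following Python sketch (a conceptual stand-in, not ProcessModel syntax) routes entities 50/30/20 at random and compares the observed shares for a tiny sample versus a large one:

```python
import random

def observed_split(n_entities, rng):
    """Route n entities 50/30/20 at random and report the observed shares."""
    counts = {2: 0, 3: 0, 4: 0}          # Process2, Process3, Process4
    for _ in range(n_entities):
        r = rng.random()
        if r < 0.50:
            counts[2] += 1               # 50% route to Process2
        elif r < 0.80:
            counts[3] += 1               # 30% route to Process3
        else:
            counts[4] += 1               # 20% route to Process4
    return [round(c / n_entities, 3) for c in counts.values()]

rng = random.Random(11)
print(observed_split(10, rng))      # a small sample can be far from 50/30/20
print(observed_split(100000, rng))  # a large sample lands much closer
```

Running it shows the same lesson as the model: small samples wander, large samples converge toward the specified percentages.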
How can we minimize the effect of variation in the percentages? Replications. As you complete the simulation of this model you see the percentages are 52%, 25%, and 23%. That’s close to the expected results. But we should be able to do better.
Click the Simulation menu and select Options. Then change the Run Length to 1 day and Replications value to 30. Uncheck the Show Animation option and click Close.
When you run multiple replications, it can take quite a bit longer to run the simulation. Since we are just wanting to get the results and don’t need to watch the animation, let’s speed things up a bit.
Simulate the model, click No at the end of the simulation. Click the icon in the ProcessModel toolbar to view the output report. Click General Report to view the Averaged results.
Go to the Variables tab to view the average variable values for v_Pct2, v_Pct3, and v_Pct4.
Note the percentages are now closer to the expected values, but still not quite there. Let’s get it closer.
Close the report. We will need a larger sample of entities. Increase the number of arrivals to 1000. Click Simulation and select Options. Then increase the Run length value to 1020 minutes and run the simulation again.
You will see the following in the Variables section of the output report.
Close the report. Click Simulation and select Options. Increase the arrival quantity to 10000. Then increase the Run length value to 10200 minutes and the Replications to 50. Run the simulation again and view the Variables section.
As you can see, the more replications and the larger the entity sample size, the closer the results get to the expected values. This is the statistical nature of variation you build into your models whether through percentage routes, distributions, arrival rates, etc.
The reason replications yield different results from each other is because of the starting random seed value. Each time you start a model with a single replication, the random functions start at the same point (starting seed). When you use replications, the starting seed for the next replication begins where the previous seed value left off. Therefore all subsequent random calculations have a new sequence.
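The effect of per-replication seeds can be sketched in Python. This is a conceptual analogy, not ProcessModel's actual seed mechanism: each replication below gets its own seed, so each run gives a different answer, yet the average across replications settles near the expected value:

```python
import random

def one_replication(seed, n=1000):
    """One run: the observed share of entities taking the 50% route."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.50 for _ in range(n)) / n

# Each replication uses a different seed, so each gives a different result...
results = [one_replication(seed) for seed in range(30)]

# ...but averaging across replications pulls toward the expected 0.50.
average = sum(results) / len(results)
print(min(results), max(results), round(average, 3))
```

This is the whole point of replications: no single run is trustworthy on its own, but the spread across runs shows the variability, and the average is a far better estimate than any one run.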
Continue to Case Study 3: Froedtert Hospital Improves ICU Care
Case Study 3: Froedtert Hospital Improves ICU Care
Turning away Emergency Patients?
Froedtert Hospital, located in Milwaukee, Wisconsin, is the only ACS verified level 1 trauma center in southeastern Wisconsin. The hospital has almost 50,000 visits to the Emergency Department per year. Even though they have a capacity of nearly 450 beds, a staggering 12% of the time they were forced to reject patients requiring emergency care. Of course, this presented a critical problem for patients. But it also represented a significant loss of revenue as well as a black eye on the hospital’s reputation.
What the Model Told Them
A hospital staff member was given the assignment to analyze hospital processes in order to reduce the number of ambulance diversions away from Froedtert. A project was created which included the ICU, surgical capability and scheduling from both emergency arrivals (ambulances) and from elective surgeries. Using ProcessModel, it was determined that doctors had complete control over elective surgery scheduling without the insight of how the rest of the hospital was being affected by their decisions. It also showed that current patterns for elective surgery would stack up randomly on certain days, causing the ICU to fill in order to meet demand. On other days the ICU would be unaffected by elective surgeries. Because of the random loading of the ICU, they were sometimes unable to handle the demands of emergency patients needing ICU services. As a result, ambulances were turned away.
Through simulation, the hospital administrators and doctors were able to see the cause of the diversions. Experiments were conducted to identify an acceptable level of elective surgeries per day while still accommodating the needs of emergency care. Rather than relying on best-guess solutions, ProcessModel enabled Froedtert to use quantitative methods to discover the problem and suggest solutions. They arrived at a definitive number of elective surgeries that would still allow them to meet the emergency requirements of the ICU. Perhaps the most remarkable thing learned through modeling their processes was that, by altering their scheduling, they could still perform the same number of elective surgeries while handling all the emergency cases needing help.
Benefits of Simulation
Doctors and administrators have implemented the solutions, resulting in millions of dollars of increased revenue for the hospital. The most important benefit, though, is that the hospital can now provide the level of health care its patients need, which in turn enhances public relations and the hospital's reputation. These successes have led to projects in other areas, such as reducing patient length of stay, optimizing staffing levels, reducing clinic patient wait times, and optimizing lab procedures.
Though building a comprehensive model of the complete process was required to gain a full understanding of the end-to-end impact of process factors, the scheduling fix itself was a relatively simple procedure: load leveling. Let's look at the model representing the "before" arrival conditions.
Install the model package called Case Study 3 – Froedtert Before. Double click the red arrival route. Then click the Define Schedule button.
This scheduled arrival covers a period of 4 months. The action logic, which defines attributes of each patient, can be ignored since those attributes are not used in this case study.
Close the scheduled arrival and double click on the Surgical Intensive Care activity. Then select the Action tab.
The purpose of this action logic is to assign the variable v_SIC_Count the number of patients in the activity. A Time Series Plot of this variable will be shown when the output report is displayed.
The animation is turned off in order to display the output as quickly as possible. Begin the simulation by clicking the Simulation menu and selecting Save and Simulate. Then click Yes to see the results.
As shown above, the contents of the activity are at the maximum of 21 more than 25% of the time. That means that "potentially" 25% of the time emergency patients would have to be turned away. Fortunately, there were not always emergencies during those peak times. As noted earlier, the actual rejection rate was 12%, which is still an alarming number.
Of course, you can’t control the number of emergency cases. But you can control the number and timing of elective surgeries. The following chart shows the number of elective surgeries scheduled on Mondays over a 4 month period.
The load leveling goal was to never have more than 4 scheduled procedures per day. The next chart shows the result of load leveling. On days where more than 4 surgeries were scheduled, the extra procedures were simply moved to other days.
The result of the load leveling was that they were able to maintain the same number of surgeries (elective and emergency) while reducing the number of turned away emergency cases.
Let’s see the results of the new schedule of arrivals found in the next model package.
Install the package file Case Study 3 – Froedtert After and simulate the model. Then view the results.
Now the activity is at its maximum less than 13% of the time, dramatically reducing the risk of having to turn emergency patients away.
Continue to Case Study 4: Nuclear Site Cleanup
Case Study 4: Nuclear Site Cleanup
Not Enough Throughput
Washington Closure Hanford (WCH, LLC) won a contract with the Federal government to clean up low-risk toxic waste at a retired nuclear site in the United States. Efficient use of resources and optimization of the overall process were critical because of the stringent federal regulations and the extreme fines imposed for delays in meeting defined objectives. Those fines could range anywhere from $250K to $1M per day.
One aspect of the cleanup project involved collecting contaminated soil and moving it to a storage site approximately 40 miles away. The project was expected to cost $10B over a 10 year period. Because of process inefficiencies the company was at risk of losing the project if significant changes weren’t made.
The procedure seemed fairly simple. A track hoe would fill containers at the cleanup site with the contaminated soil. The containers would then be moved to a dump site for further processing. The logistics of moving the containers was where the problems arose. There were several stages in which containers were loaded and unloaded from trucks at multiple locations. An empty container would be loaded onto a short haul truck at the cleanup site, filled with dirt, and then moved to a transfer station.
From there the containers were loaded onto long haul trucks for the 40 mile trip to the dump site. There another transfer would take place in which full containers were exchanged for empty ones. The long haul trucks then returned to the collection site.
The full containers at the dump site were loaded on short haul trucks where they would be emptied. The empty containers would be reloaded on the long haul trucks as mentioned above and returned to the cleanup sites for the next cycle through the process.
In order to keep the contract and avoid very costly fines, the company needed to maximize the daily tonnage of soil removed from each of several collection sites. It also needed to find and eliminate as many delays in the system as possible.
Several questions needed to be answered before the goal could be met. However, in order to simplify the issues for the purposes of this case study, we won’t get into the political issues, union concerns, etc. We will just focus on the logistical concerns.
• How many trucks should be used at each location?
• How many containers should be used?
• Where are the bottlenecks in the operation?
• How can the most soil be transported in the least amount of time in order to reduce or eliminate fines and the risk of losing the contract?
• Is the current method the best approach for this job or does there need to be a dramatic change to increase throughput?
Several cleanup sites were involved. However, for simplicity we have limited the model to use a single cleanup site, long haul transfer, and the dump site.
Normally, entities are processed and the system collects statistics on them as they exit. In this case, however, trucks and containers are continuously reused and never exit the model, so the approach taken to solve this problem is slightly different from what would normally be used. Container entities are "attached" to truck entities to represent the loading step, then "detached" to represent the unloading step. Since nothing actually leaves the model, normal statistics are not collected. User-defined variables are used to gather each of the critical statistics needed to measure model performance. The routes shown in red contain action logic where variables collect the needed statistics.
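The attach/detach pattern with user-defined counters can be sketched in Python. This is not ProcessModel syntax; all names here (the truck dict, the container pool, the v_-prefixed counters) are hypothetical stand-ins for the model's entities and variables:

```python
from collections import deque

# A small pool of reusable container entities; nothing ever exits.
containers = deque(f"container-{i}" for i in range(3))

# User-defined variables stand in for the normal exit statistics.
stats = {"v_Loads_Completed": 0, "v_Tons_Moved": 0.0}

def load(truck):
    """ATTACH step: a container joins a truck."""
    truck["carrying"] = containers.popleft()

def unload(truck, tons):
    """DETACH step: the container returns to the pool;
    route action logic increments the statistics variables."""
    containers.append(truck.pop("carrying"))
    stats["v_Loads_Completed"] += 1
    stats["v_Tons_Moved"] += tons

truck = {"name": "short-haul-1"}
for _ in range(5):          # five reuse cycles through the process
    load(truck)
    unload(truck, tons=20.0)

print(stats)  # counters capture throughput even though no entity exits
```

The key point is that because entities recirculate, throughput must be measured by incrementing variables at route points rather than by the system's exit statistics.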
Your 1st Assignment
Install the Case Study 4 – Nuclear Site Cleanup model package and review the model.
Note the variables used, and where they are located (on the red routes). Scenario Parameters are also used in the model. These parameters control the number of trucks and containers which arrive initially. By using scenario parameters, the values you will be changing as you work with this model will all be centrally located for ease of access.
Click the Simulation menu, then select Define Scenarios and click the Scenario Parameters tab.
You will need to determine how the number of trucks and containers used can be altered in order to improve performance and meet the goal specified above.
Your 2nd Assignment
As you watch the model simulate, observe where the delays are occurring in the system. Is there a way to eliminate those delays? Using the original model with its activities, times, capacities, etc., design your own model to alter the overall process in a way that reduces the wait times experienced. Remember you want to get as much throughput (soil moved) as possible, eliminate wasted time, and reduce costs as much as possible.
Continue to Case Study 5: Cheesecake Factory Customer Seating Optimization
Case Study 5: Cheesecake Factory Customer Seating Optimization
Known Problem, Unknown Solution
A popular and successful restaurant chain called the Cheesecake Factory suffered from a well-known, long-standing problem (literally). We've all seen and experienced it: long lines of customers waiting to be seated in a popular restaurant, even though open seating is visible. This is an industry-wide problem experienced by many restaurants. However, the solution isn't quite as easy as you might think, unless you like to share your table with total strangers.
Focusing in on the Real Issue
They had a suspicion that the cause of the problem might be one, or a combination, of 3 things.
1. Availability of the food (preparation time)
2. People staying too long after their meal was done
3. Inefficient serving staff
The first thing to do was to sit in the restaurant during peak hours and observe what was happening. They found that in spite of a long line of waiting customers, 50-60% of the seats were empty. This was because many small parties (2 or 3 people) were sitting at tables intended for 4 or more customers.
Table Mix Optimization
They had previously tried adding additional serving staff, but the lines remained long and the seats unused. They also noticed that when customers were at larger tables with many empty seats around them, they tended to stay longer, since they perceived no rush to leave. Something else needed to be tried. But what?
The consultant saw what most everyone else saw, that the mix of table sizes was not effectively accommodating the waiting customers. But with ProcessModel software, he had the tools needed to find the optimum table mix.
The software was first used to create a model of their current situation. By analyzing the data the company had collected about customer arrivals and party sizes, the consultant built a model which yielded accurate results as reflected in their collected data. The next task was to build a model that would allow the system to find the optimal mix of tables to handle their customers more efficiently. Some rules had to be established.
1. Parties of 2 would be seated at a 2 top table whenever possible. However, they could also be seated at a 4 top table, but never at larger tables.
2. There would be no table combining during peak hours because it actually caused delays which lowered customer satisfaction.
3. Based on the results of the model, the table layout in the restaurant would be changed as needed during non-busy time to accommodate for upcoming busy hours.
4. The model would need to be changed from time to time to meet the requirements of new locations or newly gathered data analysis of customer patterns. Therefore, the learning curve would need to be shortened and change procedures simplified as much as possible.
It was determined that during peak hours, there was basically a continuous flow of customers so the Continuous Arrival was used.
The only change needed in the model to optimize for different locations or using different collected data is the breakdown of group sizes. This is the attribute a_Group_Size and is assigned its value in the “Line for Cashier” storage.
In the action logic shown above, the group size is assigned using embedded User-Defined distributions. 63.23% of the time, the group size is either 1 or 2 people. Within that 63.23%, the party size is 1 some 14.74% of the time and 2 the remaining 85.26% of the time. Other group sizes are assigned in a similar fashion.
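To see what an embedded distribution implies, it helps to multiply out the nested percentages. A quick Python check (the variable names are illustrative, not ProcessModel syntax):

```python
# Nested (embedded) distribution from the case study:
# P(group is 1 or 2) = 63.23%; within that,
# P(size 1) = 14.74% and P(size 2) = 85.26%.
p_small = 0.6323

p_size_1 = p_small * 0.1474    # overall chance a party is a single diner
p_size_2 = p_small * 0.8526    # overall chance a party is a pair

print(round(p_size_1, 4))   # 0.0932 -> about 9.3% of all parties
print(round(p_size_2, 4))   # 0.5391 -> about 54% of all parties
```

So although "1 or 2 people" covers 63.23% of arrivals, singles are only about 9% of all parties, which matters when deciding how many small tables to provision.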
Then, based on the group size, a time is assigned for how long they will stay, and the ticket size (cost of the meal). The computer resource is simply used to ensure that more parties aren’t allowed into the eating area than there are total tables.
Based on the group size, the party is routed to the appropriate table using conditional routes.
Additional action logic is used to check the utilization of each table size. So if a group of 2 comes in but all the 2 top tables are in use, the group can be sent to a 4 top table. However, only one size upgrade is allowed per party.
Using ProcessModel’s optimization module called SimRunner, the consultant began by indicating his goal was to maximize the number of customers served, while minimizing the total number of tables used. Then, using scenario parameters he had created in the model, he specified a range of possible values for each table size. The system then ran as many experiments (or simulations) as needed to find the optimum number of tables of each size to reach the desired goal.
It is important to note that the optimized table mixes were created specifically for use during peak hours. During times when the restaurant is not at maximum capacity, it is less important to use an optimized table mix as there is no shortage of available seating, regardless of each party’s size.
Your 1st Assignment
David Overton, CEO and Chairman of the Cheesecake Factory, is skeptical of any changes to his successful chain of restaurants. He has been made aware of your work in simulation and has now requested a presentation from your group. He wants to see proof that you can maximize the number of customers served and increase the revenue for a restaurant. You are only responsible for showing how you can change business during peak hours (blocks of 3 hours). He wants the bottom line:
• How much can we increase our revenue during peak hours?
• What is the maximum number of customers we can serve during a peak 3 hour time frame?
• How do you intend to achieve this goal?
Install the package file Case Study 5 – CheeseCake Factory.
The model is already complete so you won’t need to add any new logic or change the structure of the model. Your first task is to manually create two scenarios which you think will be helpful in reaching the “bottom line” Mr. Overton wants to see.
To create your scenarios, click the Simulation menu and select the Define Scenarios option. Click the New button and name the scenario.
Note the parameter called s_Total_Seats. This is the scenario parameter which sets the maximum number of seats allowed in the restaurant according to fire code. The model is built not to allow exceeding that seating capacity during optimization, so don't change that parameter value. Instead, change the parameter values for the quantity of each table size. Your objective is to maximize the throughput of the restaurant without exceeding the seating capacity, serving as many customers and making as much money as possible.
Create a 2nd scenario in the same way and choose different parameter values. Close the scenario window. To run your scenarios, click Simulation and select the Run Scenarios option. Compare the results of each scenario to determine how well they performed, which one was best, and how close you came to making the boss happy.
Your 2nd Assignment
Now that you've manually created your scenarios and compared the results, let's see how ProcessModel can make your life a whole lot easier. Use SimRunner to automatically find the optimal table mix for you. You will need the help of your instructor to understand and properly set up SimRunner, so wait for your next class in order to receive that help. Following that instruction, sit back and watch while ProcessModel does the analysis for you.
Your students will likely need help in setting up SimRunner. Use the following setup details:
1. SimRunner uses the simulation data file created when a simulation is started. Therefore, before starting SimRunner you must start your simulation. It can be a normal run, or you may run scenarios. It isn't necessary to complete the simulation, however; just start it, then end it immediately. Click Tools / SimRunner. The simulation data file will be loaded automatically.
2. An optimization project file called Case Study 5 – CheeseCake Factory.opt is included with the model package. It contains all the parameters needed to run an optimization. To load the project file, click File / Open Project, browse to the folder where the model package was installed, and select the project file. Some explanation will be needed to help the students understand how it all works.
3. The Setup Project button at the top left of the window allows you to create the objective of running the optimization and define the inputs you’d like the system to change in order to meet that objective.
4. Click Define objectives on the left. On the right you will see two variables with the desired outcome for each. The v_Opt_Error variable is assigned a value using the model's Capacity Trigger and is used to monitor the total number of seats assigned by the table mix scenario parameters. If the number exceeds the maximum allowed, the model assigns a value of -100; if the total number of seats is within the limit, a value of 100 is assigned. Note that SimRunner is set up to maximize that variable with a weight factor of 100. So if the simulation runs within the limit, the objective function is incremented by 10000; if it is above the limit, -10000 is added to the objective function. There is also a second variable being maximized, which represents the total number of customers. Since each party size is given a new entity name in the model, this variable is used to calculate the total across all party sizes. The goal is to maximize that variable.
5. Click Define inputs. This section displays each of the scenario parameters that were created in the model. These are the values which SimRunner will alter in order to find the optimal configuration. You will see that s_Total_Seats is the only scenario parameter not selected for modification, since that parameter is fixed by the fire code for the building. We want the system to alter all other scenario parameters to find the best table mix to meet our defined objectives.
6. It is important to note that the larger the spread of possible values for each parameter when defining your inputs, the more possible outcomes there will be. Therefore, the optimization will take longer to run. For that reason, a narrowed set of values has been pre-selected. But if you’d like to change the ranges, feel free to experiment.
7. For simplicity, the model is setup with a run length of 24 hours. That means it will run from midnight Monday morning to midnight Tuesday morning. Arrival information is built in to run longer than that. But this exercise is only intended to optimize one day’s worth of data.
8. Click the Optimize Model button at the upper right of the window. Then click the Seek optimum button on the left. Click Run to begin the optimization.
9. The Performance Plot window will graphically display the results of each experiment (complete simulation run) with the green line tracking the objective function of each experiment, and the red line at the top showing the best performing experiment.
10. The spreadsheet area displays the details of each experiment. The chart is sorted by the best resultant objective function.
11. When the optimization is complete, review each scenario parameter’s value in the spreadsheet. The best performing experiment will be at the top of the chart.
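The weighted objective described in step 4 can be sketched in Python to show why infeasible table mixes are driven out of the search. All names and the exact arithmetic here are a simplified illustration of the scheme, not SimRunner's internal code:

```python
def objective(total_seats, seat_limit, customers_served):
    """Sketch of the penalty-weighted objective from step 4."""
    # Capacity trigger in the model: +100 if the table mix fits the
    # fire-code seat limit, -100 if it exceeds it.
    v_opt_error = 100 if total_seats <= seat_limit else -100

    # SimRunner maximizes v_Opt_Error with a weight of 100 (so +/-10000),
    # plus the customers-served variable with its default weight of 1.
    return 100 * v_opt_error + customers_served

print(objective(180, 200, 450))   # feasible mix:   10000 + 450 = 10450
print(objective(220, 200, 500))   # over the limit: -10000 + 500 = -9500
```

Because the +/-10000 penalty dwarfs any realistic customer count, an experiment that violates the seat limit always scores worse than any feasible one, so the optimizer converges only on table mixes within fire code.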
Continue to Appendix: Answers to Lesson Questions
Appendix: Answers to Lesson Questions
1. 150 calls.
2. 2 minutes.
3. 50 calls.
4. 23 minutes.
5. Cycle time dropped from over 11 hours to under 40 minutes.
6. Level2 utilization dropped to 51.81%.
7. That is bad because they are only busy half of the time they are available to work.
8. Perform Research.
9. The queue is no longer used.
10. 3.71
11. 78.81 minutes
12. Level 1 = 53.38%, Level 2 = 89.70%.
1. 92.36 and 4.32.
2. 316.11 and 244.72.
3. The 25% and 75% routings.
3. Entities are only coming in between 8am and 4pm, but the resource is working the entire 40 hours.
4. The arrivals stopped occurring.
6. Click Simulation / Options and set the run length to 168 hours or 7 days.
7. Cycle time increases for both types of calls because they are waiting in queues much longer. All calls are completed by approximately 3pm.
2. Total cost, cost per call, unused resource cost, total number of calls returned, customer satisfaction.
3. Despite the higher costs, it is likely a better choice to add a second resource than to stay with one. Throughput is much better, customers will be happier with shorter wait times, and your employees will probably have higher job satisfaction since stress levels will be significantly lower. The cost of customer dissatisfaction is hard to measure, but in general you should strive for the best service possible. Employee burnout is likely to be much higher with only one Level 2 resource, and that can end up costing a great deal in many different ways.
4. Yes, as noted in earlier lessons. It is important to remember though that the simulations need to be run and the results compared before assuming one solution is better than another. You might be surprised at the results of some of the experiments.
5. If Name = Item Then a_Route = D2(40,2,60,3).
6. Create a variable (e.g. v_ReworkCount) with an initial value of 0. On the rework route (returning to Production), add the action logic "Inc v_ReworkCount". In the output report, scroll to the bottom section to view the variable's stats.
3. You can extract your files with WinZip. However, you lose the remapping functionality if you extract them. If your model only contains .igx and .pmd files, your model will still run. But if you use shift files or submodel files, the model will not run without manually remapping the folder locations. You should always “install” the package from ProcessModel’s File menu.
2. Some returned calls are not resolved satisfactorily. So they have to do more research and return the call later.
5. TSR2 is helping TSR1.
6. Since the route is not connected to any other activity, they exit the system.
7. Run multiple replications, then view the average values in the output report rather than each individual replication’s result.