Construction projects grow larger and more complex day by day. Effective risk management of such projects is therefore essential for the mutual benefit of the client and the organisations involved. The time factor is crucial, as it defines not only schedules and pressing deadlines, but also the extra costs and potential liquidated damages that may arise from time slippage. As a result, the inherent uncertainty of construction projects may hinder on-time completion and therefore effective delivery to the client.
To capture the inherent uncertainty in project networks with respect to time, several risk-measuring scheduling tools have been developed over the years. From the late 1950s onwards, extensive research was conducted into the schedule performance of projects. This research produced the Critical Path Method (CPM), a simple scheduling tool for calculating the overall duration of a project network. Another such tool is the Programme Evaluation and Review Technique (PERT), which, put simply, is a probabilistic scheduling tool for analysing project networks. PERT relies on an approximation of the Beta Distribution to account for the variation in activity time estimates, and thus produces a more reliable estimate than CPM of the probability of completing the project on time.
Since its appearance, however, PERT has attracted considerable criticism from various researchers (Mohan et al. ; Hahn ), mainly on the grounds that the method produces optimistic results. More specifically, they argued that the Beta Distribution is not the most suitable one, as it does not assign enough skewness to the activity duration estimates. The method is also approximate, in that it does not account for near-critical paths, which may become critical should something go wrong. Nevertheless, PERT is still widely used in construction projects because of its simplicity and computational ease.
In the early 1960s, Monte Carlo simulation (MCS) was developed. MCS is a computer-aided simulation technique with applications across different industries. It can also be used in quantitative risk analysis with respect to time, as it can generate the overall probability of completing a project on time after simulating the potential outcomes of the probabilistic model numerous times (Van Slyke ). MCS is undoubtedly more complex and computationally slower than PERT (Jun and El-Rayes ). However, it produces more accurate results, as it samples the full range of possible scenarios, accounts for near-critical paths and formulates a probabilistic distribution of the overall project duration.
Both of the scheduling tools mentioned above require the definition of activity time estimates. According to the Project Management Institute's (PMI)  Project Management Body of Knowledge (PMBOK) , a Work Breakdown Structure (WBS) is an “organisational tool that illustrates the entire scope of the project, broken down into manageable activities”. Such activities can be estimated with regard to cost and time. The activity time estimates are therefore computed at the lowest possible level and, when aggregated, formulate the entire project schedule. However, what happens when these estimates are erroneous or only roughly approximate? Similarly, what happens if the probabilistic distributions in PERT differ from the standard Beta?
The first aim of this project falls into the domain of quantitative project risk analysis with respect to the time factor. The criticality of defining accurate activity time estimates will be highlighted through the literature review. Further, the aspect of completing projects on time will be investigated through the different methodologies found in the literature of quantitative project risk analysis. The results of the analysis will be compared against the widely accepted scheduling tools (PERT and MCS).
The second aim is to implement such a risk analysis on two residential construction projects drawn from the author's work experience. The project networks will be formulated in MS Project, along with the activity duration estimates essential for conducting the PERT analysis. Then, @Risk will be employed to perform the MCSs, using various probabilistic distributions and a specific methodology drawn from the author's literature review on the topic of quantitative project risk analysis.
Taking into consideration the aims of the study, the objectives of this project are as follows:
- To identify the importance of risk-measuring scheduling tools in construction projects.
- To investigate any proposed methodologies in the literature of construction project risk management, alternative or complementary to PERT and MCS.
- To perform a quantitative risk analysis on two real-world construction projects to validate the alternative use of probabilistic distributions.
- To apply the specified technique to two construction projects, to compare the resulting outcomes with those of PERT and MCS.
- To draw conclusions as to whether the findings of the analysis indicate that the employed methodology could be deemed more efficient and useful than the widely accepted risk-measuring tools.
In Chapter 2, the insights of the literature review are presented from the author's perspective. First, an introduction to the field of project risk management is set out, followed by research narrowed down to the fields of qualitative and quantitative risk analysis. Moreover, an extensive review of the PERT method is conducted, along with the criticism it has received over the years.
In Chapter 3, the methodology adopted in the analytical part of the project is explained. The PERT method and the @Risk software are described for completeness. The main part of the methodology chapter focuses on the various types of probabilistic distributions employed in the analysis, along with the Fast and Accurate Risk Evaluation (FARE) Technique by Jun and El-Rayes . The latter is selected as an alternative to MCS. All these theoretical aspects are applied to two construction projects, one with 50 activities and a larger one with 110, which are outlined in detail in Section 4.2.
In Chapter 4, the two construction projects employed in the analysis are outlined. Further, the analysis is illustrated using numerous figures and tables drawn from @Risk's outputs. The results are then discussed, pointing out the key findings of the quantitative risk analysis. At that point, any validation of the employed methodology is indicated by comparing it with the already established scheduling tools.
Finally, in Chapter 5, the conclusions drawn throughout the project, and especially from the discussion in Chapter 4, are presented.
According to Ward and Chapman , all up-to-date risk management processes feature a limited focus regarding uncertainty management. More specifically, they pointed out that the term “risk” has become directly associated with threatening events, and that its opportunistic side is therefore neglected. Such a notion, however, restrains the boundaries of efficient project risk analysis. As a result, the writers proposed a shift towards holistic uncertainty management, which would embrace already established risk modules and techniques. Ward and Chapman presented the views of the two leading institutions in the field, the Project Management Institute (PMI)  and the Association for Project Management (APM) , regarding risk management processes, and stressed the need to transform these processes into uncertainty management. As such, managers would be able to capture the opportunities arising from uncertain events more accurately.
Consequently, the authors structured the scope of uncertainty into five categories, spanning from the variability and basis of time estimates to the inherent uncertainty in the conceptual definition, design and logistics phases, and the lack of certainty among the project stakeholders. They considered uncertainty identification an essential component of managing any particular source of uncertain events. The writers also stressed that these five categories of uncertainty have to be tackled individually, so as to produce a robust plan for managing such events. They finally proposed that the shift of emphasis from one-dimensional risk management to a multi-dimensional uncertainty-handling process will only occur if the root cause of uncertain events and the variability of estimates are captured through effective quantitative techniques.
Carr and Tah  introduced a computer-aided methodology designed for qualitative risk assessment. The proposed method was built on cause and effect diagrams, which capture the relationships between risk factors and risks, along with their impact on construction projects. The assessment and analysis of identified risks were conducted using fuzzy set theory, so that their impact could be captured in a mathematical sense. As the first step of the proposed methodology, the authors used the Hierarchical Risk Breakdown Structure (HRBS), which is based on cause and effect diagrams, to capture the interdependencies between the various risk factors and the corresponding risks. They showed an HRBS cause and effect diagram as an example; this diagram breaks down the risks associated with a particular activity into the risk factors and their consequences (risks).
The authors then set out their fuzzy risk analysis model, which handles three qualitative risk measures: the occurrence, severity and corresponding effect of the risk factor. These three measures take qualitative linguistic values, from low to high, and affect four project performance factors: time, cost, quality and safety. The authors developed their fuzzy-set methodology for a single risk factor and then showed how to apply it to all the project activities. Lastly, they advocated that the entire qualitative risk process should consist of the five following phases:
- Identification phase, which is the most important one, as an unidentified risk cannot be analysed;
- Assessment phase, in which the risk assessment is conducted and the values of the qualitative risk measures are defined;
- Analysis phase, in which the fuzzy set theory methodology takes place;
- Handling phase including the risk response strategies;
- Monitoring phase, in which the practitioner monitors the status of any risk and their relevant changes.
Finally, the authors highlighted the importance of such methodologies for the construction industry, by underlining the aim of the proposed method, which is to facilitate the entire risk analysis process.
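Purely as an illustration of the fuzzy-set machinery such methods rely on (and not Carr and Tah's actual model), a crisp risk-factor score can be mapped to linguistic grades through triangular membership functions; the grade boundaries below are hypothetical:

```python
def triangular_membership(x, left, peak, right):
    """Degree (0..1) to which x belongs to a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Hypothetical linguistic grades for an occurrence score in [0, 10].
GRADES = {
    "low": (0, 2, 4),
    "medium": (3, 5, 7),
    "high": (6, 8, 10),
}

def fuzzify(score):
    """Map a crisp score to a membership degree for each linguistic grade."""
    return {name: triangular_membership(score, *abc) for name, abc in GRADES.items()}

# A score of 3.5 belongs partly to "low" and partly to "medium".
print(fuzzify(3.5))  # {'low': 0.25, 'medium': 0.25, 'high': 0.0}
```

The overlapping grades are what let a single score carry partial membership in two linguistic values, which is the starting point for the fuzzy aggregation the authors describe.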
De Marco and Thaheem  presented a detailed review of the current tools and techniques used in Project Risk Management (PRM). According to the authors, there are four different categories of technique in Project Risk Analysis (PRA):
- Qualitative, which includes several tools incorporating risks’ description and determining their impact on a 1 to 5 scale;
- Semi-quantitative, in which a more detailed scaling of risk effect takes place;
- Quantitative methods, in which the occurrence probability and risk impact are quantified into monetary terms;
- Simulation techniques, conducted in complex, large-scale construction projects with deep uncertainty.
The concept of the triple constraint, set out in , was expanded with the introduction of four project drivers, which were represented on a radar diagram. These drivers consist of:
- The level of challenges arising from the project execution;
- The project management responsibility, which focuses mainly on the aspect of scope;
- The focus, i.e. the complexity and attention required throughout the project life-cycle;
- The maturity of the organisation undertaking construction projects.
The authors then presented their methodology for selecting the appropriate tools and techniques for effective PRA, after scoring each project driver on a 1–4 scale. This methodology was applied to two construction projects, yielding two different radar charts. Relying on this graphical technique, the authors, together with each project manager, were able to select the most appropriate PRA method. Both project managers found the developed methodology very useful and efficient for selecting the most suitable risk analysis techniques.
According to González et al. , the relationship between causes of deviation from the schedule baseline and the subsequent impact on time performance should be deemed as critical. The authors suggested that delay analysis in construction projects should be performed at both activity and project level, to obtain a more robust view of the overall time performance and schedule baseline slippage. They proposed a mixed qualitative and quantitative methodology to capture the relationship of causes of delays and project time performance. Reasons for Non-compliance (RNC) were introduced by the authors as a qualitative measure that embodies the causes of delay of critical and non-critical activities.
A flowchart was used to illustrate the algorithm of the proposed methodology of quantitative delay analysis, which was developed to measure time performance efficiently on a weekly basis. The first step of the algorithm is to select the critical and non-critical activities to be analysed. In the next two steps, the actual (real) and as-planned cumulative percentage weekly progress is defined. If the actual value matches the as-planned one, then there is no need to engage in further analysis.
On the other hand, if there is activity time slippage, the quantitative measure called delay index is calculated, to capture the actual deviation from schedule. Finally, the weighted averages at the level of critical activities and project level (global) are computed, so that the relationship between the critical and global RNCs is taken into consideration.
The authors then tested the proposed methodology on two large-scale residential projects in Chile. The results of the case studies showed that the most critical RNC was planning, which, combined with the subcontracts RNC, accounted for 80% of the total causes of delay. Lastly, the authors interviewed the project management teams working on the two projects regarding the proposed methodology and received encouraging feedback.
In this project, it was considered necessary to capture the quantitative aspect of time risk management in construction projects. Therefore, the rest of the literature review focuses mainly on such techniques.
Jun and El-Rayes  described, step by step, the development of a new probabilistic scheduling method that incorporates features from several already established ones. The purpose of this novel probabilistic technique was to overcome significant limitations of widely used scheduling techniques. More specifically, the PERT method does not take into account the impact of non-critical paths on the overall completion probability (the “merge event bias”). Also, MCS, which uses stochastic processes to calculate the overall completion probability, was deemed time-consuming, owing to the number of activities and critical paths that appear in large-scale construction projects.
The Fast and Accurate Risk Evaluation (FARE) method is essentially a technique that reduces the number of network paths to be evaluated, by representing paths with a very high probability of completing on time by other paths. In the paper , the technique was applied to a real-life construction project, and the approximated project completion probability was measured against the results of a Monte Carlo simulation. The margin of error was 3%, while the computational time dropped by 94%. The FARE technique by Jun and El-Rayes is described in more detail in Section 3.6.
Another interesting framework in scheduling risk management is that presented by Schatteman et al. . They reported the development of an integrated uncertainty management methodology for planning construction projects, mixing qualitative and quantitative elements. The qualitative part relies on the identification of potential risks and their categorisation into activity groups; the occurrence and impact of the potential risks then have to be measured. The resulting data are aggregated into a proactive scheduling tool that measures the robustness of project management plans. Furthermore, project managers determine how activities are classified into groups, along with their duration estimates. Activity weights, which mainly depend on the impact of risks, also have to be calculated, to account for the marginal costs of rescheduling.
The objective of the proactive scheduling tool, called the Starting Time Criticality (STC) algorithm, was to deliver a schedule affected as little as possible by any disruptions that may occur during project execution. The tool was applied to a real-world construction project and compared with three commercial packages. The comparison was conducted in two steps using four baseline schedules: the first analysis used data from the planning phase of the project, whereas the second used post-completion data. In both cases, the STC algorithm was deemed superior to the commercial packages, as it demonstrated 90% and 72% better schedule performance than MS Project and the ProChain software respectively, along with minimal disruption of activities.
“The Program Evaluation and Review Technique (PERT) is a widely used scheduling technique with proven value in managing complex networks” (Premachandra, 1999 ). In 1959, the creators of the technique considered the Beta Distribution approximation the most suitable one for modelling activity duration estimates. However, many authors have criticised the use of the Beta and have tried to show that other distributions might perform better.
Hahn  advocated the need for a more flexible probabilistic distribution to be employed in PERT. The standard method uses deterministic activity time estimates that account for a constant variance. However, the conducted literature review suggested that this assumption limits the accurate calculation of project completion times. In particular, the Beta Distribution in PERT cannot capture low-likelihood, high-impact events, which appear in the tail of the distribution. Therefore, the author underlined the need to shift towards another continuous distribution, essentially a mixture of the Beta and the rectangular (or uniform).
Furthermore, Hahn stated that combining these probabilistic tools could model the behaviour of extreme events in the tails and incorporate the uncertainty in activity time estimates more accurately. Hence, the proposed heavy-tailed distribution uses a theta (θ) factor (0 ≤ θ ≤ 1) to combine the Beta with the rectangular. The author then pointed out two methods for eliciting the θ factor, after having derived the expectation, variance and median of the proposed PDF. The first method is to set the most likely value and then rate the likelihood of its occurrence on a scale of 1 to 10 or 1 to 100; dividing this rating by the highest value (10 or 100) provides the θ value.
Alternatively, the second method relies on a computer-aided graphical method produced by Kadane et al. . The proposed probabilistic tool was then compared against PERT on an actual project network. As θ declined, the variance increased and the weight of the distribution shifted towards the upper tail. Therefore, Hahn's methodology forms a solid basis for meticulously capturing activity time estimates and the consequent extreme events in project management.
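Such a Beta/rectangular mixture can be sketched as a sampler, assuming θ weights the Beta component and 1 − θ the rectangular one (the Beta shape parameters below are illustrative, not Hahn's):

```python
import random

def sample_hahn_mixture(a, b, theta, alpha=3.0, beta=3.0):
    """Draw one activity duration from a Beta/rectangular mixture on [a, b].

    With probability theta the draw comes from a scaled Beta(alpha, beta);
    otherwise it comes from the rectangular (uniform) distribution, whose
    flat tails give more weight to extreme durations.
    """
    if random.random() < theta:
        return a + (b - a) * random.betavariate(alpha, beta)
    return random.uniform(a, b)

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# As theta declines, the rectangular component gains weight,
# so the variance of the sampled durations increases.
random.seed(42)
low_theta = [sample_hahn_mixture(10, 20, theta=0.2) for _ in range(20000)]
high_theta = [sample_hahn_mixture(10, 20, theta=0.9) for _ in range(20000)]
print(variance(low_theta) > variance(high_theta))  # True
```

This reproduces only the variance behaviour noted above; Hahn's full derivation of the mixture's moments and the elicitation of θ are beyond this sketch.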
On a similar basis, Gładysz et al.  proposed a mathematical model in the field of project time risk management. The model is a direct modification of PERT; however, it was developed as a new quantitative technique, based on stochastic modelling, aimed at construction projects. The authors included a literature review on PERT and accounted for three broad categories of risk: risks at the macro level; construction market risks; and risks at the project level. The first two categories are frequently uncontrollable, whereas project-level risks can be controlled by the project management team and thus eliminated.
The authors proposed their model, which is based on the work done by Hahn , as described above. The main difference from Hahn's model is that the authors altered the Beta Distribution PDF used in PERT by taking into consideration not one, but several uniform distributions. A change percentage Sk is introduced to model the variation of the pessimistic duration, and a θk variable is used to capture the weighting of several project-level risks defined by the Analytic Hierarchy Process. After these variables are defined, the minimum project completion time is computed through a linear programming problem. Should the user of the methodology require a shorter completion date, several project-level risks can be eliminated for a certain cost.
Similarly, the authors developed a linear programming model that minimises the cost above, given the target completion time. The methodology was then applied to a medium-scale construction project to prove its usability and efficiency. The case study showed that the proposed quantitative model might help project managers in removing controllable risks for a given cost and produce a project schedule more robust and closer to the client’s requirements.
Mohan et al.  proposed another approximation for the activity duration distribution in the PERT method. They pointed out that project managers usually lack the insight to provide accurate three-point estimates for activity durations. As a result, extensive research has been conducted on the appropriateness of various probabilistic distributions and the effectiveness of two-point estimates. Building on that research, the authors presented a variation of the original PERT method, which featured the lognormal distribution with two-point estimates. The selection of either the optimistic (a) or the pessimistic (b) duration value depends on the amount of skewness the user wishes to achieve; the most likely value (m) applies in both cases. In other words, the selection depends on whether the overall project completion time falls towards the optimistic or the pessimistic side.
The researchers then experimented with the proposed alteration of PERT, using critical paths with various numbers of activities (from 10 to 100) and three ranges of skewness for the activity duration distributions. The analysis featured the original PERT method, the Premachandra  method (a variation of PERT), and the normal and lognormal distributions with two duration estimates (m and either a or b). These distributions were used to calculate the expected time and variance, which were employed as performance measures.
Moreover, average percentage errors for the mean and the 95th percentile were computed, so as to offer a direct comparison between the various cases. The analysis showed that the lognormal approximation with two duration estimates outperforms the other distributions on both performance measures when the activities are right-skewed, i.e. they lean towards the pessimistic side. Conversely, the left-skewed lognormal approximates the normal distribution very closely. Further research is required to validate the results of the analysis; nevertheless, the proposed alteration of PERT seemed relatively robust.
According to Hajdu and Bokor , the appropriate definition of the probabilistic distributions for activity durations is essential. They pointed out that erroneous assignment of activity estimates could affect the on-time project completion probability. The research therefore aimed to assess whether the appropriate selection of probabilistic distributions has a greater impact on the project completion date than inaccurate estimation of the activity durations. The authors provided a literature review of the various alternative distributions used in the PERT method, and ultimately selected the uniform, triangular and lognormal distributions to compare against the standard Beta used in PERT.
Apart from these distributions, the authors used three other cases of the Beta: one with a -10% difference in the initial activity time estimates; one with a +10% difference; and another with +15%. In this way, Hajdu and Bokor could model the effect of inaccurate time estimates. They relied on MCSs to capture the convergence of the probabilistic results and to produce the cumulative distribution functions (CDFs) of the overall project durations. Three hypothetical projects were analysed initially, all of which showed that the uniform, triangular and lognormal distributions fell within the ±10% range defined by the inaccuracy of the time estimates under the original Beta.
In addition, the authors used four real-world construction projects with complex networks, to account for increased uncertainty. The analysis again satisfied the initial hypothesis: inaccurate time estimates affect the project completion time more than the choice among these distributions does. Hence, accurate time estimates are essential to the accurate planning of construction projects. However, further research is needed to consolidate the findings of the paper.
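The kind of comparison Hajdu and Bokor performed can be sketched in miniature for a serial network. The three-point estimates below are hypothetical, not taken from their projects, and only the uniform and triangular cases are shown:

```python
import random

# Hypothetical (optimistic, most likely, pessimistic) estimates for a
# four-activity serial network.
ACTIVITIES = [(4, 5, 8), (2, 3, 6), (6, 8, 12), (3, 4, 7)]

def simulate_mean_duration(sampler, n=20000):
    """Mean simulated project duration when the activities run in series."""
    total = 0.0
    for _ in range(n):
        total += sum(sampler(a, m, b) for a, m, b in ACTIVITIES)
    return total / n

def uniform_sampler(a, m, b):
    return random.uniform(a, b)        # ignores the most likely value

def triangular_sampler(a, m, b):
    return random.triangular(a, b, m)  # mode at the most likely value

random.seed(0)
for name, sampler in [("uniform", uniform_sampler), ("triangular", triangular_sampler)]:
    print(f"{name}: mean duration {simulate_mean_duration(sampler):.2f}")
```

The two samplers yield different mean durations from the same three-point estimates, which is exactly the kind of distribution-choice effect the authors weighed against a ±10% error in the estimates themselves.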
Trietsch and Baker  presented an alternative proposal for upgrading the PERT method to fit the demands of the 21st century. At first, the writers addressed the various limitations of the PERT method: “the lack of statistical calibration; the independence assumption, i.e. no correlation between activity durations; and the reliance on deterministic estimates”. The paper aimed to propose a new scheduling framework for modern project management decision support systems (DSS) that would minimise these limitations.
The basis of this scheduling framework, called PERT 21, relies on the lognormal distribution (instead of the Beta), to capture the stochastic sequencing, activity durations and release dates more precisely. As such, users can account for the criticalities of project activities, as the lognormal seems to be the most appropriate distribution for modelling the pessimistic skewness of activity durations.
Furthermore, historical data should be elicited to estimate and calibrate the necessary statistical parameters. Thus, the variation between similar projects undertaken by the organisation using the proposed framework could be captured. By introducing linear association combined with appropriate use of a systematic-error random variable, the authors managed to fit historical data so as to provide PERT 21 practitioners with an objective perspective on activity correlations.
What is more, the authors relied on two graphical tools to enhance the simplicity of PERT 21. The first was the predictive Gantt chart, which graphically incorporates the probabilistic start and due dates at activity level through the CDF. The second was the flag diagram, which depicts the criticality and normalised delay cost of an activity. Lastly, the authors provided a qualitative comparison of PERT 21 against existing risk modules and the Critical Chain Project Management (CCPM) method. They suggested that the proposed methodology could be retrofitted, as a complement, into existing frameworks and therefore provide holistic and robust stochastic activity scheduling.
Upon completing the literature review, two main methodologies were selected to be implemented in the quantitative risk analysis that follows. Following the findings of Hajdu and Bokor , a distribution comparison will be conducted to test the validity of several types of probabilistic tools.
Furthermore, it was decided to employ the FARE technique in the main body of the analysis. This technique reduces the number of paths in a project network by taking two different criteria into consideration. Its details are discussed in Section 3.6.
At this point, note that both methodologies will be applied to two construction projects, as part of the quantitative risk analysis that follows in Chapter 4.
In this chapter, the methodology employed for the quantitative risk analysis that follows in Chapter 4 is explained. The PERT method and the capabilities of the @Risk® software are outlined. The need for a preliminary sensitivity analysis is underlined, to set the scene for the main part of the methodology. In Section 3.5, a thorough description of the distributions available in @Risk takes place, and the various types of probabilistic tools are pointed out. In the final section, a scheduling methodology selected during the literature review is employed. The purpose of this novel probabilistic method is to overcome significant claimed limitations encountered in PERT and MCS, as stated in Section 2.3.
In this section, the PERT method is outlined, based on the lectures for the module “Lifecycle Engineering” .
The PERT method analyses the various paths identified in project networks. The standard method uses the approximate Beta Distribution to account for the activity time estimates and thus calculates the overall project completion time.
The approximate Beta Distribution requires three user-defined input parameters, called activity time estimates, which are needed to compute the activity expected mean time (μ) and variance (σ²). The approximate formulas for the activity expected mean time and variance are:

μ = (a + 4m + b) / 6

σ² = ((b − a) / 6)²

where a is the optimistic activity time estimate, b is the pessimistic estimate, and m is the most likely estimate.
After having defined the activity time estimates, the user should perform the Critical Path Method to identify the critical path. By aggregating the expected durations of the activities, the critical path indicates the expected mean duration for the overall project network. Then, the user should calculate the corresponding standard deviation (σ) by simply adding the variances of the critical activities and taking the square root of the sum. This process assumes independence amongst these activities.
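The calculation above can be sketched as follows; the three-point estimates are hypothetical:

```python
import math

def pert_activity_stats(a, m, b):
    """PERT approximate mean and variance of one activity."""
    mean = (a + 4 * m + b) / 6
    variance = ((b - a) / 6) ** 2
    return mean, variance

# Hypothetical critical path: (optimistic, most likely, pessimistic) per activity.
critical_path = [(4, 5, 8), (6, 8, 12), (3, 4, 7)]

stats = [pert_activity_stats(a, m, b) for a, m, b in critical_path]
mu_path = sum(mean for mean, _ in stats)
# Adding variances (then taking the square root) assumes independent activities.
sigma_path = math.sqrt(sum(var for _, var in stats))

print(f"expected duration = {mu_path:.2f}, std dev = {sigma_path:.2f}")
# expected duration = 18.00, std dev = 1.37
```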
At this point, a note is appropriate. In strictly mathematical terms, the process of adding the variances of the critical-path activities could be deemed erroneous: the procedure ignores any covariance amongst these activities and simply sums their individual variances. In this way, uncertainty is introduced into the whole process. This procedure has been described as a limitation of the PERT method, as stated by Trietsch and Baker .
As a final step of the PERT method, the user has to determine the Z-value, as shown below:

Z = (T − μpath) / σpath

where T is the target completion time, μpath is the expected duration of the project, and σpath is the critical path standard deviation.
The Z-value is computed so that the user can calculate the overall completion probability using the cumulative Standard Normal Distribution table. Therefore, the overall completion probability derived by the PERT method is given as follows:

P(T) = Φ(Z)

where Φ is the cumulative distribution function of the Standard Normal Distribution and Z is the value fed into the Standard Normal table.
This overall completion probability is the resulting outcome of the PERT method.
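The final two steps can be sketched together, using math.erf to express the Standard Normal CDF; the path statistics below are hypothetical:

```python
import math

def pert_completion_probability(target, mu_path, sigma_path):
    """P(project completes by `target`) under the PERT normality assumption."""
    z = (target - mu_path) / sigma_path
    # Phi(z) written in terms of the error function, so only the stdlib is needed.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical critical path: expected duration 18 days, std dev 1.37 days.
print(f"{pert_completion_probability(20, 18.0, 1.37):.3f}")  # 0.928
```

In other words, given a 20-day target against an 18-day expected duration, PERT reports roughly a 93% chance of on-time completion, ignoring any near-critical paths.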
The importance of the Monte Carlo Simulation in the field of probabilistic project network analysis was identified from the very beginning. As stated in Section 1.1, the MCS is a simulation technique that can be used to compute the overall completion probability, after having simulated numerous times the potential outcomes of the specified distributions assigned to project activities. Hence, the importance of the MCS stems from the fact that it can account for all the scenarios and combine them into a single outcome.
@Risk is a software package that incorporates all the features needed to produce complete MCSs for projects. It is an add-in for Microsoft Excel and relies on project networks created in MS Project. The interface of the software may seem unfamiliar at first and difficult to get used to. However, Palisade offers extensive instructions and tutorials, which facilitate the entire process. A summary of how to use the software is given below.
Practitioners of @Risk are required to import project networks from MS Project and link them directly with a new Excel spreadsheet. The software prompts the user to perform a “schedule audit”, to catch any unlinked activities or gaps in the network before starting the modelling process. Also, the number of iterations has to be set. More specifically, as the number of iterations grows, the results converge to a stable value, and thus sampling error is minimised.
In the second stage of the modelling process, the user has to define the overall uncertainty of the network. Hence, probabilistic distributions from a wide range can be selected and assigned to virtually any input variable of the imported project network. In this particular project, which deals with time risk management, the input variables are the activity time estimates of the project. Furthermore, aside from the numerous types of distributions, the software offers a broad selection of built-in parameters as well. In other words, the user can alter the parameters of the probabilistic distributions to model the response of the activities more accurately. Thus, @Risk gives its users extensive flexibility in modelling project networks precisely.
After having defined all the input variables of the @Risk model, the user should be ready to run the simulation. As stated by Palisade, “the software selects random values from the specified distributions of the input variables, places them in the created model and each time keeps the generated result”. As the @Risk software performs MCSs, it recalculates the model iteratively, meaning as many times as the user-defined iteration number. Therefore, the resulting outcome of the simulation is a close approximation of the overall completion probability. Thus, the user has all the relevant output data to interpret the viability of the schedule and potentially move on to making changes to the project network.
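The sampling loop such a simulation performs can be sketched as follows. This is not @Risk's internal implementation: the two-path network, the Triangular sampling and all numbers are illustrative, not the case-study data:

```python
import random

# Illustrative network: two parallel paths, each a list of (a, m, b) estimates
paths = [
    [(2, 3, 6), (4, 5, 9)],   # path 1
    [(3, 4, 7), (2, 4, 8)],   # path 2
]
target, iterations = 13, 10_000
random.seed(42)

hits = 0
for _ in range(iterations):
    # Sample every activity from a Triangular distribution and take the
    # longest path as the project duration for this iteration.
    duration = max(sum(random.triangular(a, b, m) for a, m, b in acts)
                   for acts in paths)
    hits += duration <= target

print(f"P(t <= {target}) ~ {hits / iterations:.3f}")
```

Note that `random.triangular` takes its arguments as (low, high, mode), hence the reordering when unpacking the (a, m, b) tuples.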
At this point, it is appropriate to set the scene for what follows. The remaining sections of the Methodology chapter reflect the various techniques employed in the @Risk simulations. As previously stated in Section 1.3, two residential projects were employed, whose networks and activity time estimates were constructed heuristically from the author's work experience. The details of these projects are outlined in Section 4.2. Therefore, everything that follows is applied to both project networks.
As described in Section 3.3, @Risk runs simulations iteratively. This feature allows the software to account for all the possible outcomes numerous times. As such, each analysis converges towards a prevailing overall result. The predefined options for the number of iterations are 100, 1,000, 5,000, 10,000, 50,000 and 100,000, though the user may select virtually any number of iterations for the MCSs.
With that in mind, the author decided to conduct a sensitivity analysis on both construction projects, to determine whether any difference can be attributed to the user-specified number of iterations. Hence, a comparison is conducted among the results generated by 1,000, 10,000 and 100,000 iterations. That way, an informed decision can be made regarding the most time-efficient number to be used in the core part of the analysis.
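The convergence check underlying such a sensitivity analysis can be sketched as follows (a single illustrative Triangular activity is used, not the project data):

```python
import random

def estimate(iterations, seed=1):
    """Monte Carlo estimate of P(t <= 4) for one Triangular(2, 3, 6) activity."""
    rng = random.Random(seed)
    samples = [rng.triangular(2, 6, 3) for _ in range(iterations)]
    return sum(s <= 4 for s in samples) / iterations

# The exact probability is 2/3; the estimates stabilise as n grows
for n in (1_000, 10_000, 100_000):
    print(n, round(estimate(n), 4))
```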
As presented in the literature review chapter, two views were identified regarding the selection of activity duration distributions and the role they play in construction project networks. On the one hand, the view supported by Mohan et al., Hahn, and Gładysz et al. advocates that the approximate Beta Distribution used in the standard PERT method does not account precisely for the inherent uncertainty of construction projects, as it produces optimistic results that lie far from reality.
On the other hand, Hajdu and Bokor  conducted an extensive analysis which shows that the only thing that matters, when it comes to precise modelling of construction project networks, is the accuracy of the activity time estimates fed into the model. Thus, they propose the view that the various types of activity duration distributions used in project networks play a minor role in the overall precision of the model. To strengthen their view, Hajdu and Bokor performed a direct comparison amongst six duration distributions. The resulting outcome of the analysis shows that indeed the six activity duration distributions produce different results. However, they suggest that the scientific community should shift towards the accurate modelling of activity time estimates, as this may affect the resulting outcomes considerably.
Given the insights from Section 2.4, it was decided to move towards a direct comparison amongst four different activity duration distributions applied to two real-world construction projects. In the sections that follow, the probabilistic distributions and the reasons for which they were specifically chosen for the analysis are presented.
As it has already been discussed, the PERT method uses the approximate Beta Distribution for the activity durations. In other words, it uses the formulas (1) and (2), as shown in Section 3.2. @Risk models this approximate Beta Distribution with the so-called Pert Distribution. The latter requires the familiar three-point estimates (optimistic, most likely and pessimistic) as input variables, to produce the PDF.
A key characteristic of the Pert (Beta) Distribution is that it follows a somewhat bell-shaped curve. Consequently, it produces a high probability measure around the most likely estimate, while having relatively low probability measures towards the extreme events (optimistic and pessimistic). In Fig. 3.1 below, a symmetrical activity example is shown.
Another distribution widely used by project managers that takes three-point estimates in @Risk is the Triangular Distribution. The following graph (Fig. 3.2) illustrates the PDF with the same input data as Pert's graph above.
The prevalent characteristics of the Triangular Distribution are, firstly, that it produces a slightly higher probability measure around the most likely activity time estimate compared to Pert. Secondly, and most importantly, it has fatter tails towards the extreme events, meaning that it takes the optimistic and pessimistic time estimates into consideration to a greater extent than Pert. Moreover, it can be seen directly in the illustrated examples that the Triangular Distribution yields a higher standard deviation value than Pert. Therefore, it can be deduced that the Triangular Distribution may produce more realistic outcomes than Pert does, as it models the behaviour of extreme events more precisely, as proposed by Schatteman et al.
The Uniform Distribution differs from the other three because it does not require the same three-point estimates to be modelled. Instead, @Risk requires a lower and an upper limit to define the interval of the Uniform Distribution. Hence, the optimistic and pessimistic activity time estimates can be used to represent these limits. An example of the Uniform Distribution PDF can be seen below in Fig. 3.3.
The most important characteristic of the Uniform Distribution is that all the events within the specified interval have the same occurrence probability. In other words, the Uniform Distribution can be used to model events that are equally likely to occur within the given duration interval. Hence, the Uniform was selected because it could potentially produce a more pessimistic model compared to the other three distributions.
Finally, the Lognormal Distribution was selected as the final probabilistic input to be analysed. More specifically, numerous researchers, such as Trietsch and Baker and Mohan et al., have indicated that the Lognormal is an appropriate distribution for probabilistic scheduling of construction projects.
Similarly to the Uniform, the Lognormal Distribution requires only two parameters to be modelled. In this case, the mean (μ) and the standard deviation (σ) are needed to produce the PDF in @Risk. For comparative reasons, it was decided to feed the mean and standard deviation generated by the PERT method into the interface of @Risk. Therefore, the resulting Lognormal PDF, shown in Fig. 3.4 below, would follow PERT's estimates closely and would be easy to model.
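A sketch of how a Lognormal matching a given mean and standard deviation can be sampled: the moment-matching step converts the target moments into the parameters of the underlying Normal distribution (the function and the example numbers are the author's illustration, not @Risk's interface):

```python
import math
import random

def lognormal_sampler(mean, sd, seed=7):
    """Yield Lognormal samples whose distribution has the given mean and sd."""
    # Moment matching: parameters of the underlying Normal distribution
    sigma2 = math.log(1 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2
    rng = random.Random(seed)
    while True:
        yield rng.lognormvariate(mu, math.sqrt(sigma2))

gen = lognormal_sampler(2.333, 0.713)        # e.g. PERT-derived mean and sd
samples = [next(gen) for _ in range(50_000)]
print(round(sum(samples) / len(samples), 2)) # sample mean close to 2.33
```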
After having declared the four types of probabilistic distributions to be used in the analysis that follows, it was decided to set out a direct comparison amongst them, to draw meaningful conclusions.
In Fig. 3.5 on page 25, the four symmetrical distributions are illustrated together using an activity duration interval between 1 and 3 days. It is apparent that the Lognormal has the lowest standard deviation value, whereas the Uniform has the highest amongst the four. Therefore, it can be inferred that the former would produce more optimistic results, the latter more pessimistic ones, and that Pert and Triangular would fall in between the two.
In Fig. 3.6 on page 26, a similar fitting of the four distributions is depicted, though the pessimistic time estimate is set equal to 5 days. Therefore, the Pert, Triangular and Lognormal distributions take an asymmetrical shape and their behaviour becomes apparent. Pert has a mean value of 2.333 days, whereas the Triangular has a slightly higher mean of 2.667 days, together with a considerably higher standard deviation of 0.8498 compared to Pert's 0.7127. The Uniform Distribution produces the largest mean and standard deviation, with values of 3 and 1.1547 days respectively.
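These figures can be reproduced from the closed-form moments of the three distributions; the Pert variance is taken here as (μ − a)(b − μ)/7, the form commonly attributed to @Risk's Pert distribution (an assumption in this sketch):

```python
import math

a, m, b = 1, 2, 5  # asymmetrical activity estimates (days)

pert_mean = (a + 4 * m + b) / 6
pert_sd = math.sqrt((pert_mean - a) * (b - pert_mean) / 7)

tri_mean = (a + m + b) / 3
tri_sd = math.sqrt((a*a + m*m + b*b - a*m - a*b - m*b) / 18)

uni_mean = (a + b) / 2
uni_sd = (b - a) / math.sqrt(12)

print(round(pert_mean, 3), round(pert_sd, 4))  # 2.333 0.7127
print(round(tri_mean, 3), round(tri_sd, 4))    # 2.667 0.8498
print(round(uni_mean, 3), round(uni_sd, 4))    # 3.0 1.1547
```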
The values given above, seen in the two figures that follow, can be justified by the shape of the generated probabilistic curves. The Pert Distribution rises steeply from the optimistic time estimate, peaks around the most likely one, and falls significantly towards the pessimistic area. In contrast, the Triangular Distribution follows a smoother incline, has a lower peak in the most likely area, and has a fatter tail towards its end. Therefore, it can be inferred that the Triangular would produce more stochastically pessimistic results than Pert.
Aside from the PERT method and the Monte Carlo Simulation, numerous novel scheduling tools have emerged over the years in the scientific community of project risk analysis. As identified in Section 2.3, a particularly interesting probabilistic scheduling methodology is the one described by Jun and El-Rayes, called the Fast and Accurate Risk Evaluation (FARE) technique.
The FARE Technique is a simplification of the usual network analysis, in which large project networks are reduced by removing paths that can be represented by others. Essentially, this is achieved by two criteria. The first criterion is the Upper Probability Bound (UP): paths that have very high completion probabilities, greater than the UP value, can be represented by other paths in the network. The second criterion, the Upper Correlation Bound (UC), corresponds to the inherent correlation amongst project paths. More specifically, in complex projects where paths are dependent on each other due to shared activities, highly correlated paths can be neglected. Again, paths that generate a correlation value higher than the user-defined UC threshold should be removed from the project network. Therefore, the FARE technique is able to reduce the complexity of project networks by using these two criteria.
As far as the creation of the FARE technique is concerned, the authors built on three scheduling components: the standard PERT method (described in Section 3.2); the multivariate normal integral method, which is the basis of the MCS; and an approximation method for large-scale networks. The latter deals with the identification of the representative paths, in other words, those paths that are used to represent the ones to be removed. This approximation method is computer-based and some of its key points are illustrated in Appendix A. However, a brief description of the FARE technique's algorithm is outlined below.
The technique consists of three main phases, each comprising several steps. In “phase 1”, the user specifies the values for the two criteria, UP and UC. Also, redundant links that create parallel paths with a higher probability of completion should be deleted from the project network. Next, the user moves on to “phase 2”, which deals with the removal of high-probability paths. This phase consists of three steps, as follows.
In the first step, the user converts the UP into an equivalent lower bound for the mean path duration (LM). In the second step, the longest path duration search (LPDS) algorithm is employed to determine the longest mean path duration (LDn) and the largest variance (LV) for every resulting combination of paths. Note that these two values correspond to the entire project network and are fed into the following step.
The third and final step analyses the representative paths by relying on the fast representative path search (FRPS) algorithm. The latter guides the user through the analysis process to determine which paths should be analysed or removed directly. The LM value is used as a threshold and is compared against the expected mean duration of the path at hand. If the former is higher than the latter, the path is removed from the project network, and the node at which the path starts off should not be analysed again. Otherwise, the path is kept as a representative path. The flowcharts of the two algorithms are given in Appendix A.
As the final, third phase of the method, the user proceeds with the removal of highly correlated paths. Correlations amongst paths are computed first, to determine which paths can be represented and thus removed from the network. The authors relied on the original method devised by Ang et al. with a small but significant alteration: they suggested that the user should sort the resulting correlation table in ascending order of the path completion probability. That way, the user can select the most appropriate representative paths.
Hence, users of FARE can produce simplified project networks and avoid potentially lengthy simulations, without compromising on accuracy compared to the MCS results. For this reason, and due to the non-availability of the appropriate software, it was decided in this dissertation to implement an approximation of the FARE Technique (FARETA), in order to test the reliability of the proposed method.
Based on the methodology illustrated by Jun and El-Rayes, the author proceeded to apply a simplified approximation technique to achieve the desired network path reduction. The methodology at hand was applied to both construction projects. Further, the results of FARETA are compared against those generated by a full MCS using the Pert Distribution.
The main limitations that led to the simplification were, firstly, the non-availability of an appropriate scheduling interface to simulate the original technique and, secondly, the complexity of the two newly developed algorithms. MS Project does not enable its users to isolate individual network paths. Consequently, it was not feasible to apply the LPDS and FRPS algorithms directly, and their function had to be replicated manually. Hence, the approximation was deemed essential so that the analysis could be applied effectively to both construction project networks. The approximation is illustrated in the following eight steps.
- Step 1: Setting of the upper probability bound (UP) value.
- Step 2: Setting of the upper correlation bound (UC) value and calculation of the path correlations using the formula below:

ρk,l = Covk,l / (σk · σl)

where ρk,l is the correlation between paths k and l, Covk,l is the covariance between the two paths (the sum of the variances of the activities n shared by both paths), and σk, σl are the standard deviations of paths k and l.
- Step 3: Application of PERT analysis (as illustrated in Section 3.2) on every single path to determine the mean, standard deviation and completion probability (substitute for the LPDS algorithm).
- Step 4: Formation of a table (as shown in Table 3.1 below) that includes the mean, standard deviation, probability of completion and correlations amongst paths, sorted in ascending order of the probability measure.
| Mean (μ) | St. Dev. (σ) | Probability (%) | ρ1,n | ρi,n | ρk,n |
- Step 5: Indicate the representative paths of the project network by comparing the completion probability with the UP value and the resulting correlation value with the specified UC. Remove any paths whose P(t ≤ T) is greater than UP (substitute for the FRPS algorithm).
- Step 6: Remove any paths for which ρ is higher than UC, as they can be represented by other, correlated paths with a lower completion probability.
- Step 7: Update the MS Project network and run an MCS using @Risk, to calculate the overall completion probability, using the Pert Distribution.
- Step 8: Compare the results with those produced by the full MCS using the original project network and specify the margin of error.
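The path-screening part of the steps above (Steps 3, 5 and 6) can be sketched as follows. The network data, the UP/UC values and the tie-breaking rule are illustrative assumptions; path covariance is taken as the summed variances of the shared activities, following the Ang et al. approach cited above:

```python
import math

# Illustrative data: activity id -> (mean, variance); path name -> activity ids
activities = {"A": (3.0, 0.25), "B": (5.0, 0.44), "C": (4.0, 0.36),
              "D": (6.0, 0.69), "E": (2.0, 0.11)}
paths = {"P1": {"A", "B", "D"}, "P2": {"A", "C", "D"}, "P3": {"C", "E"}}
T, UP, UC = 16.0, 0.999, 0.9  # target duration and screening thresholds

def phi(z):
    """Standard Normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Step 3: PERT mean, standard deviation and completion probability per path
stats = {}
for name, acts in paths.items():
    mean = sum(activities[a][0] for a in acts)
    sd = math.sqrt(sum(activities[a][1] for a in acts))
    stats[name] = (mean, sd, phi((T - mean) / sd))

# Step 5: drop paths that are almost certain to finish on time (P > UP)
kept = {n for n, (_, _, p) in stats.items() if p <= UP}

# Step 6: of any highly correlated pair, drop the higher-probability path
for k in sorted(kept):
    for l in sorted(kept):
        if k < l and k in kept and l in kept:
            shared = paths[k] & paths[l]
            cov = sum(activities[a][1] for a in shared)
            rho = cov / (stats[k][1] * stats[l][1])
            if rho > UC:
                kept.discard(k if stats[k][2] > stats[l][2] else l)

print(sorted(kept))  # ['P1', 'P2'] — P3 screened out by the UP criterion
```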
At this point, it is suggested that the eight steps shown above approximate the original FARE Technique closely enough, so that any distortion of the resulting outcomes should be avoided.
As a last step of the FARETA analysis, it was decided to perform a sensitivity analysis between 10,000 and 100,000 iterations, to determine whether there is a significant change in the simulation duration. That way, the objective initially proposed by Jun and El-Rayes regarding the reduction of simulation duration could be validated.
In Chapter 4, the results of the quantitative risk analysis conducted on the two construction projects are presented in detail.