Quantitative Project Risk Analysis for Construction Project: Literature Review
Chapter 2 – Literature Review
2.1 Project Risk Management
According to Ward and Chapman [7], current risk management processes maintain only a limited focus on uncertainty management. More specifically, they pointed out that the term “risk” has become directly associated with threatening events, so that its opportunity side is neglected. Such a notion, however, restricts the scope of effective project risk analysis. As a result, the authors proposed a shift towards holistic uncertainty management that would embrace already established risk modules and techniques. Ward and Chapman presented the views of the two leading institutions in the field, the Project Management Institute (PMI) [5] and the Association for Project Management (APM) [8], on risk management processes, and argued that these processes need to be transformed into uncertainty management so that managers can capture the opportunities arising from uncertain events more accurately.
Consequently, the authors structured the scope of uncertainty into five categories, spanning from the variability and basis of time estimates to the uncertainty inherent in the conceptual definition, design and logistics phases, and the lack of certainty between project stakeholders. They considered uncertainty identification an essential step in managing any particular source of uncertain events, and stressed that the five categories of uncertainty have to be tackled individually in order to produce a robust plan for managing such events. They finally proposed that the shift of emphasis from one-dimensional risk management to a multi-dimensional uncertainty-handling process will only occur if the root causes of uncertain events and the variability of estimates are captured through effective quantitative techniques.
2.2 Qualitative Risk Management Techniques
Carr and Tah [9] introduced a computer-aided methodology for qualitative risk assessment. The proposed method was built on cause and effect diagrams, which capture the relationships between risk factors and risks, along with their impact on construction projects. The assessment and analysis of identified risks were conducted using fuzzy set theory so that their impact could be expressed mathematically. As the first step of the methodology, the authors used the Hierarchical Risk Breakdown Structure (HRBS), which is based on cause and effect diagrams, to capture the interdependencies between the various risk factors and the corresponding risks. They showed an example HRBS cause and effect diagram, which breaks down the risks associated with a particular activity into the risk factors and their consequences (risks).
The authors then developed their fuzzy risk analysis model, which handles three qualitative risk measures: the occurrence, severity and corresponding effect of a risk factor. These measures take linguistic values ranging from low to high and affect four project performance factors: time, cost, quality and safety. The authors worked through the fuzzy set methodology for a single risk factor and then showed how to apply it across all project activities (a minimal sketch of this kind of fuzzy aggregation is given at the end of this discussion). Lastly, they advocated that the entire qualitative risk process should consist of the following five phases:
- Identification phase, which is the most important one, as an unidentified risk cannot be analysed;
- Assessment phase, in which the risk assessment is conducted and the values of the qualitative risk measures are defined;
- Analysis phase, in which the fuzzy set theory methodology is applied;
- Handling phase, which includes the risk response strategies;
- Monitoring phase, in which the practitioner monitors the status of each risk and any relevant changes.
Finally, the authors highlighted the importance of such methodologies for the construction industry, by underlining the aim of the proposed method, which is to facilitate the entire risk analysis process.
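To make the flavour of such a fuzzy assessment concrete, the sketch below represents linguistic risk measures as triangular fuzzy numbers and combines occurrence and severity into a single crisp score. The term set, support values and aggregation rule are illustrative assumptions and are not Carr and Tah's exact model [9].

```python
# Minimal sketch of a fuzzy risk assessment step.  The linguistic term set,
# its triangular supports and the aggregation rule are illustrative only;
# they are not taken from Carr and Tah [9].
TERMS = {
    "low":    (0.0, 0.1, 0.3),   # (lower, modal, upper) values on a 0-1 scale
    "medium": (0.3, 0.5, 0.7),
    "high":   (0.7, 0.9, 1.0),
}

def fuzzy_multiply(x, y):
    """Approximate product of two triangular fuzzy numbers (vertex method)."""
    return tuple(a * b for a, b in zip(x, y))

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number, giving a crisp score."""
    return sum(tfn) / 3

# A risk factor assessed with 'medium' occurrence and 'high' severity.
risk_magnitude = fuzzy_multiply(TERMS["medium"], TERMS["high"])
print("fuzzy risk magnitude:", risk_magnitude)               # (0.21, 0.45, 0.7)
print("crisp score:", round(defuzzify(risk_magnitude), 3))   # ~0.453
```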
De Marco and Thaheem [10] presented a detailed review of the current tools and techniques used in Project Risk Management (PRM). According to the authors, there are four different classes of technique in Project Risk Analysis (PRA):
- Qualitative, which includes tools that describe risks and rate their impact on a 1 to 5 scale;
- Semi-quantitative, in which a more detailed scaling of risk effect takes place;
- Quantitative methods, in which the occurrence probability and risk impact are quantified into monetary terms;
- Simulation techniques, conducted in complex, large-scale construction projects with deep uncertainty.
The concept of the triple constraint, set out in [6], was expanded with the introduction of four project drivers, which were represented on a radar diagram. These drivers consist of:
- The level of challenges arising from the project execution;
- The project management responsibility, which places particular emphasis on scope;
- The focus, i.e. the complexity and attention required throughout the project life-cycle;
- The maturity of the organisation undertaking construction projects.
The authors then presented their methodology for selecting the appropriate tools and techniques for effective PRA, after scoring each project driver on a 1 to 4 scale. This methodology was applied to two construction projects, yielding two different radar charts. Relying on this graphical technique, the authors, together with each project manager, were able to select the most appropriate PRA method, and both project managers found the methodology useful and efficient for selecting the most suitable risk analysis techniques (an illustrative radar-chart sketch follows).
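The radar-chart step of this selection process can be illustrated with a short script. The driver labels follow [10], but the scores, plot styling and any mapping to a recommended technique are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative driver scores on the 1-4 scale described above; the values
# and the plot styling are hypothetical, not taken from [10].
drivers = ["Challenge", "PM responsibility", "Focus", "Maturity"]
scores = [3, 2, 4, 2]

angles = np.linspace(0, 2 * np.pi, len(drivers), endpoint=False).tolist()
angles += angles[:1]          # repeat the first angle to close the polygon
values = scores + scores[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(drivers)
ax.set_ylim(0, 4)
ax.set_title("Project driver radar chart (illustrative)")
plt.show()
```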
According to González et al. [11], the relationship between causes of deviation from the schedule baseline and the subsequent impact on time performance is critical. The authors suggested that delay analysis in construction projects should be performed at both activity and project level, to obtain a more robust view of overall time performance and schedule baseline slippage. They proposed a mixed qualitative and quantitative methodology to capture the relationship between causes of delay and project time performance, introducing Reasons for Non-compliance (RNC) as a qualitative measure that captures the causes of delay of critical and non-critical activities.
A flowchart was used to illustrate the algorithm of the proposed quantitative delay analysis methodology, which was developed to measure time performance efficiently on a weekly basis. The first step of the algorithm is to select the critical and non-critical activities to be analysed. In the next two steps, the actual and as-planned cumulative weekly progress percentages are determined. If the actual value matches the as-planned one, there is no need for further analysis.
On the other hand, if there is activity time slippage, the quantitative measure called delay index is calculated, to capture the actual deviation from schedule. Finally, the weighted averages at the level of critical activities and project level (global) are computed, so that the relationship between the critical and global RNCs is taken into consideration.
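As a rough illustration of this step, the sketch below computes a weekly delay index from planned and actual cumulative progress and aggregates it with activity weights. The specific index formula and the weights are assumptions for illustration, not necessarily those defined in [11].

```python
def weekly_delay_index(planned_pct: float, actual_pct: float) -> float:
    """Illustrative delay index: relative shortfall of actual cumulative
    progress against the as-planned baseline for one week.  This is one
    plausible definition, not necessarily the exact formula in [11]."""
    if actual_pct >= planned_pct:
        return 0.0                        # on or ahead of schedule
    return (planned_pct - actual_pct) / planned_pct

def weighted_delay(indices, weights):
    """Weighted average delay index over a set of (e.g. critical) activities."""
    return sum(i * w for i, w in zip(indices, weights)) / sum(weights)

# Two critical activities tracked for one week (cumulative percent complete).
indices = [weekly_delay_index(40.0, 32.0), weekly_delay_index(60.0, 60.0)]
print(weighted_delay(indices, weights=[0.7, 0.3]))   # 0.14
```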
The authors then tested the proposed methodology on two large-scale residential projects in Chile. The results of the case studies showed that the most critical RNC was planning; combined with the subcontracts RNC, these two accounted for 80% of the total causes of delay. Lastly, the authors interviewed the project management teams working on the two projects about the proposed methodology and received encouraging feedback.
2.3 Quantitative Risk Management Methodologies
In this project, it was considered necessary to capture the quantitative aspect of time risk management in construction projects. Therefore, the rest of the literature review focuses mainly on such techniques.
Jun and El-Rayes [4] described, step by step, the development of a new probabilistic scheduling method that incorporates features from several established ones. The purpose of this technique was to overcome significant limitations of widely used scheduling methods. More specifically, the PERT method does not take into account the impact of non-critical paths on the overall completion probability (the “merge event bias”). Monte Carlo simulation (MCS), which uses stochastic sampling to calculate the overall completion probability, was in turn deemed time-consuming owing to the number of activities and critical paths that appear in large-scale construction projects.
The Fast and Accurate Risk Evaluation (FARE) method is essentially a technique that reduces the number of network paths to be evaluated by letting paths with a very high probability of on-time completion be represented by other paths. In [4], the technique was applied to a real-life construction project, and the approximated probability of project completion was measured against the results of a Monte Carlo simulation: the margin of error was 3%, while the computational time dropped by 94%. The FARE technique of Jun and El-Rayes is described in more detail in Section 3.6.
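The merge event bias that FARE addresses is easy to reproduce with a small Monte Carlo experiment: a single-path (PERT-style) calculation overstates the probability of on-time completion once several near-critical paths merge. The toy network below is entirely illustrative and is not the FARE procedure itself (see Section 3.6).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Three parallel near-critical paths, each ~Normal(100, 10) days (illustrative).
N_PATHS, MEAN, SIGMA, TARGET, N_SIM = 3, 100.0, 10.0, 110.0, 100_000

paths = rng.normal(MEAN, SIGMA, size=(N_SIM, N_PATHS))
project_duration = paths.max(axis=1)   # the project finishes with its slowest path

p_single_path = norm.cdf(TARGET, loc=MEAN, scale=SIGMA)   # ignores merging paths
p_monte_carlo = (project_duration <= TARGET).mean()

print(f"single-path estimate : {p_single_path:.3f}")   # ~0.841
print(f"Monte Carlo estimate : {p_monte_carlo:.3f}")   # ~0.841**3, about 0.60
```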
Another interesting framework in scheduling risk management is that presented by Schatteman et al. [12], who reported the development of an integrated uncertainty management methodology for planning construction projects. The methodology is both qualitative and quantitative. The qualitative part relies on identifying potential risks and categorising them into activity groups; the occurrence and impact of potential risks then have to be measured. The resulting data are fed into a proactive scheduling tool that measures the robustness of project management plans. Furthermore, project managers should determine how activities are classified into groups, along with their duration estimates, and activity weights, which depend mainly on the impact of risks, have to be calculated to account for rescheduling marginal costs.
The objective of the proactive scheduling tool, the Starting Time Criticality (STC) algorithm, was to deliver a schedule affected as little as possible by any disruptions that may occur during project execution. This scheduling tool was applied to a real-world construction project and compared with three different commercial packages. The comparison was conducted in two steps using four baseline schedules: the first analysis used data from the planning phase of the project, whereas the second used post-completion data. In both cases, the STC algorithm was deemed superior to the commercial packages, demonstrating 90% and 72% better schedule performance than MS Project and ProChain respectively, along with minimal disruption of activities.
2.4 Insights of the PERT Method
“The Program Evaluation and Review Technique (PERT) is a widely used scheduling technique with proven value in managing complex networks” (Premachandra [13]). In 1959, the creators of the technique considered the Beta distribution the most suitable approximation for modelling activity duration estimates. However, many authors have criticised the use of the Beta and have argued that other distributions may perform better.
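For reference, the classic Beta approximation reduces each activity's three-point estimate (optimistic a, most likely m, pessimistic b) to a mean of (a + 4m + b)/6 and a standard deviation of (b − a)/6, as in the short helper below (the example durations are arbitrary).

```python
def pert_estimate(a: float, m: float, b: float) -> tuple[float, float]:
    """Classic PERT (Beta-approximation) mean and variance of an activity
    duration from optimistic (a), most likely (m) and pessimistic (b) values."""
    mean = (a + 4 * m + b) / 6
    variance = ((b - a) / 6) ** 2
    return mean, variance

# Example: an activity estimated at 8 / 10 / 16 days.
mean, var = pert_estimate(8, 10, 16)
print(f"expected duration = {mean:.2f} days, variance = {var:.2f}")  # 10.67, 1.78
```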
Hahn [2] advocated a more flexible probability distribution for use in PERT. The standard method uses activity time estimates that imply a constant variance; however, the literature review conducted by the author suggested that this assumption limits the accurate calculation of project completion times. In particular, the Beta distribution in PERT cannot capture low-likelihood, high-impact events, which appear in the tail of the distribution. The author therefore underlined the need to shift towards another continuous distribution, essentially a mixture of the Beta and the rectangular (or uniform).
Furthermore, Hahn stated that the combination of these two distributions could model the behaviour of extreme events in the tails and incorporate the uncertainty in activity time estimates more accurately. The proposed heavy-tailed distribution uses a factor θ (0 ≤ θ ≤ 1) to combine the Beta with the rectangular. The author then pointed out two methods for eliciting θ, after having derived the expectation, variance and median of the proposed PDF. The first method is to set the most likely value and rate the likelihood of it occurring on a scale of 1 to 10 or 1 to 100; dividing this rating by the maximum of the scale (10 or 100) gives the value of θ.
Alternatively, the second method relies on a computer-aided graphical method produced by Kadane et al. [14]. The proposed probabilistic tool was then compared against PERT on an actual project network. As θ declined, the variance increased and the weight of the distribution shifted towards the upper tail. Hahn's methodology therefore forms a solid basis for capturing activity time estimates, and the extreme events that follow from them, more precisely.
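A simple way to see the effect of θ is to sample the mixture directly, giving weight θ to the Beta component and 1 − θ to the uniform, as in the sketch below. The Beta shape parameters here are assumed purely for illustration; Hahn [2] derives the density and its moments analytically from the PERT estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_beta_rectangular(a, b, theta, alpha=2.0, beta_shape=4.0, size=100_000):
    """Sample from an illustrative Beta-rectangular mixture on [a, b]:
    with probability theta draw from a scaled Beta, otherwise from a uniform.
    The Beta shape parameters are assumptions, not Hahn's derivation."""
    use_beta = rng.random(size) < theta
    beta_draws = a + (b - a) * rng.beta(alpha, beta_shape, size)
    unif_draws = rng.uniform(a, b, size)
    return np.where(use_beta, beta_draws, unif_draws)

for theta in (1.0, 0.6, 0.2):
    x = sample_beta_rectangular(a=5.0, b=20.0, theta=theta)
    print(f"theta={theta:.1f}  mean={x.mean():5.2f}  variance={x.var():6.2f}")
# As theta declines, more weight goes to the uniform and the variance grows.
```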
On a similar basis, Gładysz et al. [15] proposed a mathematical model in the field of project time risk management. The model is a direct modification of PERT, developed as a new quantitative technique based on stochastic modelling and aimed at construction projects. The authors included a literature review on PERT and accounted for three broad categories of risk: risks at the macro level; construction market risks; and risks at the project level. The first two categories are frequently uncontrollable, whereas project-level risks can be controlled by the project management team and thus eliminated.
The authors proposed their model, which builds on the work of Hahn [2] described above. The main difference from Hahn's model is that the authors altered the Beta distribution used in PERT by taking into consideration not one but several uniform distributions. A change percentage Sk is introduced to model the variation of the pessimistic duration, and a variable θk captures the weighting of the project-level risks defined through the Analytic Hierarchy Process. Once these variables are defined, the minimum project completion time is computed by solving a linear programming problem. Should the user of the methodology require a shorter completion date, selected project-level risks can be eliminated at a certain cost.
Similarly, the authors developed a linear programming model that minimises this cost for a given target completion time. The methodology was then applied to a medium-scale construction project to demonstrate its usability and efficiency. The case study showed that the proposed quantitative model may help project managers remove controllable risks at a given cost and produce a project schedule that is more robust and closer to the client's requirements.
Mohan et al. [1] proposed another approximation for the activity duration distribution in the PERT method. They pointed out that project managers often lack the insight needed to provide accurate three-point estimates for activity durations; as a result, extensive research has been conducted on the appropriateness of various probability distributions and the effectiveness of two-point estimates. Based on this research, the authors presented a variation of the original PERT method that uses the lognormal distribution with two-point estimates. The selection of either the optimistic (a) or the pessimistic (b) duration value depends on the amount of skewness the user wishes to achieve, while the most likely value (m) applies to both cases; in other words, the selection depends on whether the overall project completion time leans towards the optimistic or the pessimistic side.
The researchers then experimented with the proposed alteration of PERT, using critical paths with various numbers of activities (from 10 to 100) and three ranges of skewness for the activity duration distributions. The analysis featured the original PERT method, the Premachandra [13] variation of PERT, and the normal and lognormal distributions with two duration estimates (m and either a or b). These distributions were used to calculate the expected time and variance, which served as performance measures.
Moreover, average percentage errors for the mean and the 95th percentile were computed, to offer a direct comparison between the various cases. The analysis showed that the lognormal approximation with two duration estimates outperforms the other distributions on both performance measures when the activities are right-skewed, i.e. when they lean towards the pessimistic side. Conversely, the left-skewed lognormal approximates the normal distribution very closely. Further research is needed to validate these results, but the proposed alteration of PERT appeared relatively robust.
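One hedged way to picture a two-point lognormal fit is to treat m as the mode and b as an upper percentile and solve for the parameters, as below. The percentile interpretation of b is an assumption made for illustration and need not match the mapping actually used in [1].

```python
import math

def lognormal_from_mode_p95(m: float, b: float) -> tuple[float, float]:
    """Fit lognormal parameters (mu, sigma) by treating the most likely
    duration m as the mode and the pessimistic duration b as the 95th
    percentile.  The percentile choice is an illustrative assumption, not
    necessarily the mapping used by Mohan et al. [1]."""
    z95 = 1.6449                      # standard normal 95th percentile
    # mode = exp(mu - sigma^2)  and  b = exp(mu + z95 * sigma)
    # =>  sigma^2 + z95 * sigma - ln(b / m) = 0
    sigma = (-z95 + math.sqrt(z95**2 + 4 * math.log(b / m))) / 2
    mu = math.log(m) + sigma**2
    return mu, sigma

mu, sigma = lognormal_from_mode_p95(m=10.0, b=18.0)
print(f"mu={mu:.3f}, sigma={sigma:.3f}, "
      f"expected duration={math.exp(mu + sigma**2 / 2):.2f} days")
```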
According to Hajdu and Bokor [16], the appropriate definition of the probabilistic distributions for the activity durations is essential. They pointed out that erroneous assignment of activity estimates could affect the on-time project completion probability. Therefore, the research aimed to assess whether the appropriate selection of probabilistic distributions has a greater impact on the project completion date than the inaccurate estimation of the activity durations. The authors provided a literature review on the various alternative distributions used in the PERT method. Thus, they ultimately selected the uniform, triangular and lognormal distributions to compare against the standard Beta used in PERT.
Apart from these distributions, the authors used three further cases of the Beta: one with a −10% shift in the initial activity time estimates, one with a +10% shift, and one with +15%. In this way, Hajdu and Bokor could represent the effect of inaccurate time estimates. They relied on Monte Carlo simulations to capture the convergence of the probabilistic results and to produce cumulative distribution functions (CDFs) for the overall project durations. Three hypothetical projects were analysed first, and all showed that the uniform, triangular and lognormal distributions fell within the ±10% band defined by the inaccurate-estimate cases of the original Beta.
In addition, the authors used four real-world construction projects with complex networks, to account for increased uncertainty. The analysis showed that the three alternative distributions satisfied the initial hypothesis: inaccurate time estimates affect the project completion time more than the choice of distribution does. Hence, accurate time estimates are essential for the reliable planning of construction projects, although further research is needed to consolidate the findings of the paper.
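The spirit of this comparison can be reproduced with a small Monte Carlo experiment on a toy serial network, sampling each activity from a uniform, triangular or (moment-matched) lognormal distribution. The network, estimates and completion target below are illustrative and are not the projects analysed in [16].

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50_000

# Three serial activities with (optimistic, most likely, pessimistic) days.
estimates = [(4, 6, 10), (8, 10, 15), (5, 7, 12)]

def sample(dist, a, m, b, size):
    """Sample one activity's duration under the chosen distribution."""
    if dist == "uniform":
        return rng.uniform(a, b, size)
    if dist == "triangular":
        return rng.triangular(a, m, b, size)
    if dist == "lognormal":
        # Moment-matched to the PERT mean/std: an illustrative choice.
        mean, std = (a + 4 * m + b) / 6, (b - a) / 6
        sigma = np.sqrt(np.log(1 + (std / mean) ** 2))
        mu = np.log(mean) - sigma**2 / 2
        return rng.lognormal(mu, sigma, size)
    raise ValueError(dist)

for dist in ("uniform", "triangular", "lognormal"):
    total = sum(sample(dist, *e, N) for e in estimates)
    print(f"{dist:10s}  P(T <= 26 days) = {np.mean(total <= 26):.3f}")
```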
Trietsch and Baker [17] presented an alternative proposal for upgrading the PERT method to fit the demands of the 21st century. First, the authors addressed the main limitations of PERT: “the lack of statistical calibration; the independence assumption, i.e. no correlation between activity durations; and the reliance on deterministic estimates”. The paper aimed to propose a new scheduling framework for modern project management decision support systems (DSS) that would minimise these limitations.
The basis of this scheduling framework, called PERT 21, relies on the lognormal distribution (instead of the Beta), to capture the stochastic sequencing, activity durations and release dates more precisely. As such, users can account for the criticalities of project activities, as the lognormal seems to be the most appropriate distribution for modelling the pessimistic skewness of activity durations.
Furthermore, historical data should be elicited to estimate and calibrate the necessary statistical parameters, so that the variation between similar projects undertaken by the organisation using the framework can be captured. By introducing linear association, combined with appropriate use of a systematic error random variable, the authors fitted historical data so as to provide PERT 21 practitioners with an objective view of activity correlations.
In addition, the authors relied on two graphical tools to preserve the simplicity of PERT 21. The first is the predictive Gantt chart, which graphically incorporates the probabilistic start and due dates of each activity through its CDF. The second is the flag diagram, which depicts the criticality and normalised delay cost of each activity. Lastly, the authors provided a qualitative comparison of PERT 21 against existing risk modules and the Critical Chain Project Management (CCPM) method. They suggested that the proposed methodology could be retrofitted, as a complement, into existing frameworks and thereby provide holistic and robust stochastic activity scheduling.
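The effect of linear association is easy to demonstrate schematically: multiplying independent lognormal "core" durations by one project-wide systematic error induces positive correlation between activities. The parameters below are assumed for illustration; this is not the calibrated PERT 21 procedure of [17].

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Independent lognormal "core" durations for two activities (illustrative).
x1 = rng.lognormal(mean=2.0, sigma=0.2, size=N)
x2 = rng.lognormal(mean=2.3, sigma=0.2, size=N)

# One systematic (project-wide) error per realisation scales every activity,
# which is the essence of linear association.
bias = rng.lognormal(mean=0.0, sigma=0.3, size=N)
d1, d2 = bias * x1, bias * x2

print("correlation without the systematic error:", round(np.corrcoef(x1, x2)[0, 1], 3))
print("correlation with the systematic error   :", round(np.corrcoef(d1, d2)[0, 1], 3))
```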
2.5 Summary
Upon completing the literature review, two main methodologies were selected to be implemented in the quantitative risk analysis that follows. Following the findings of Hajdu and Bokor [16], a distribution comparison will be conducted to test the validity of several types of probabilistic tools.
Furthermore, it was decided to employ the FARE technique in the main body of the analysis. This technique reduces the number of paths in a project network by taking two different criteria into consideration; its details are discussed in Section 3.6.
At this point, note that both methodologies will be applied to two construction projects, as part of the quantitative risk analysis that follows in Chapter 4.