Human cognitive reliability correlation

From Wikipedia, the free encyclopedia

Human Cognitive Reliability Correlation (HCR) is a technique used in the field of Human Reliability Assessment (HRA) to evaluate the probability of a human error occurring throughout the completion of a specific task. From such analyses, measures can then be taken to reduce the likelihood of errors occurring within a system and therefore lead to an improvement in the overall levels of safety. There are three primary reasons for conducting an HRA: error identification, error quantification and error reduction. The various techniques used for such purposes can be split into one of two classifications: first generation techniques and second generation techniques. First generation techniques work on the basis of the simple dichotomy of ‘fits/doesn’t fit’ in matching the error situation in context with related error identification and quantification, while second generation techniques are more theory-based in their assessment and quantification of errors. HRA techniques have been used in a range of industries, including healthcare, engineering, nuclear, transportation and business; each technique has varying uses within different disciplines.

HCR is based on the premise that an operator's likelihood of success or failure in a time-critical task depends on the cognitive process used to make the critical decisions that determine the outcome. Three Performance Shaping Factors (PSFs), namely Operator Experience, Stress Level, and Quality of Operator/Plant Interface, also influence the average (median) time taken to perform the task. Combining these factors enables “response-time” curves to be calibrated and compared with the time available to perform the task. Using these curves, the analyst can then estimate the likelihood that an operator will take the correct action, as required by a given stimulus (e.g. a pressure warning signal), within the available time window. The relationship between these normalised times and Human Error Probabilities (HEPs) is based on simulator experimental data.

Background


HCR is a psychology/cognitive modelling approach to HRA developed by Hannaman et al. in 1984.[1] The method uses Rasmussen's idea of rule-based, skill-based, and knowledge-based decision making to determine the likelihood of failing a given task,[2] as well as considering the PSFs of operator experience, stress and interface quality. The database underpinning this methodology was originally developed through the use of nuclear power-plant simulations due to a requirement for a method by which nuclear operating reliability could be quantified.

HCR Methodology


The HCR methodology is broken down into a sequence of steps as given below:

  1. The first step is for the analyst to determine the situation in need of a human reliability assessment. It is then determined whether this situation is governed by rule-based, skill-based or knowledge-based decision making.
  2. From the relevant literature, the appropriate HCR mathematical model or graphical curve is then selected.
  3. The median response time to perform the task in question is thereafter determined. This is commonly done by expert judgement, operator interview or simulator experiment. In much literature, this time is referred to as T1/2 nominal.
  4. The median response time, T1/2, then needs to be adjusted to make it specific to the situational context. This is done by means of the PSF coefficients K1 (Operator Experience), K2 (Stress Level) and K3 (Quality of Operator/Plant Interface) given in the literature, using the following formula:

T1/2 = T1/2 nominal × (1 + K1)(1 + K2)(1 + K3)

Performance-improving PSFs (e.g. worker experience, low stress) take negative values, resulting in quicker times, whilst performance-inhibiting PSFs (e.g. poor interface) increase this adjusted median time.
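
As an illustration, the adjustment in Step 4 can be sketched in a few lines of Python. This is an illustrative sketch only and not part of the published method; the function name is invented here, and the example values are those appearing in the worked example below.

    def adjusted_median_time(t_nominal, k1, k2, k3):
        """Adjust the nominal median response time T1/2 for the three PSF
        coefficients K1 (experience), K2 (stress) and K3 (interface)."""
        return t_nominal * (1 + k1) * (1 + k2) * (1 + k3)

    # Example: a 10-second nominal median time with the "potential emergency"
    # stress coefficient K2 = 0.28 and neutral experience/interface factors.
    print(adjusted_median_time(10.0, 0.0, 0.28, 0.0))  # prints 12.8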

5. For the action being assessed, the time window (T) should then be calculated, which is the time in which the operator must take action to correctly resolve the situation.

6. To obtain the non-response probability, the time window (T) is divided by the adjusted median time, T1/2. This gives the normalised time value. The probability of non-response can then be found by referring to the HCR curve selected earlier, as illustrated in the sketch below. This non-response probability may then be integrated into a fuller HRA; a complete HEP can only be reached in conjunction with other methods, as non-response is not the sole source of human error.
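
For illustration, the normalisation and curve look-up in Step 6 can be sketched as follows. Rather than reading from the printed curve, this sketch evaluates the three-parameter Weibull form of the HCR correlation that is used for validation in the worked example below; the default coefficients (0.7, 0.407, 1.2) are the skill-based values quoted there, and coefficients for rule-based or knowledge-based behaviour would have to be taken from the published HCR curves.

    import math

    def hcr_non_response_probability(time_window, t_half,
                                     c_gamma=0.7, c_eta=0.407, c_beta=1.2):
        """Probability that the operator fails to respond within the time window.

        time_window -- system time window T, in seconds
        t_half      -- adjusted median response time T1/2, in seconds
        The correlation is only meaningful when the normalised time T/T1/2
        exceeds c_gamma, the location parameter of the curve.
        """
        normalised_time = time_window / t_half  # Step 6: T divided by T1/2
        return math.exp(-(((normalised_time - c_gamma) / c_eta) ** c_beta))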

Worked example


The following example is taken from the Human Factors in Reliability Group[3] in which Hannaman describes analysis of a failure to manually SCRAM (perform an emergency shutdown) in a Westinghouse PWR (Pressurized water reactor, a type of nuclear power reactor).

Context


The example concerns a model of a failure to manually SCRAM a Westinghouse PWR. The primary task to be carried out involves inserting control rods into the core. This can be broken down into two sub-tasks, detection and action, which are in turn based upon recognising and identifying an automatic trip failure.

Assumptions


Given the assumptions that the procedures offer only one option and that optional actions are disregarded in training, the likelihood that a reactor trip failure will be incorrectly diagnosed is minimal.

It is also assumed that the behaviour of the operating crew under consideration is skill-based; the reactor trip event is not part of a routine, but the behaviour adopted by the crew while the event is taking place is nevertheless familiar. Moreover, there are well-established procedures which determine how the event should be handled, and these are understood and practised to the required standard in training sessions.

The average time taken by the crew to complete the task is 25 seconds, although the basis for this figure is not documented. The average completion times for the respective subtasks are therefore set at 10 seconds for detection of the failure and 15 seconds for taking subsequent action to remedy the situation.

Method


The PSFs (K factors) judged to influence the situation are assessed to be in the following categories:

  • Operator experience is “well trained”
  • Stress level is “potential emergency”
  • Quality of interface is “good”

The K factors are assigned the following values:

  • K1 = 0.0
  • K2 = 0.28
  • K3 = 0.0

Referring to the equation in Step 4 above, the product of the PSF terms is therefore equal to 1.28. The average task times are accordingly increased from 10 and 15 seconds to 12.8 and 19.2 seconds respectively. Given that the PSFs are identical for both subtasks, the adjusted median response times can be summed to give a stress-adjusted total of 32 seconds, compared with the previous total of 25 seconds.

The time window (T) to perform the task as part of the overall system is given as 79 seconds. This time is derived from a study conducted by Westinghouse in which it was found that the crew had approximately 79 seconds to complete the task of inserting the control rods into the reactor and then shutting the reactor down in order to prevent over-pressurisation within the main operating system.

Results/Outcome


Consulting the graphical curve central to the technique, the normalised time for the task can thus be established. It is obtained by dividing 79 seconds by 32 seconds, giving a dimensionless result of approximately 2.47. Identifying this point on the abscissa of the HCR curve model provides a non-response probability of 2.9 × 10⁻³; this can also be checked for validation using the formula:

PRT(79) = exp{−[((79/32) − 0.7) / 0.407]^1.2} ≈ 2.9 × 10⁻³ per demand

where PRT(T) denotes the probability of non-success within the system time window T.
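
As a cross-check, the figures in this worked example can be reproduced with the illustrative functions sketched in the methodology section above (an illustrative calculation, not part of the original analysis):

    # Adjusted median times for the two subtasks, with K1 = 0.0, K2 = 0.28, K3 = 0.0
    t_detect = adjusted_median_time(10.0, 0.0, 0.28, 0.0)   # 12.8 s
    t_action = adjusted_median_time(15.0, 0.0, 0.28, 0.0)   # 19.2 s
    t_half = t_detect + t_action                            # 32.0 s

    # Non-response probability within the 79-second time window (skill-based curve)
    p = hcr_non_response_probability(79.0, t_half)
    print(f"{t_half:.1f} s, {p:.1e} per demand")            # 32.0 s, 2.9e-03 per demand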


Advantages of HCR

  • The approach explicitly models the time-dependent nature of HRA [3]
  • It is a fairly quick technique to carry out and has a relative ease of use [3]
  • The three modes of decision-making (knowledge-based, skill-based and rule-based) are all modelled [3]

Disadvantages of HCR

  • The HEP produced by HCR is not complete; it calculates the probability that a system operator will fail to diagnose and process information, make a decision and act within the time available. It does not give any regard to misdiagnoses or rule violations.[3]
  • The same probability curves are used to model non-detection and slow response failures. These are very different processes, and it is unlikely that identical curves could model their behaviour. Furthermore, it is uncertain as to whether such curves could be applied to situations in which detection failures or processing difficulties are the primary dominating factors of influence.[3]
  • The rules for judging knowledge-based, skill-based and rule-based behaviour are not exhaustive. Assigning the wrong behaviour type to a task can mean differences of up to two orders of magnitude in the HEP.[3]
  • The method is very sensitive to changes in the estimate of the median time. This estimate therefore needs to be very accurate, otherwise the estimated HEP will suffer as a consequence.[3]
  • It is highly resource intensive to collect all the data required for the HCR methodology, particularly because every new situation requiring an assessment must be evaluated afresh.[3]
  • The output of the model gives no indication of how human reliability could be adjusted to allow for improvement or optimisation towards required performance goals.[3]
  • Only three PSFs are included in the methodology; there are several other PSFs that could affect performance which are unaccounted for.
  • The model is relatively insensitive to PSF changes as opposed to, for example, time parameter changes.[3]
  • As the HCR correlation was originally developed for use within the nuclear industry, it is not possible to use the methodology for applications outside this domain.[3]

References

  1. ^ Hannaman, G.W., Spurgin, A.J. & Lukic, Y.D. (1984). Human cognitive reliability model for PRA analysis. Draft Report NUS-4531, EPRI Project RP2170-3. Palo Alto, CA: Electric Power Research Institute.
  2. ^ Rasmussen, J. (1983) Skills, rules, knowledge; signals, signs and symbols and other distinctions in human performance models. IEEE Transactions on Systems, Man and Cybernetics. SMC-13(3).
  3. ^ a b c d e f g h i j k l Humphreys, P. (1995). Human Reliability Assessor’s Guide. Human Factors in Reliability Group.