Introduction
Klein’s RPDM | Proposed model | Model A | Model B | Model C |
---|---|---|---|---|
Experience the situation | Input cues | Input cues | Input cues | Input cues |
SA based on experience | ELDS module that uses MLN | Bayesian classifier (BC) | Feature by feature matching | Bayesian classifier |
SA: Diagnostic mechanism/story building/feature matching | OBR module that uses ontology for story building | Not modeled separately. Diagnosis is performed by providing more cues to the BC | Not modeled | Not modeled |
Expectation modelling | Stored as part of plans | Stored as part of solution (plan) | Stored as part of solution (plan) | Not modeled |
Action Evaluation: Mental simulation | BN | Specific to midair collision scenarios | Not specified | Not modeled |
Action (plan) selection | BDI framework | Not modeled | Not specified | Not modeled |
Plan execution | BDI framework | Not modeled | Not specified | Not modeled |
Background concepts
Ontology
Markov network
Markov logic network
In an MLN, each ground atom is a binary variable that takes the value true or false. Formally, an MLN L is defined as a set of pairs (Fi, wi), with the Fi being first-order formulas and the wi being the weights assigned to those formulas. Together with a finite set of constants, L defines a probability distribution over possible worlds x:

\(P\left( {X = x} \right) = \frac{1}{Z}\exp \left( {\sum\nolimits_{i} {w_{i} n_{i} \left( x \right)} } \right) = \frac{1}{Z}\prod\nolimits_{i} {\phi_{i} \left( {x_{\left[ i \right]} } \right)^{{n_{i} \left( x \right)}} }\)

where ni(x) is the number of true groundings of Fi in x, x[i] is the state or configuration (i.e., the truth assignments) of the predicates appearing in Fi, \(\phi_{i} \left( {x_{\left[ i \right]} } \right) = e^{{w_{i} }}\), and Z is the normalizing partition function.

Methodology
The agent's ontology is queried through the DIAGNOSE-SITUATION method, which uses the available cues as concepts and then extracts the CS-Rules that satisfy those concepts. The working of the DIAGNOSE-SITUATION method can be understood as the actions taken in Steps 9–13 in Fig. 3. For example, if an agent has a visual of smoke in the messhall and, for some reason, is unable to get other cues, then the agent takes smoke and messhall as concepts and searches the ontology for possible relations. If a relation is found, the agent applies inference to explore connected or related situations that contain specific, doable actions. These actions are the final output of the agent. The DIAGNOSE-SITUATION method corresponds to Klein's variation 2 of the RPDM model ([33], p. 26), as explained in the preceding section. Steps 1–8 mainly correspond to recognizing the given situation, where there is a finite number of observable cues represented as {c1, c2, …}, based on the MLN L that is developed using the FOL rules.
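The cue-to-action lookup described above can be sketched in a few lines. The flat rule list and all concept and action names below are illustrative assumptions; the paper's ontology and CS-Rule representation are richer than this.

```python
# Hypothetical sketch of the DIAGNOSE-SITUATION lookup (Steps 9-13 in Fig. 3).
# The rule encoding and all names here are illustrative assumptions.

CS_RULES = [
    {"antecedent": {"Smoke", "MESSHALL"},      # concepts that must be related
     "actions": ["MoveTo(LIFEBOAT)"]},         # doable actions in the
    {"antecedent": {"Hazard", "LIFEBOAT"},     # connected situation
     "actions": ["Escape(PLATFORM)"]},
]

def diagnose_situation(cues):
    """Use available cues as concepts, find CS-Rules whose antecedent
    concepts are all satisfied, and return the actions they contain."""
    actions = []
    for rule in CS_RULES:
        if rule["antecedent"] <= set(cues):    # a relation is found
            actions.extend(rule["actions"])
    return actions

# An agent with only a visual of smoke in the messhall:
print(diagnose_situation({"Smoke", "MESSHALL"}))
```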
The experiential-learning and decision-making module
The FOL rules for the fire (FIRE) and evacuate (EVACUATE) emergencies are proposed in Table 2. As an example of how agents with different experiences can be made in a real system, consider rule #9 in Table 2.

# | Rules |
---|---|
1. | ¬L(ag, al, t) ⇒ ¬R(ag, al, t) |
2. | L(ag, +al, t) ∧ HITR(ag, +mloc, t) ∧ BST(ag, +al, t) ⇒ R(ag, +al, t) |
3. | L(ag, al, t) ⇒ HSES(ag) |
4. | ST(ag, thrt, t) ⇒ HSES(ag) |
5. | (HFO(ag, +p_a, t) ∧ FPA(ag, +p_a, t) ∧ KETPA(+p_a, +emgType)) ∨ (ST(ag, +thrt, t) ∧ KETT(+thrt, +emgType)) ∨ (L(ag, +al, t) ∧ HITR(ag, +mloc, t) ∧ KETA(+al, +emgType) ∧ BST(ag, +al, t)) ⇒ HES(ag, +emgType, t) |
6. | HES(ag, FIRE, t0) ∧ HES(ag, EVACUATE, t1) ∧ Gt(t1, t0) ⇒ ¬HES(ag, FIRE, t1) |
7. | HES(ag, FIRE, t) ⇒ ¬HES(ag, EVACUATE, t) |
8. | HES(ag, EVACUATE, t) ⇒ ¬HES(ag, FIRE, t) |
9. | HFO(ag, +p_a, +t) ∧ FPA(ag, +p_a, +t) ∧ KMLPA(+p_a, +mloc) ⇒ HITR(ag, +mloc, +t) |
10. | L(ag, +al, t) ∧ R(ag, +al, t) ∧ BST(ag, +al, t) ∧ KMLA(+al, +mloc) ⇒ HITR(ag, +mloc, t) |
Rule #9 states that if an agent a1 Has Focus On (HFO) the PA (p_a) announcement at some time t, and a1 is able to understand or Follow the PA (FPA), and a1 knows what to do for that specific PA announcement (the predicate KMLPA(p_a, mloc) is stored as a fact meaning that the agent knows which muster location is used in which PA), then a1 should develop an intention (represented by the predicate HITR) according to its knowledge about that specific PA and, thereby, the associated weight, w, of the rule. For example, in the case of a PA related to the GPA alarm, a1's intention should be to move to the primary muster station; in the case of a PAPA alarm, the intention should be to move to the alternate muster station. However, if an agent keeps repeating a mistake by, say, attributing GPA to the alternate muster station rather than the primary, then in the event of a FIRE emergency this agent will likely move to the alternate muster station even though that is contrary to the required action.

Suppose the variables p_a, t, and mloc belong to the sets A = {PAGPA, PAPAPA}, T = {t0, t1}, and M = {MESSHALL, LIFEBOAT}, respectively. This gives rise to eight different groundings of rule #9 over the constants in the sets A, T, and M. As there are four predicates in rule #9, there will be \(2^{4} \times 8 = 128\) different worlds altogether. For brevity, assume that the variables p_a, t, and mloc each belong to a set with a single constant: let p_a = {pa}, t = {t}, and mloc = {m}. Then there are \(2^{4} = 16\) possible worlds, as shown in Table 3, where w is the weight assigned to the rule; the table omits the parameters of each predicate for readability. The world that is inconsistent with rule #9, i.e., {HFO, FPA, KMLPA, ¬HITR}, has probability 1/Z and is therefore less likely than every other world, provided w > 0, as shown in Table 3. Here Z is the partition function described in the "Background concepts" section. The probability of a world depends on the weight w assigned to each rule, so agents holding the same rules with different weights are expected to behave differently.

HFO | FPA | KMLPA | J1 = HFO ∧ FPA ∧ KMLPA | J2 = HITR | J1 ⇒ J2 | p(.) |
---|---|---|---|---|---|---|
¬HFO | ¬FPA | ¬KMLPA | False | ¬HITR | True | \(e^{w} /Z\) |
¬HFO | ¬FPA | ¬KMLPA | False | HITR | True | \(e^{w} /Z\) |
¬HFO | ¬FPA | KMLPA | False | ¬HITR | True | \(e^{w} /Z\) |
¬HFO | ¬FPA | KMLPA | False | HITR | True | \(e^{w} /Z\) |
¬HFO | FPA | ¬KMLPA | False | ¬HITR | True | \(e^{w} /Z\) |
¬HFO | FPA | ¬KMLPA | False | HITR | True | \(e^{w} /Z\) |
¬HFO | FPA | KMLPA | False | ¬HITR | True | \(e^{w} /Z\) |
¬HFO | FPA | KMLPA | False | HITR | True | \(e^{w} /Z\) |
HFO | ¬FPA | ¬KMLPA | False | ¬HITR | True | \(e^{w} /Z\) |
HFO | ¬FPA | ¬KMLPA | False | HITR | True | \(e^{w} /Z\) |
HFO | ¬FPA | KMLPA | False | ¬HITR | True | \(e^{w} /Z\) |
HFO | ¬FPA | KMLPA | False | HITR | True | \(e^{w} /Z\) |
HFO | FPA | ¬KMLPA | False | ¬HITR | True | \(e^{w} /Z\) |
HFO | FPA | ¬KMLPA | False | HITR | True | \(e^{w} /Z\) |
HFO | FPA | KMLPA | True | ¬HITR | False | \(1/Z\) |
HFO | FPA | KMLPA | True | HITR | True | \(e^{w} /Z\) |
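The ordering of world probabilities in Table 3 can be checked numerically. The sketch below enumerates the 16 worlds for a single grounded instance of rule #9; the weight value is an illustrative assumption, and any w > 0 yields the same ordering.

```python
from itertools import product
from math import exp, isclose

# The 16 possible worlds of grounded rule #9: HFO ^ FPA ^ KMLPA => HITR,
# with one constant per variable. w is an illustrative weight.
w = 1.5

def n_true_groundings(world):
    """1 if the (single) grounding of rule #9 is satisfied in this world."""
    hfo, fpa, kmlpa, hitr = world
    return 1 if (not (hfo and fpa and kmlpa)) or hitr else 0

worlds = list(product([False, True], repeat=4))         # (HFO, FPA, KMLPA, HITR)
Z = sum(exp(w * n_true_groundings(x)) for x in worlds)  # partition function
p = {x: exp(w * n_true_groundings(x)) / Z for x in worlds}

violating = (True, True, True, False)                   # {HFO, FPA, KMLPA, not HITR}
assert isclose(p[violating], 1 / Z)                     # the 1/Z row of Table 3
assert all(p[x] > p[violating] for x in worlds if x != violating)
```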
An explanation of the FOL rules
The FOL rules in Table 2 are designed so that an agent recognizes FIRE and EVACUATE situations in a similar way to how a human counterpart recognizes them. The preconditions (antecedents of the FOL rules) used here are common among experts and have been suggested in earlier studies [8, 21, 49, 56, 57, 61–65]. Similar work is reported in [38], where the authors constructed decision trees based on some of the preconditions used in this study, such as the presence of a hazard and the route direction in the PA, which is actually a byproduct of understanding the PA.

The predicate BST ensures that the intention was formed before seeing a threat; when this holds, it clearly means that the alarm has been recognized at the same time. Nonetheless, the agent cannot act upon the intention unless the alarm has been recognized, because deliberation requires the location of the muster station, which can only be decided after recognition of the alarm. Therefore, as in rule #2, if the intention is formed before recognition, it needs to be updated with the value of the muster location (i.e., MESSHALL or LIFEBOAT) at some later time, say t2, before the actions in the intention are performed, according to the result of the recognition of the alarm. Rule #2 thus models HITR as a policy-based intention, as explained in the literature ([4], p. 56); that is, the agent forms a general intention of moving to a muster location right at the time of listening to an alarm, and later determines which muster station is the right choice.

Rules #3 and #4 concern the predicate HSES (see Table 2). A true value of HSES means that the agent knows there is some emergency; it does not necessarily tell the agent specific details about the kind of emergency that has occurred. These rules say that an agent will be aware of 'some' emergency situation if it merely listens to an alarm or observes a threat.

HFO is true when the agent has focus on a PA being uttered. An agent that is engaged in any activity other than what is communicated in the PA is defined to have no focus, whereas one that suspends its current engagements and begins performing the required actions is considered to have focus on the PA. Similarly, if an agent, while moving, suddenly changes its course because of instructions given in the PA a moment before, this is also considered a clear sign of deliberative intention [4] in response to the PA. This deliberative intention is captured in rule #9 by the predicate HITR when the agent considers HFO and FPA and has prior knowledge about the possible deliberation steps (the predicate KMLPA, which stands for Knows Muster Location according to PA). The predicate FPA demonstrates the requirement of following the PA. If HFO is true but FPA is false, then, although the agent had focus on the PA's words, it is confused or does not understand the situation and is therefore unable to follow the PA. Rule #5 is a disjunction of three different rules: the first determines SA about the emergency based on focus on, and understanding of, the PA; the second uses direct exposure to the threat/hazard; and the third is based on the recognition of alarms. The last disjunct in rule #5 uses the predicate KETA to link an alarm to the corresponding situation or emergency type, because that is needed to conclude the consequent predicate HSES.

Rule #10 likewise forms the intention from listening to an alarm (the predicate L), recognition of the alarm (the predicate R), and belief about what is needed for that particular alarm type (the predicate KMLA, which stands for Knows Muster Location against the Alarm). In this case, the formation of deliberative intention [4] is based on deliberation about the act of listening to and recognizing the alarm type.

Training the ELDS module
The ontology-based reasoning module
The conceptual relations agent (agnt), attribute (attr), characteristic (chrc), experiencer (expr), instrument (inst), object (obj), and theme (thme) are used here as defined in ([59], pp. 415–419). The relation agnt does not refer to the concept of agent as defined in the AI literature; rather, it is a relation used in conceptual structures to link an [ACT] to an [ANIMATE], where the ANIMATE concept represents the actor of the action. The concept ACT is defined as an event with an animate agent.

agnt links the concept [ACT] to [ANIMATE], where the ANIMATE concept refers to the actor of the action, as in the CG for "A man moves to a destination" in the linear form (LF).

attr links [Entity: *x] to [Entity: *y], where *x has an attribute *y. Example: "Fire has flame." The CG is: [Fire] → (attr) → [Flame], such that Fire and Flame are represented as two concepts of type Entity, and Fire has the attribute Flame.

chrc links [Entity: *x] to [Entity: *y] such that *x has a characteristic *y. Example: "Emergency is a danger to people and property." The CG is: [Emergency] → (chrc) → [danger] → [Person_Property].

expr links a [State] to an [Animate] who is experiencing that state. For example, because Emergency is defined here as a situation as well as a state, the concepts in the sentence "Emergency is experienced by people" are described by the CG [Emergency] → (expr) → [Person].

inst links an [Entity] to an [Act] in which the entity is causally involved. For example, the CG [Fire] ← (obj) – [Produce] – (inst) → [Combustion] reflects a causal relationship between the chemical process of combustion and the birth of a fire.

obj links an [Act] to an [Entity] which is acted upon. For example, in the event of an emergency, "a person moves to the secondary muster station (LIFEBOAT)" is represented in the ontology as the descendent of a CS-Rule:

- Antecedent part:

$$\begin{aligned} \left[{\tt{MESSHALL}}\right] & - ({\tt{attr}}) \to \left[{\tt{Compromised}}\right], \\ & - \;({\tt{expr}}) \to \left[{\tt{Person}}\right]. \\ \end{aligned}$$

- Descendent part:

$$\begin{aligned} \left[{\tt{MoveTo}}\right] & - ({\tt{agnt}}) \to \left[{\tt{Person}}\right], \\ & - \, ({\tt{attr}}) - \left[{\tt{Destination}}\right] - ({\tt{obj}}) \to \left[{\tt{LIFEBOAT}}\right]. \\ \end{aligned}$$

thme represents a thematic role. For example, to express the intent of the sentence "Muster station has hazard", one can write the CG [MusterStation] – (thme) → [Hazard] (see [60] for a detailed account of thematic roles in ontologies).

(req) and (involve) link a [Person] to an [Action], and an [Action: *x] to an [Action: *y] where *x involves *y, respectively. As an example, the descendent in the following CS-Rule uses the req relation:

- Antecedent part:

$$\begin{aligned} \left[{\tt{Place}}\right] & - ({\tt{thme}}) \to \left[{\tt{Hazard}}\right], \\ & - \;({\tt{expr}}) \to \left[{\tt{Person}}\right]. \\ \end{aligned}$$

- Descendent part:

$$\begin{aligned} \left[{\tt{Person}}\right] & - ({\tt{req}}) \to \left[{\tt{ImmediateAction}}\right] - ({\tt{involve}}) \to \left[{\tt{RaiseAlarm}}\right], \\ & \leftarrow ({\tt{agnt}}) - \left[{\tt{MoveOut}}\right]. \\ \end{aligned}$$
Combustion is defined as an act of burning, and fire as a [Hazard] that is produced as a result of combustion. CS-Rule#1 states that a muster station that has a hazard is compromised for a person:

- Antecedent: [MusterStation: *x] – (thme) → [Hazard].
- Consequent:

$$\begin{aligned}\left[{\tt{MusterStation}}:*{\tt{x}}\right] & \>- \\ & - \, ({\tt{attr}}) \to [{\tt{Compromised}}], \\ & - \, ({\tt{expr}}) \to [{\tt{Person}}]. \\ \end{aligned}$$

If the MESSHALL is compromised, then the person should move to the LIFEBOAT station:

- Antecedent:

$$\begin{aligned} \left[{\tt{MESSHALL}}\right] &\> - \\ & - ({\tt{attr}}) \to \left[{\tt{Compromised}}\right], \\ & - ({\tt{expr}}) \to \left[{\tt{Person}}\right]. \\ \end{aligned}$$

- Consequent: [Person] ← (agnt) – [MoveTo] – (attr) → [Destination] – (obj) → [LIFEBOAT].

If the LIFEBOAT station is compromised, then the person should escape from the platform as quickly as possible:

- Antecedent:

$$\begin{aligned} \left[{\tt{LIFEBOAT}}\right] & \>- \\ & - ({\tt{attr}}) \to \left[{\tt{Compromised}}\right], \\ & - ({\tt{expr}}) \to \left[{\tt{Person}}\right]. \\ \end{aligned}$$

- Consequent: [Person] ← (agnt) – [Escape] – (actOf) → [ImmediateAction] – (involve) → [EMERGENCY].

If a person experiences a hazard at a place, then the person should raise the alarm and move out:

- Antecedent:

$$\begin{aligned} \left[{\tt{Place}}\right] & \>- \\ & - ({\tt{thme}}) \to \left[{\tt{Hazard}}\right], \\ & - ({\tt{expr}}) \to \left[{\tt{Person}}\right]. \\ \end{aligned}$$

- Consequent:

$$\begin{aligned} \left[{\tt{Person}}\right] & \>- \\ & \leftarrow ({\tt{agnt}}) - \left[{\tt{MoveOut}}\right], \\ & - ({\tt{req}}) \to \left[{\tt{ImmediateAction}}\right] - ({\tt{involve}}) \to \left[{\tt{RaiseAlarm}}\right]. \\ \end{aligned}$$
Implementing the proposed realization of RPDM model: a case study
Human-competence measurement in a virtual environment
Evacuation scenarios and decision tasks
Factor/variable | Predicate | Parameters | Description |
---|---|---|---|
Alarm recognition | R | (agent, alarm, t) | An agent recognizes an alarm as being GPA or PAPA during time interval t |
Focus of attention (has focus on) | HFO | (agent, pa, t) | An agent has focus on the PA message during time t |
Encounters or sees a threat or hazard | ST | (agent, thtType, t) | An agent has seen a hazard of type thtType during t |
Follows a PA | FPA | (agent, pa, t) | An agent understands the wording of the PA |
Intention to move to a specific muster location | HITR | (agent, musterLoc, t) | An agent has (developed) an intention during t to reach a specific muster location |
Situation awareness of emergency | HES | (agent, emgSitType, t) | An agent gained awareness of the situation type emgSitType during time t |
Paying attention to alarm | L | (agent, alarm, t) | An agent listens to an alarm during time interval t |
Assessment of alarm recognition based on listening to the alarm | BST | (agent, alarm, t) | An agent listens to an alarm before seeing the threat. This predicate is used in conjunction with others in rules 2, 5, and 9 to assess whether alarm recognition occurred before seeing a threat (BST); this establishes that the alarm was recognized, as otherwise the SA might be due to some other factor, such as watching a fire |
Sensing of an emergency | HSES | (agent) | Based on the antecedents of rules 3 and 4, an agent gets a sense of some emergency without further details |
Data collection
Participant | L(.,GPA,t0) | R(.,GPA,t0) | BST(.,GPA,t0) | HITR(.,MSH,t0) | ST | HFO(.,PAGPA,t0) | FPA(.,PAGPA,t0) | HES(.,FIRE,t0) |
---|---|---|---|---|---|---|---|---|
P4G1 | True | False | False | False | True^a | True | False | True |
P5G1 | True | True | True | True | True^b | True | True | True |
P6G1 | True | True | True | False^c | True^d | True | True | True^e |
P7G1 | True | True | True | False | True^f | True | True | True |
P10G1 | True | True | True | True | True^g | True | True | True |

Participant | L(.,PAPA,t1) | R(.,PAPA,t1) | BST(.,PAPA,t1) | HITR(.,LBS,t1) | ST | HFO(.,PAPAPA,t1) | FPA(.,PAPAPA,t1) | HES(.,EVAC,t1) |
---|---|---|---|---|---|---|---|---|
P4G1 | True | False | False | True | False^h | True | True | True |
P5G1 | True | True | True | True | True^i | True | True | True |
P6G1 | True | True | True | True | True^j | False | False | True^k |
P7G1 | True | True | True | True | True^l | False | False | True^m |
P10G1 | True | False | False | False | True^n | False | False | False^o |
Assigning values to the predicates HFO and FPA required interpretation. The primary way to determine whether a participant had focus on the PA wording was to check whether the participant's movement changed when the PA began: for instance, whether, as soon as the PA started, the participant slowed down, stopped, or kept walking slowly as if trying to listen to the words. Only one participant ignored the PA in the FIRE situation; this participant ignored all other cues too. This participant's behavior was tracked in other scenarios, not reported here, and it was found that he had developed a tendency to move to the lifeboat station irrespective of the situation. Four other participants ignored, or did not focus on, only the PA related to the EVACUATE situation, as their gestures showed no change in the pace of their previously selected actions. For instance, all of them were heading towards the messhall when the GPA alarm turned to PAPA and the PA related to PAPA began to be announced, but none of them re-routed to show understanding of, or vigilance to, the new demands in the PA. For this reason, the authors inferred that these four participants did not pay attention to the PA. So, in all five of these cases the predicate HFO was assigned the Boolean value false.

Regarding FPA, if a person does not focus on the PA wording, no actions according to the PA should be expected unless another cue triggers them. In all cases where a participant did not focus on the PA wording, we assigned FPA a false value. There were also four cases where the participants showed focus on the PA by pausing the activities they were engaged in before the PA announcement began and resuming after the PA was over, but did not act according to the PA wording. These PAs were related to the EVACUATE situation, and none of these participants re-routed immediately after listening to the PA. Therefore, we assigned false values to FPA in these cases too, with the corresponding HFO having true values. In the remaining cases FPA takes a true value.

Simulation results
The simulated results concern the queried predicates R, HITR, and HES, and the corresponding empirical observations. The MC-SAT [47] inference algorithm is used for querying the ELDS module. Table 6 reports the simulated results along with the evidence data Te that is used to make inference from the MLN in the ELDS module.

Sit# | L(.,GPA,t0) | ?R(.,GPA,t0) | BST(.,GPA,t0) | ?HITR(.,M1\|M2,t0) | ST | HFO(.,PAGPA,t0) | FPA(.,PAGPA,t0) | ?HES(.,H1\|H2,t0) |
---|---|---|---|---|---|---|---|---|
1A | True | True (0.87) | True | True (0.66 M1, 0.46 M2) | False | True | True | True^a (0.94 H1, 0.64 H2) |
2A | True | True (0.87) | True | True (0.66 M1, 0.43 M2) | True^b | True | True | True (0.96 H1, 0.26 H2) |
3A | False | False (0.0) | False | False (0.51 M1, 0.51 M2) | True^c | False | False | False (0.44 H1, 0.24 H2) |

Sit# | L(.,PAPA,t1) | ?R(.,PAPA,t1) | BST(.,PAPA,t1) | ?HITR(.,M1\|M2,t1) | ST | HFO(.,PAPAPA,t1) | FPA(.,PAPAPA,t1) | ?HES(.,H1\|H2,t1) |
---|---|---|---|---|---|---|---|---|
1B | False | False (0.0) | False | True (0.46 M1, 0.50 M2) | True^d | False | False | False (0.01 H1, 0.38 H2) |
2B | True | True (0.9) | True | True (0.44 M1, 0.92 M2) | False | True | True | True (0.07 H1, 0.96 H2) |
3B | True | True (0.9) | True | True (0.43 M1, 0.94 M2) | False | True | True | True (0.14 H1, 0.96 H2) |
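The paired probabilities for each queried predicate are arbitrated by Algorithm 1 (Appendix C.1), with α1 = 0.6 and α2 set to 20% of α1. The sketch below is a plausible reading of that arbitration, stated as an assumption since the actual algorithm may differ: accept the higher-probability option only if it clears α1 and beats the alternative by at least α2, and otherwise report the query as inconclusive.

```python
# A plausible reconstruction of Algorithm 1's arbitration step (assumption).
ALPHA1 = 0.6               # acceptance threshold, as stated in the text
ALPHA2 = 0.2 * ALPHA1      # required margin: 20% of alpha1

def decide(p_by_option):
    """Return the chosen option, or None when the evidence is inconclusive."""
    best = max(p_by_option, key=p_by_option.get)
    others = [p for k, p in p_by_option.items() if k != best]
    margin = p_by_option[best] - max(others, default=0.0)
    if p_by_option[best] >= ALPHA1 and margin >= ALPHA2:
        return best
    return None

print(decide({"MESSHALL": 0.66, "LIFEBOAT": 0.46}))   # Situation 1A
print(decide({"MESSHALL": 0.50, "LIFEBOAT": 0.50}))   # tied -> inconclusive
```

Under this reading, Situation 1A resolves to the MESSHALL, while a bare 0.5 (as in Situation 1B) is inconclusive, matching the outcomes described below.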
Situation # 1A
The emergency starts during t0, in which a GPA alarm begins sounding, followed by the relevant PA, while the participant is in the cabin. The agent's ELDS module was set with the values of the predicates L, BST, ST, HFO, and FPA as evidence, as given for Sit#1A in Table 6. The MC-SAT algorithm was executed with the queries ?R, ?HITR, and ?HES (with the required arguments), and the probabilities that these predicates are true were found to be 0.87 for recognizing the alarm (the predicate R), 0.66 for developing the intention to move to the MESSHALL during t0, and 0.46 for moving to the LIFEBOAT station during t0 (i.e., HITR(., LIFEBOAT, t0)), where MESSHALL and LIFEBOAT represent the primary and alternate muster locations, respectively. The probabilities for the agent to recognize and be aware of the FIRE and EVACUATE emergencies during t0 were found to be 0.94 and 0.64, respectively. As there are two sets of probabilities for each queried predicate, the agent needs to decide which value to use; Algorithm 1 has been implemented to resolve this issue. The parameter α1 was set to 0.6, and α2 to 20% of α1; these values were chosen so that the simulated results are as close to the empirical values as possible. Based on its implementation in Appendix C.1, Algorithm 1 determines that during t0 the agent will move to the primary muster station. This is the same result as observed in the empirical findings, where the participant chose to move to the primary muster station during interval t0 (see the value true in the last column of row 1A in Table 6).

Situation # 1B
The empirical data provide the Boolean (true or false) values; the parenthesized numeric values are obtained by running the simulation using the agent. The agent was provided with the same evidence that was perceived by participant P1G1, namely the collection of Boolean values for the predicates L, BST, ST, HFO, and FPA. P1G1 was able to form the intention of moving to the right muster station, i.e., the LIFEBOAT station, during t1, despite not being found to focus on listening to the PAPA alarm and following the relevant PA. At the moment P1G1 was entering the MESSHALL during t0, the interval t0 ended and the PAPA alarm started sounding. The presence of smoke is a visual cue that has dominance [52] over other cues such as audio signals (e.g., listening to the PAPA alarm and PA); therefore, we argue that P1G1 could not utilize the PAPA alarm and the relevant PA to form the intention of moving to the LIFEBOAT station. The only cue used during t1 was the presence of smoke in the MESSHALL: P1G1 formed the intention to move to the LIFEBOAT station because he found the MESSHALL already compromised. The simulation results for this part of the emergency are as follows.

The rules in which HITR is the consequent (rules #9 and #10 in Table 2) are based on HFO, FPA, L, R, and BST. All of these predicate values were set to false because of the inability of P1G1 to perceive the corresponding cues. The probability that HITR(P1G1, M2 = LIFEBOAT, t1) is true was found to be 0.5, which is inconclusive based on Algorithm 1. While the agent was present in the MESSHALL (due to the decision in Situation 1A, as reported in the "Situation # 1A" section) and smoke was in the MESSHALL, the agent perceived the smoke, determined its current position (the MESSHALL), and passed this information in the form of the CG [MESSHALL] – (thme) → [Smoke], where MESSHALL is a subtype of MusterStation and Smoke is a subtype of Hazard. The inferred consequent that comes from CS-rule#1 is that the MESSHALL is compromised for the person.

Situation # 2A
In this situation, all the empirically observed predicate values are true. During simulation, the agent was provided with the evidence predicates L, BST, ST, HFO, and FPA, all having the Boolean value true. The ELDS module computed the probability of forming the intention to move to the MESSHALL as 0.66; at the same time, the probability of moving to the LIFEBOAT station was found to be 0.43. Algorithm 1 selects the MESSHALL as the destination during the interval t0 because the probability of HES was calculated as 0.96 for the FIRE emergency. As the agent knows the plan for a FIRE emergency, which is to move to the MESSHALL, it performs the action of moving to the MESSHALL.

Situation # 2B
The participant chose to move to the LIFEBOAT station during t1. The agent, simulating participant P2G1, was given the same predicate values as were perceived by the participant, and the ELDS module arrived at the same result, computing the probability of moving to the LIFEBOAT station as 0.96.

Situation # 3A
The predicates L, R, BST, HITR, HFO, FPA, and HES are assigned the value false during t0, as shown in Table 6 (line 3A). Correspondingly, during simulation the ELDS module produced low probabilities, which ultimately brought the OBR module into action. Here the agent exploits the only available cue, the observation of smoke coming out of the MESSHALL vent, and therefore determines that the MESSHALL is compromised. The CG [MESSHALL] – (thme) → [Smoke] is used to initiate memory-based inference on the OBR module. This CG is matched with the antecedent of CS-rule#1, which is a more general form in the ontology, and the generated consequent states that the MESSHALL is compromised for the person.

Situation # 3B
During simulation, the probabilities of R, HITR, and HES were found to be 0.9, 0.94, and 0.96, respectively (see Table 6, line 3B). In order to show how the process of mental simulation works in accordance with the RPDM literature, the agent's beliefbase was slightly modified by setting the primary escape route (PER) as 'not learned'. The problem of learning a route by remembering waypoints, with landmarks as opportunities for better retention, is considered in [11, 12]. What, then, are the consequences, in a hazard, of the agent adopting a route that it does not know? For the present case, the agent exploits a Bayesian network (see Fig. 7) to assess the consequences of choosing the PER versus the secondary escape route (SER) under the current circumstances, where a hazard has already been recognized and the agent does not know the primary escape route. The probability of being trapped is higher when choosing the PER than the SER when the PER has not been remembered or learned. Therefore, the agent acts on the plan of moving to the LIFEBOAT station using the secondary escape route.
Conclusion
In situation 3A, the participant moved to the LIFEBOAT station. We think the participant made this choice based on his/her training sessions, which show the same trend of moving to the LIFEBOAT station no matter what the circumstances demand. During simulation, when the agent was given the same input cues as were perceived by P3G1, the agent used the only available cue, smoke coming out of the MESSHALL vent, and likewise decided to move to the LIFEBOAT station. In situation 3B, the agent retained the initial decision it made during interval t0 in situation 3A.