Hazard Prevention in Mission Plans for Aerial Vehicles Based on Soft Institutions
Flavio S. Correa da Silva1, Paul W. H. Chung2, Marcelo K. Zuffo1, Petros Papapanagiotou3, David Robertson3, Wamberto Vasconcelos4*
(1. University of Sao Paulo, Sao Paulo, Brazil; 2. Loughborough University, Loughborough, UK; 3. University of Edinburgh, Edinburgh, UK; 4. University of Aberdeen, Aberdeen, UK)
Hazard prevention in mission plans requires careful analysis and appropriate tools to support the design of preventive and/or corrective measures. It is most challenging in systems with large sets of states and complex state relations. In the case of sociotechnical systems, hazard prevention becomes even more difficult given that the behaviour of human-centric components can at best be partially predictable. In the present article we focus on a specific class of sociotechnical systems, namely air spaces containing pilot-controlled as well as autonomous aircraft, and introduce the notion of relevant hazards. We also introduce soft institutions as an appropriate basis for analysis, with the aim of addressing relevant hazards. The concept of soft institutions is drawn from specification languages for interaction between agents in multi-agent systems but, in our case, is adapted for use in systems that combine human and automated actors.
Keywords: safety engineering; hazard prevention; sociotechnical systems; soft institutions
1 Introduction
Hazard prevention requires the assessment of all possible behaviours of a system, so that safety engineers can intervene in the system design to ensure that each behaviour leads to planned, foreseen and safe states[1], providing the information needed to design preventive and/or corrective measures for each potential hazard.
Hazard prevention is most challenging in systems with large sets of states and complex state relations, which require careful planning and appropriate tools to generate and analyse potential hazard states while avoiding issues related to undecidability or combinatorial explosion during exhaustive scans of state spaces. In the case of sociotechnical systems, hazard prevention becomes even more difficult given that the behaviour of human-centric components can at best be partially predictable.
The concept of sociotechnical systems was coined in the early 1950s to analyse the impact of the introduction of novel technologies in coal mining, after the empirical observation that gains in productivity were not uniform across the studied workgroups. Its roots can be traced back to the analysis of the introduction of mechanisation in jute milling in Scotland during the 1930s[3, 11]. Sociotechnical systems can be characterised as open, asynchronous, concurrent systems in which some entities are humans and others are machines. Hence, interactions involving heterogeneous entities are central to the design, implementation and analysis of sociotechnical systems.
In the present article we focus on safety and reliability and, more specifically, on the construction of tools to support systems design based on hazard prevention. Given that it can be impossible or too difficult to fully predict the behaviour of a sociotechnical system as a whole, we introduce the notion of relevant hazards to be considered during the design of a system.
In brief, we characterise a well-determined subset of the set of all potential hazards for a system and perform backward induction to identify all initial states and chains of events that can lead to them. We then revise the system design to identify points at which design interventions can either prevent hazards or inject remedial procedures to be taken in case they occur.
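The backward-induction step described here can be sketched as a reachability computation over an explicit state-transition model. The sketch below is ours, not part of the soft-institutions platform; the transition triples, state names and function are hypothetical illustrations of the idea.

```python
from collections import deque

def backward_reachable(transitions, hazard_states):
    """Return (states, chains): every state from which a relevant hazard
    can be reached, and the transitions lying on some path into a hazard,
    i.e. candidate points for design intervention."""
    preds = {}  # index transitions by destination for backward traversal
    for src, event, dst in transitions:
        preds.setdefault(dst, []).append((src, event))
    reached = set(hazard_states)
    frontier = deque(hazard_states)
    chains = []
    while frontier:
        state = frontier.popleft()
        for src, event in preds.get(state, []):
            chains.append((src, event, state))
            if src not in reached:
                reached.add(src)
                frontier.append(src)
    return reached, chains

# Toy model: a lost take-off clearance leaves the UAV idle forever.
transitions = {
    ("standing", "request_takeoff", "awaiting_clearance"),
    ("awaiting_clearance", "clearance_lost", "idle_forever"),  # hazard
    ("awaiting_clearance", "clearance_received", "taxiing"),
}
states, chains = backward_reachable(transitions, {"idle_forever"})
```

Every state in `states` is an initial state or intermediate state of some chain of events leading into a relevant hazard, and each transition in `chains` is a candidate location for a preventive or remedial design intervention.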
We focus on a specific class of sociotechnical systems for which hazard prevention is particularly relevant, namely bounded air spaces containing pilot-controlled aircraft as well as unmanned aerial vehicles (UAVs). We introduce a diagrammatic language to support the characterisation of relevant hazards, of the sequences of events that can lead to them, and of the events with which remedial actions can be associated and kept in store for each relevant hazard.
We also introduce soft institutions as an appropriate platform for hazard prevention based on relevant hazards, and illustrate how soft institutions can be used as a formal counterpart to the diagrams employed to design a system for safe operations in bounded air spaces in which pilot-controlled aircraft share space with UAVs.
This paper is organised as follows:
— In section 2 we detail a characterisation of sociotechnical systems, highlighting as a relevant special case mission planning for coordinated UAVs with diversified levels of autonomy.
— In section 3 we briefly introduce the main concepts related to hazard prevention and characterise in detail the notion of relevant hazards. We also introduce a diagrammatic language to represent sociotechnical systems, aiming specifically at the prevention and analysis of failures.
— In section 4 we illustrate how the proposed diagrammatic language can be used to characterise complex agent interactions in such a way that hazard prevention is supported. As a concrete example, we illustrate how it can be used to support the design of missions in bounded air spaces in which pilot-controlled aircraft share space with UAVs.
— In section 5 we introduce the concept of soft institutions and a corresponding computational platform, and show how it can be used to support hazard prevention in the design of sociotechnical systems.
— Finally, in section 6 we present a brief discussion, conclusions and proposed future work.
2 Sociotechnical systems
A sociotechnical system can be characterised as an open network of heterogeneous interacting entities which can exchange messages and, therefore, coordinate their actions. Some of these entities are engineered and can be programmed to behave according to rules which are explicitly determined and fully understood, even in the cases when they are not fully deterministic; other entities are human centric and therefore their behaviour can, at best, be nudged towards desired patterns of behaviour.
Following Davis et al.[3], we can characterise six facets of sociotechnical systems:
1. People: interacting entities who can have different competences, attitudes, skills and interests, on the basis of which they coordinate their actions with other entities and are considered by other entities in proposals for coordination and collaboration;
2. Technologies and tools: engineered interacting entities with different capabilities to sense, interpret and act upon the environment, on the basis of which they can engage in interactions;
3. Processes/procedures: programs and rules for engineered entities, as well as norms, regulative policies, and sanctioning and incentive mechanisms to steer people towards expected patterns of behaviour;
4. Buildings/infrastructure: environmental resources as well as constraints for interactions;
5. Goals and metrics: used to assess whether the system as a whole, as well as its individual entities, is approaching or diverging from its goals; and
6. Culture: defeasible assumptions and heuristics shared and adopted by groups of entities participating in a sociotechnical system.
Depending on the combination and organisation of these facets, different strategies for the design, implementation and management of sociotechnical systems are most appropriate:
1. Openness to admit or dismiss entities: a system can be closed, partially open or fully open to the admission or dismissal of entities. Partially open systems can require certain conditions to be fulfilled in order to admit or dismiss entities;
2. Coordination levels among entities: a system can be uncoordinated, locally coordinated or globally coordinated. In other words, entities participating in a sociotechnical system can act fully on their own, act on coordination rules involving groups of entities, or act on coordination rules that engage the whole system to behave globally as a mechanism;
3. Heterogeneity of entities in a system: a system can be comprised primarily of humans, thus characterising a social network in which human entities communicate and interact; primarily of technological entities, thus characterising a distributed computational system, possibly containing entities whose behaviour is not fully deterministic; or it can have varying proportions of humans and technological entities;
4. Statefulness: a sociotechnical system can be stateless, i.e. the global state of the system as well as the internal states of entities are static and therefore do not need to be managed; globally stateful, i.e. the global state of the system can change but the internal states of entities are static, so entities can be reactive and their modelling is simplified; or fully stateful, i.e. the global state of the system as well as the internal states of entities are dynamic and must be monitored and managed;
5. Context sensitiveness: updates in the environment can be irrelevant or unnoticeable, in which case context need not be managed; dynamic although irrespective of the states of the system, in which case the system as a whole as well as its components must be able to monitor changes in the environment and adapt accordingly; or dynamic and sensitive to system states, in which case system components must be able to monitor changes in the environment, correlate these changes with their actions and adjust their actions to manage the environment while pursuing their goals.
In the present work we are specifically interested in bounded air spaces in which pilot-controlled aircraft share space with UAVs. In this scenario, a system is typically:
1. Partially open, as aircraft are allowed in and out of the air space provided that well-specified rules and norms are followed;
2. Locally coordinated, as entities communicate and coordinate their actions following strict protocols which induce a hierarchy of control;
3. Heterogeneous, as we are considering autonomous vehicles interacting with pilot-controlled vehicles and with control systems comprised of sensors and actuators as well as human operators;
4. Fully stateful, as the states of individual entities, especially engineered entities, must be stored and managed in order to manage the whole system, particularly with respect to hazard prevention and engineering;
5. Sensitive to system states and to changes resulting from external factors as well as from the consequences of state updates of entities.
Our focus in the present article is on hazard prevention during system design. We are interested in structuring the interactions among entities in this scenario in such a way that all relevant hazards are taken into account and design decisions are made in order to avoid failures, or to build readiness to fix them in case they occur.
3 Hazard prevention based on relevant hazards
We adopt the simplifying assumption that all participating entities have been admitted to the system by following the interaction protocols that characterise it. Entities which do not follow certified interaction protocols are considered as external entities which can influence but are not part of the system and, therefore, are not subject to design decisions related to it.
We also assume that the behaviour of an entity can be completely described by the interactions in which it is prepared to participate. The internal functioning of any entity is not taken into account explicitly. This way, human centered entities can be considered uniformly together with complex engineered entities, and entities can be described using different levels of abstraction, according to the level of detail used to specify each interaction protocol under consideration.
Two fundamental strategies can be considered for hazard prevention during systems design[6]:
1. Avoiding that things go wrong, i.e. anticipating hazards and their corresponding causes, to allow system re-design in order to prevent those causes from occurring; and
2. Ensuring that things go right, i.e. identifying hazards and their corresponding causes, and then looking ahead to events that can be a consequence of those hazards, so that corrective measures can be included in the system for each of the considered failures and/or their causes.
We focus on a subset of the set of all hazards, the relevant ones: those we are in fact able to anticipate during the synthesis and scrutiny of a system design. The design of complex systems that are resilient to failures must combine these two strategies in such a way that all relevant hazards are considered.
In summary, our proposed strategy for hazard prevention during the design of a sociotechnical system is based on the principles outlined in Figure 1.
In order to support this strategy, we introduce a simple diagrammatic language to abstract entities in a sociotechnical system based on interaction protocols. The proposed language is presented in Figure 2.
Each element in the proposed language can be represented using standardised notation as presented in Figures 3 and 4. Our purpose while designing this language was to make it as simple and compact as possible, as well as easy to translate into declarative executable specifications using the existing infrastructure based on soft institutions, as detailed in section 5.
In Figure 3 we depict an entity which can participate in several contexts and assume several states within each of these contexts. For each state there are several interaction protocols which can be triggered by the entity. Some protocols have hand-offs in different contexts and/or states. Interaction protocols are portrayed as graphs inside white rectangles, and hand-offs are represented as dashed arrows connecting graphs.
In Figure 4 we depict all possible types of actions that can belong to an interaction protocol.
As a brief example to illustrate the use of the diagrams, we feature in Figure 5 two entities, namely a UAV and the Air Traffic Control (ATC), during a simple interaction5. In this interaction, the UAV refuels if necessary and then asks for permission to take off. The ATC confirms the permission to take off, and then the UAV changes state from standing to taxiing.
Figure 1 Principles for hazard prevention
5 A detailed example is presented in section 4.
Figure 2 Diagrammatic language to represent entities in sociotechnical systems
Hazard prevention can raise the possibility that the message from the ATC never gets to the UAV. Backward reasoning could suggest that the exchange of messages between the UAV and the ATC should contain additional steps, so that the UAV would acknowledge receipt of the message and the ATC would not stop sending copies of the permission to take-off until receiving an acknowledgment. Forward reasoning could suggest the inclusion of a time-out sensing operation as part of the interaction protocol for the UAV instandingstate, to prevent the UAV from staying idle in case the message from the ATC never arrives. Both strategies could be combined in order to design a system that is resilient to failures.
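The combined remedy just described, an acknowledgment loop on the ATC side and a time-out on the UAV side, can be sketched with two message queues. This is only an illustrative sketch: the function names, message shapes and parameters below are our own assumptions, not part of the paper's protocol language.

```python
import queue

def atc_send_until_ack(channel, ack_channel, message,
                       max_retries=5, timeout=0.1):
    """Backward-reasoning remedy: the ATC keeps re-sending copies of the
    take-off permission until the UAV acknowledges receipt (or retries
    are exhausted)."""
    for _ in range(max_retries):
        channel.put(message)
        try:
            if ack_channel.get(timeout=timeout) == ("ack", message):
                return True
        except queue.Empty:
            continue  # no acknowledgment yet: re-send a copy
    return False

def uav_await_permission(channel, ack_channel, timeout=0.1):
    """Forward-reasoning remedy: the UAV times out instead of idling
    forever when the permission message never arrives."""
    try:
        message = channel.get(timeout=timeout)
    except queue.Empty:
        return None  # timed out: trigger a recovery protocol instead
    ack_channel.put(("ack", message))  # confirm receipt to the ATC
    return message
```

In a real deployment the two ends would run concurrently; the sketch keeps them as plain functions so the two design interventions stay visible in isolation.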
Our purpose in building this diagrammatic language has been to support system designers' activities with a clear and intuitive pictorial language capable of exposing hazards in a system, which can then be addressed accordingly.
In the next section we present a detailed example in which a UAV is followed from standing off-lane through flying to landing. We use this example to illustrate how the proposed diagrammatic language can be used to represent complex systems in operation, to identify hazards and to help refine a system design so that potential hazards receive appropriate care.
4 An illustrative example
In order to show how the proposed diagrammatic language can be used for hazard prevention, we consider a slightly more sophisticated example in which a complete mission for a UAV is depicted and analysed. This mission corresponds to a complete flight, from standing off-lane through flying to landing, and requires interactions involving the UAV and an ATC. The UAV passes through seven states: Standing, Taxiing, Take-off, Initial climb, En route, Approach and Landing.
The diagrams corresponding to each state are depicted in Figures 6 to 12.
In Figure 6 the entity UAV001 is initially switched off and off-lane. It is assumed to be listening to the appropriate channel for a message requiring it to start the engine, which takes entity UAV001 to the UAV context and standing state. The message triggers the interaction protocol depicted in Figure 6. When UAV001 receives a message to start the engine, it updates the knowledge base and performs the action of starting the engine. It then queries the knowledge base to check whether the engine has started. If there is a failure, it tries again to start the engine; otherwise it updates the knowledge base and checks fuel level and systems. If there is a problem, it stops the engine and tries to start again; otherwise it updates the knowledge base and hands off control to an interaction protocol in Taxiing state.
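As an illustration, the standing-state protocol just described can be rendered as a bounded retry loop. All names below (`start_engine`, `fuel_ok`, the `StubUAV` class, and so on) are hypothetical; the sketch only mirrors the control flow of Figure 6, not the platform's actual API.

```python
def standing_protocol(uav, max_attempts=3):
    """Sketch of the standing-state protocol: start the engine, retrying
    on failure; check fuel and systems; stop the engine and retry on a
    problem; otherwise hand off to the taxiing-state protocol."""
    for _ in range(max_attempts):
        uav.start_engine()
        uav.knowledge_base["engine_started"] = uav.engine_running()
        if not uav.knowledge_base["engine_started"]:
            continue  # failure: loop back and try to start again
        if uav.fuel_ok() and uav.systems_ok():
            uav.knowledge_base["ready_to_fly"] = True
            return "taxiing"  # hand off control to the taxiing protocol
        uav.stop_engine()  # remedial action: stop, then try to start again
    return "abort"

class StubUAV:
    """Minimal stand-in used only to exercise the protocol sketch."""
    def __init__(self, failed_starts=0, fuel=True):
        self.knowledge_base = {}
        self._failed_starts = failed_starts
        self._running = False
        self._fuel = fuel
    def start_engine(self):
        if self._failed_starts > 0:
            self._failed_starts -= 1  # simulate a failed engine start
        else:
            self._running = True
    def engine_running(self):
        return self._running
    def fuel_ok(self):
        return self._fuel
    def systems_ok(self):
        return True
    def stop_engine(self):
        self._running = False
```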
The proposed strategies for hazard prevention and recovery have resulted in the loops back to the engine-start message, together with the action of stopping the engine in case fuel and system messages indicate that the UAV is not ready for flying.
In Figure 7 we have two entities, namely UAV001 and ATC001. UAV001 stays in the UAV context but now moves to taxiing state. ATC001 assumes the ATC context and a state in which it can authorise taxiing towards take-off.
The interaction protocol for UAV001 in the UAV context and taxiing state is slightly more complex than the protocol for the standing state. UAV001 sends a message to an entity that is available in the ATC context. In our example, ATC001 receives this message and replies with either take-off OK or take-off denied. If take-off is denied, UAV001 loops back and re-sends the message until take-off is OK. When take-off is OK, UAV001 checks whether power back is required. If it is, UAV001 performs the appropriate operations and checks again. When power back is not required, UAV001 finally performs taxiing and hands off control to an interaction protocol in Take-off state.
In Figure 8, UAV001 moves to take-off state and requests authorisation to take off. If ATC001 authorises take-off, UAV001 performs fuel and systems verification. If something is wrong, take-off is aborted and a new authorisation is requested; if verification succeeds, UAV001 proceeds to take off. If ATC001 does not authorise take-off, UAV001 checks its knowledge base to decide whether to hold take-off or to give up. If the decision is to hold take-off, a new authorisation is requested; otherwise the mission is aborted.
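The decision structure of the take-off state can likewise be sketched as a bounded retry loop. The callables below (`atc_authorises`, `checks_pass`, `hold_policy`) are hypothetical stand-ins for the ATC interaction and knowledge-base queries of Figure 8, and the retry bound is our own addition to keep the sketch terminating.

```python
def takeoff_protocol(atc_authorises, checks_pass, hold_policy, max_rounds=3):
    """Sketch of the take-off-state decisions: request authorisation;
    if granted, verify fuel and systems, aborting the attempt and
    re-requesting on failure; if denied, consult the knowledge base
    (hold_policy) to either hold take-off or abort the mission."""
    for round_no in range(max_rounds):
        if atc_authorises(round_no):
            if checks_pass(round_no):
                return "take-off"
            continue  # verification failed: abort attempt, request again
        if not hold_policy(round_no):
            return "mission aborted"
        # decision is to hold take-off: loop back, request new authorisation
    return "mission aborted"
```

For instance, `takeoff_protocol(lambda r: True, lambda r: r >= 1, lambda r: True)` models a first verification failure followed by a successful second attempt.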
In Figure 9, UAV001 performs the transition from take-off to climb, which is itself a transitional state towards the en route state.
In Figure 10, UAV001 moves to en route state and maintains communication with ATC001 whenever it requests a change in cruise level, until it identifies that it is time to start the descent. When this situation arises, UAV001 requests permission to start the descent. When ATC001 grants permission, UAV001 performs the descent and moves to approach state.
In Figure 11, UAV001 moves to approach state and maintains communication with ATC001 to request permission to start the approach for landing. If meteorological conditions are not adequate, permission is denied and, depending on the conditions, appropriate measures are taken before a second attempt to start the approach. If meteorological conditions are fine, permission is granted and the approach is started. If some operation does not succeed during the approach, UAV001 goes to circling and the approach is restarted; otherwise the approach is finalised and the entity moves to landing, which is the final state in this mission.
Finally, in Figure 12, UAV001 moves to landing state and attempts to perform the landing. If it succeeds, it goes to taxiing and switches off the engines; otherwise it takes off again.
A design tool to support hazard prevention in these terms must allow the representation of complex systems based on this vocabulary, and the exhaustive simulation of interactions involving entities in a system once an event (or set of events) is highlighted. In the next section we introduce soft institutions as an appropriate platform on which to build such a tool.
5 Soft institutions
We argue that soft institutions can be used as a tool to design and implement sociotechnical systems that is particularly useful for hazard prevention, given that the translation from the diagrammatic language presented in the previous sections to interaction protocols in a soft institution is immediate.
Soft institutions generalise the concept of electronic institutions[4, 5, 10] to provide means to model complex systems comprised of human as well as engineered peers[7]. They have been proposed as an appropriate platform to design and implement sociotechnical systems[2].
Electronic institutions are a powerful framework to build systems comprised of multiple entities, based on the principle that the global behaviour of a complex system can be managed by establishing norms, rewards for entities that abide by these norms and sanctions for those who challenge them. In order to participate in an electronic institution, an entity must be prepared to respond to norms, rewards and sanctions, as well as to interact with other participating entities.
Norms, rewards and sanctions in an electronic institution form a normative system, which should be flexible enough to adjust to the observed behaviour of the participating entities. The normative system dictates how entities should behave in order to be allowed into an electronic institution, and an entity (or organisation comprised of entities) must comply with the normative system in order to be able to request participation.
Technological entities can be designed and built to comply with normative systems and, therefore, to participate in electronic institutions. Human entities, however, may feel uncomfortable having to learn, and then submit to, third-party rules as a prerequisite for joining a network of peers.
Soft institutions, in contrast, allow entities to act freely and adjust their behaviour in a minimalist way in order to join local interaction protocols. Instead of centralised control around the normative system (as is the case with electronic institutions), soft institutions have decentralised, possibly asynchronous control, centred on entities which choose to interact according to available protocols. This way, the barrier to entering a soft institution is significantly lower for humans, hence an interaction platform based on soft institutions can be more appealing to human entities than one based on electronic institutions, at the cost of only partial control over the design, operation and management of the resulting system.
From the perspective of hazard prevention, soft institutions are a good modelling language for complex sociotechnical systems, well aligned with the strategy for hazard prevention proposed in section 3. Soft institutions also take as a basic principle that a full account of all states of the systems being modelled is not feasible, hence hazard prevention can only, at best, be based on relevant hazards as characterised in section 3.
Soft institutions are organised in four layers:
1. The entity-controlled layer: caters for the individual capabilities and actions of each entity. Entities can be human individuals (e.g. pilots and flight controllers), technological entities (e.g. aircraft, sensing and communicating devices), or organisations constituted of other entities (e.g. teams of aircraft flying in formation, teams of controllers);
2. The communications layer: comprises the infrastructure and processing power to manage message exchanges between entities. In principle, messaging is peer-to-peer with unique addressing. Additional message control structures can be built using the entity-controlled and communications layers;
3. The coordination layer: consists of social norms that constrain and regulate interactions among selected peers (e.g. rules to enter a controlled air space, navigate in it, interact with other entities and leave the air space);
4. The environment: comprises all other phenomena that can influence the behaviour and state of the soft institution.
Protocols and knowledge bases are specified using three syntactic elements:
1. Terms: constant or atomic expressions of different types;
2. Variables: uniquely identified strings to which different values can be assigned;
3. Functions: collections of mappings from tuples of terms to terms.
Messages are passed from entity to entity via the communications layer. Each entity is assigned a unique ID, and messages depend upon contexts and states to be properly treated. A message M is assumed to have the format M = ⟨…⟩.
The institutional knowledge base also contains two constructs that represent the state of the entity with respect to the soft institution:
1. Comm stores the status of communications. It contains the entity ID and two message queues, containing incoming and outgoing messages respectively.
2. Coord stores the status of coordination. It contains the list of contexts and states already held by the entity, including the current context/state as head of the list, the protocol being followed, the stage of execution of the current protocol and the set of variable assignments/substitutions.
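A minimal sketch of the Comm and Coord constructs, assuming Python data classes. The field and method names are our own rendering of the description above, not the platform's actual API.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Comm:
    """Status of communications: the entity ID plus two message queues,
    for incoming and outgoing messages respectively."""
    entity_id: str
    incoming: deque = field(default_factory=deque)
    outgoing: deque = field(default_factory=deque)

@dataclass
class Coord:
    """Status of coordination: the history of (context, state) pairs with
    the current one at the head, the protocol being followed, the stage
    of execution and the current variable substitutions."""
    history: list        # [(context, state), ...], current pair first
    protocol: str
    stage: int
    substitutions: dict = field(default_factory=dict)

    def current(self):
        return self.history[0]

    def move_to(self, context, state):
        # New context/state becomes the head; old ones are retained.
        self.history.insert(0, (context, state))
```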
Protocols are defined as a variation and extension of the Lightweight Coordination Calculus (LCC)[9], according to the specification presented in Figure 13. Carefully crafted sets of protocols embedded into appropriate states and contexts can implement sophisticated patterns of interaction, servicing large and complex sociotechnical systems. Interaction protocols work as support services for entities to engage in well-regulated and carefully designed interactions, but they are not mandatory and they do not necessarily cover all aspects of all interactions that connect entities participating in the same sociotechnical system. System modelling based on soft institutions can be used to highlight the facets of a system that are considered most relevant. For hazard prevention, relevant hazards can be characterised in detail and simulations can be performed, so that forward and backward reasoning can be applied and the design of a system can be refined and improved towards resilience with respect to failures.
Figure 13 Protocols in LCC
6 Conclusion and future work
In this work we have considered hazard prevention during the design of systems for flight control of autonomous UAVs, based on a diagrammatic language that can be translated into protocols in soft institutions.
Implementations of platforms for soft institutions have already been presented elsewhere[7], and frameworks for formal verification of interaction protocols with respect to desired properties have also been developed[8]. In future work, we plan to employ these systems as a platform to support the activities of safety engineers during the design of complex systems, by providing them with tools to identify potential relevant hazards.
[1] F. Belmonte, W. Schon, L. Heurley, and R. Capel. Interdisciplinary safety analysis of complex socio-technological systems based on the functional resonance accident model: An application to railway traffic supervision. Reliability Engineering and System Safety, 96:237-249, 2011.
[2] F. S. Correa da Silva, P. Papapanagiotou, D. Murray-Rust, and D. Robertson. Soft institutions - a platform to design and implement sociotechnical systems (submitted). In 20th International Conference on Knowledge Engineering and Knowledge Management, Italy, 2016.
[3] M. C. Davis, R. Challenger, D. N. W. Jayewardene, and C. W. Clegg. Advancing socio-technical systems thinking: a call for bravery. Applied Ergonomics, 45:171-180, 2014.
[4] M. Esteva, J. A. Rodriguez-Aguilar, C. Sierra, P. Garcia, and J. L. Arcos. On the formal specification of electronic institutions. In Agent mediated electronic commerce, pages 126-147. Springer, 2001.
[5] M. Esteva and C. Sierra. Electronic Institutions: from specification to development. Consell Superior d'Investigacions Científiques, Institut d'Investigació en Intel·ligència Artificial, 2003.
[6] E. Hollnagel. A tale of two safeties. Nuclear Safety and Simulation, 2013.
[7] D. Murray-Rust, P. Papapanagiotou, and D. Robertson. Softening electronic institutions to support natural interaction. Human Computation, 2(2), 2015.
[8] P. Papapanagiotou, D. Murray-Rust, and D. Robertson. Evolution of the lightweight coordination calculus using formal analysis. Personal communication, 2016.
[9] D. Robertson. Multi-agent coordination as distributed logic programming. In Proceedings 20th International Conference on Logic Programming, Springer LNCS 3132, pages 416-430, 2004.
[10] C. Sierra, J. A. Rodriguez-Aguilar, P. Noriega, M. Esteva, and J. L. Arcos. Engineering multi-agent systems as electronic institutions. European Journal for the Informatics Professional, 4(4):33-39, 2004.
[11] E. Trist. The evolution of socio-technical systems. Occasional paper, 2, 1981.
DOI: 10.19416/j.cnki.1674-9804.2017.03.018
* This work has been partially supported by FAPESP-Brazil and by the EPSRC UK. The present article is a revised and extended version of the article Hazard identification for UAVs based on soft institutions, by the same authors, presented at the workshop Coordination, Organisations, Institutions and Norms at AAMAS 2017. Many important comments and criticisms on early versions of this work have been generously provided by Dr. David Murray-Rust (Edinburgh, UK) and Dr. Amanda Whitbrook (Derby, UK).