i. Abstract
The Incident Management Information Sharing (IMIS) Internet of Things (IoT) Pilot established the following objectives.
· Apply Open Geospatial Consortium (OGC) principles and practices for collaborative development to existing standards and technology to prototype an IoT approach to sensor use for incident management.
· Employ an agile methodology for collaborative development of system designs, specifications, software and hardware components of an IoT-inspired IMIS sensor capability.
· Develop profiles and extensions of existing Sensor Web Enablement (SWE) and other distributed computing standards to provide a basis for future IMIS sensor and observation interoperability.
· Prototype capabilities documented in engineering reports and demonstrated in a realistic incident management scenario.
ii. Business Value
The IMIS IoT Pilot aimed to develop, test and demonstrate the use of networked sensor technologies in a real-world scenario, developed in collaboration with the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and first responder stakeholders. This pilot demonstrated an IoT approach to sensor use for incident management. Prototype capabilities included ad hoc, nearly automatic deployment, discovery and access to sensor information feeds, as well as derivation of actionable information in common formats for use in Computer Aided Dispatch (CAD), Emergency Operations Center (EOC) and Geographic Information System (GIS) platforms, as well as on mobile devices.
1 Introduction
1.1 Scope
The Incident Management Information Sharing (IMIS) Internet of Things (IoT) Pilot developed, tested and demonstrated the use of networked sensor technologies in a real-world scenario developed in collaboration with the Department of Homeland Security (DHS) and first responder stakeholders. Among the types of sensors tested were in situ environmental sensors, wearable sensors and imaging sensors on mobile platforms such as Unmanned Aerial Vehicles (UAV) and autonomous vehicles. The key objectives of the IMIS IoT Pilot were to:
• Apply IoT principles to sensing capabilities for incident management;
• Test the feasibility of ad hoc sensor deployment and exploitation by first responder groups (e.g., law enforcement, fire, emergency medical and emergency management);
• Prototype a standards-based architecture for sensor-derived situational awareness that is shared across multiple responder organizations; and
• Create IoT specifications and best practices for incident management through a process of broad collaboration among stakeholders, rapid iterative development and running code.
1.2 Document Contributor Contact Points
All questions regarding this document should be directed to the editor or the following contributors:
| Name | Organization |
|---|---|
| Greg Schumann | Exemplar City, Inc. |
| Josh Lieberman | Tumbling Walls |
| Simon Jirka | 52°North Initiative for Geospatial Open Source Software GmbH |
| Marcus Alzona | Noblis |
| Farzad Alamdar | The University of Melbourne |
| Mike Botts | Botts Innovative Research Inc. |
| Roger Brackin | Envitia |
| Chris Clark | Compusult |
| Flavius Galiber | Northrop Grumman Corporation |
| Mohsen Kalantari | The University of Melbourne |
| Steve Liang | SensorUp |
1.3 Revision History
| Date | Release | Editor | Primary Clauses Modified | Description |
|---|---|---|---|---|
| 2015-11-05 | 0.5 | Greg Schumann | All | First draft |
| 2016-03-08 | 0.6 | Greg Schumann | All | Posted draft |
| 2016-08-05 | 0.7 | Josh Lieberman | All | Editorial update and fill-in |
| 2016-08-18 | 0.8 | Josh Lieberman | All | Response to DHS review |
1.4 Future Work
This Engineering Report (ER) is intended to provide recommendations on the development of IMIS profiles of different OGC standards. Thus, the recommendations on future work can be found at the end of each section.
1.5 Pilot Overview
This pilot explores emerging IoT-enabled technologies as a means to provide first responders with better situational awareness and communications. Evolving IoT technologies now make it possible to establish basic network connectivity with sensors automatically, as soon as they are deployed. Basic connectivity is not enough, however. Actionable observations, analysis, alerts and predictions are needed. They must be easily discoverable and accessible from emergency response information systems and mobile devices to provide a dynamic and shared view of changing conditions. Standards are needed to make sensors easily and immediately identifiable, accessible, usable and useful across all teams (on-scene and Operations Centers) and information management platforms joining an incident response.
1.6 OGC Features and IoT Things
Although IoT principles and practices have been developed largely by the World Wide Web and Industrial Internet communities, an IoT approach to sensors and sensor observation data is a natural one for a standards organization such as OGC, which has been developing standards for publishing geospatial Web services and enabling webs of sensors for more than a decade. The key bridging concept is that the “things” which IoT connects to the Internet and Web are precisely the defined, located real world features that form the basis of almost all OGC standards. IoT eases the difficulty of working with networked sensor information, while OGC contributes the necessary rigor to define the Things being observed and the properties of Things being measured. The combined SWE-IoT architecture implemented in this Pilot activity is intended to leverage and reconcile both sets of standards and engineering practices.
1.7 Foreword
This Engineering Report (ER) documents the SWE-IoT Architecture used in the IoT Pilot demonstration. The SWE-IoT Architecture includes networked sensors to quickly make a wide range of pertinent observations of an incident environment and its effects on people, including responders themselves.
2 Architecture
2.1 Layered Protocols – Segmentation and Integration
Establishing interoperable standards is a central requirement for enabling ad hoc integration of sensor resources (e.g., catalogs, data access services, portrayal services). To ensure coverage of a broad range of use cases and application domains, many OGC standards were intentionally defined in a flexible manner. This flexibility leads to the need to specify profiles for specific application domains, which restrict the flexibility and thereby further increase interoperability. Recommendations on such profiles are provided in the IMIS Profile Recommendations for OGC Web Services ER. During the IMIS IoT Pilot, the following standards were used.
- OGC Web Service Common (OWS Common; OGC 06-121r9): Specification of common aspects for all OGC interface standards. These include, for example, the definition of the GetCapabilities operation and the Capabilities response structure, as well as the definition of principles for XML (eXtensible Markup Language) and KVP (Key-Value Pair) encodings of operation requests and responses.
- OGC Catalog Service Implementation Specification (OGC 07-006r1): This interface specification enables clients to publish and/or discover geospatial datasets as well as services. It provides the metadata needed to decide whether specific datasets or services can be used by clients to fulfill a certain goal. The standard specifies interfaces and bindings for publishing and accessing digital catalogs of metadata for geospatial data, services and related resource information. The interface provides operations for managing the metadata records, e.g., for harvesting records, discovering metadata records, describing record types or querying certain records.
- OGC Web Map Server (WMS; OGC 06-042): The OGC defined the WMS standard for publishing and retrieving maps as images (e.g., providing background maps or pre-rendered satellite data). A WMS server lists its available map layers in the Capabilities document and allows retrieval of these layers with several query parameters (e.g., Bounding Box, using the GetMap operation). The optional GetFeatureInfo operation provides additional information about the features located at a certain pixel.
- OGC Web Feature Service (WFS; OGC 09-025r1/ISO 19142): The WFS standard specifies a service interface for retrieving geographic features (vector data) encoded in GML (Geographic Markup Language). The supported feature types are listed in the capabilities document. The DescribeFeatureType operation provides a description of a specific feature type. The central operation is the GetFeature operation which allows users to query features from a WFS server. Further optional operations are available, such as the Transaction operation for inserting, updating or deleting features.
- OGC Sensor Observation Service (SOS; OGC 12-006): The OGC SOS standard was developed to provide a standard web service interface for accessing sensor observations. The standard also provides the ability to retrieve a sensor system description using a DescribeSensor request, which typically returns a SensorML document. Two alternatives exist for retrieving observation values from an SOS. The GetObservation operation is the core operation for retrieving observations using several filters for different observation properties. The response to a successful GetObservation request is an Observation, which includes metadata about the observations as well as one or more measurement values. An alternative approach for retrieving archived or real-time measurements is the combination of GetResultTemplate and GetResult requests. The latter approach was designed to provide maximum efficiency for accessing results, including complex tuple or time series data, provided as single values, large blocks of ASCII or binary data, or streaming data. The response to a GetResultTemplate request is a Sensor Web Enablement (SWE) Common Data description of the data components, data structure and data encoding. This needs to be called only once for the client software to understand how to parse the data values; the GetResult request then returns only the requested data values with no metadata.
Several extensions exist for transactional data publication or for result handling in cases where the same request and response metadata should not be repeated in each message. The Transactional Sensor Observation Service (SOS-T) transaction operations enable a sensor and/or sensor hub (S-Hub) to push measurements into a local or remote SOS-T instance, residing perhaps in the cloud.
- 52°North (52N) and Open Sensor Hub (OSH): Two open source software stacks that implement, and ease the deployment of, OGC SWE standards such as SOS, Sensor Planning Service (SPS), SensorML, Observations and Measurements (O&M) and SWE Common. Both were deployed by various teams to meet the needs of this scenario. This highlights one of the advantages of using open standards such as SWE: one is not dependent on a single software stack or vendor, as long as the various software components are built upon, and are compliant with, the approved standards.
- OGC SensorThings API (STA): This standard (currently in the adoption vote process) provides an interface for the retrieval of observation data relying on O&M. In contrast to SOS, STA services are based fundamentally on Representational State Transfer (REST) principles and specify JavaScript Object Notation (JSON) as the default encoding for observations. As such, STA is particularly Web-friendly and lightweight, and STA services make it relatively easy to develop browser-based IoT client applications. Example client requests against both SOS and STA services are sketched following this list.
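To make the two access patterns concrete, the following sketch shows how a client might retrieve observations from an SOS endpoint via a KVP GetObservation request and from an STA endpoint via its REST/JSON resource paths. The service URLs, offering identifier and observed property URI are hypothetical placeholders, not components of the Pilot.

```python
# Minimal sketch: retrieving observations from SOS (KVP) and STA (REST/JSON).
# All endpoints and identifiers below are hypothetical placeholders.
import requests

SOS_URL = "https://example.org/sos/service"  # hypothetical S-Hub SOS endpoint
STA_URL = "https://example.org/sta/v1.0"     # hypothetical S-Hub STA endpoint

# SOS 2.0 KVP GetObservation, filtered by offering and observed property.
sos_params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "airQualityStation1",                        # hypothetical
    "observedProperty": "http://example.org/phenomena/Cl2",  # hypothetical
}
sos_response = requests.get(SOS_URL, params=sos_params)
print(sos_response.status_code, sos_response.headers.get("Content-Type"))

# STA REST/JSON: the five most recent observations of a datastream,
# using the standard $top and $orderby query options.
sta_response = requests.get(
    f"{STA_URL}/Datastreams(1)/Observations",
    params={"$top": 5, "$orderby": "phenomenonTime desc"},
)
for obs in sta_response.json().get("value", []):
    print(obs["phenomenonTime"], obs["result"])
```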
2.2 Sensor Hub (S-Hub) Services
S-Hub Services are gateways between one or more local sensor devices on one side and Internet users of the sensors on the other. S-Hubs provide standard, predictable and interoperable access to minimally connected, often proprietary sensor devices and are vital to both SWE and IoT. Within the SWE-IoT architecture, S-Hubs are components implemented as software stacks that conform to OGC SWE standards as well as other industry standards, in order to fill this mediation role. S-Hubs provide standard protocols and encodings for accessing real-time or archived observations, as well as for tasking sensor or actuator systems.
Two overlapping flavors of S-Hubs were developed in the IoT Pilot: those mainly supporting the original OGC SWE suite of services and those focusing on the REST-based approach of STA services. Both S-Hub types supported real-time and archived observations and complemented each other’s capabilities.
Software stacks that implement S-Hub services can typically be deployed on a range of platforms and at a range of scales. For instance, OSH has been deployed on platforms ranging from Android smartphones, tablets and microcontroller boards (e.g., Raspberry Pi and Arduino) to Linux/Windows/OS X devices and the Amazon Web Services (AWS) Cloud. Specific computing hardware may be required to support specific network protocols and sensor devices. It is primarily the software implementation of open standards, however, that enables the interoperability required for an easily deployed sensor web and IoT.
One of the powerful aspects of the design and implementations of OGC-standard S-Hubs, as shown in Figure 2-1, is that they can be distributed throughout the global environment and can be hierarchically deployed. Thus, an S-Hub might be deployed onboard an Arduino-based sensor system providing tasking and observation capabilities. This S-Hub might be one of hundreds of S-Hubs that are managed and made accessible to the public through a local or regional S-Hub, while an S-Hub deployed in the cloud might receive observations from a collection of locally deployed S-Hubs in order to provide large-scale persistent storage and advanced processing. Processing and data storage can occur anywhere within such a distributed architecture thereby allowing one to configure particular data access, tasking, processing and storage capabilities on whichever S-Hubs make the most sense.
The design of these S-Hubs provides important scalability. This scalability allows the S-Hub software to be deployed and configured on platforms ranging from simple microprocessor boards to cloud-based services, and allows an S-Hub to support everything from simple, everyday, mass-market sensors to highly specialized and complex national sensor systems.
Another important aspect of the S-Hubs is their ability to support the original purpose of a sensor deployment while still meeting additional needs. For example, with proper authorization, a video camera deployed for store security could be repurposed to view an emergency event. Similarly, a laser rangefinder deployed to measure remote locations could be used to task a web camera to look at a particular geospatial location or to guide an Unmanned Aerial System (UAS) to a particular place.
2.3 IoT, Web of Things (WoT) and O&M
The essence of IoT principles is that real world things have corresponding identities on the Internet, i.e., Internet Protocol (IP) addresses, that support access to the values of Thing properties, whether those are temperature, color, or simply how the Thing appears in an image. The most common implementations, however, focus on the use of URLs and Web protocols to provide identity and access; this specialization of IoT is known as the Web of Things (WoT). The connectedness of WoT carries additional benefits in expressing the OGC O&M model for the various types of interrelated information that make up the process of sensing and measuring properties of real world Things. The SWE-IoT Pilot architecture leverages these benefits in several ways, especially in the context of STA services, which provide URL links explicitly for such connections.
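As an illustration of these explicit links, the sketch below walks from Things to their Datastreams and latest Observations using STA navigation links and the $expand query option. The service root is a hypothetical placeholder.

```python
# Minimal sketch: following the URL links that SensorThings provides between
# O&M entities (Thing -> Datastream -> Observation). Hypothetical service root.
import requests

STA_URL = "https://example.org/sta/v1.0"

# $expand pulls linked entities inline in a single request.
things = requests.get(
    f"{STA_URL}/Things",
    params={"$expand": "Locations,Datastreams"},
).json()["value"]

for thing in things:
    print("Thing:", thing["name"])
    for ds in thing.get("Datastreams", []):
        # Each entity carries navigation links to its related collections.
        obs_link = ds["Observations@iot.navigationLink"]
        latest = requests.get(
            obs_link, params={"$top": 1, "$orderby": "phenomenonTime desc"}
        ).json()
        for obs in latest.get("value", []):
            print("  ", ds["name"], "->", obs["result"])
```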
2.4 Registration and Discovery
Key to the IMIS IoT is the registration and subsequent discovery of sensors allowing them to be exploited. A number of key components work together to provide the infrastructure for discovering sensors; these are the S-Hub, the Hub Catalog (HubCat) and the Web Registration Processing Service (WRPS).
- The S-Hub provides either an OGC SOS compliant interface or an OGC SensorThings API (STA) compliant interface. SOS includes the key GetCapabilities request, which reports both the functional capabilities (supported functions) and the primary content of the service. SensorThings services have systematically defined URLs that likewise allow users to retrieve the functions and content of the service. This allows SOS and STA service metadata to be harvested and cataloged.
- The second component, the HubCat, provides the metadata record management and discovery interfaces that enable clients to identify relevant services. It is not enough just to find a service; to operate effectively, a client needs to be able to quickly qualify the relevance of the sensors accessible through a given S-Hub. The HubCat is the key to this capability.
- The Web Registration Processing Service (WRPS), a utility service, facilitates the population of HubCat metadata records. Although it would be possible for an S-Hub to populate the HubCat itself, this is a relatively complex process requiring strict metadata record formatting and interpretation of service information to generate key semantics, such as the phenomenon type that a particular sensor measures. The WRPS takes the SOS or STA URL, harvests the required metadata and populates the HubCat, returning the Universally Unique Identifier (UUID) of the entry to the client. The trigger to initiate this service is typically the S-Hub itself, although it could be another actor depending on the architecture. The WRPS is also involved in “de-registering” sensors and S-Hubs in dynamic ad hoc sensor network environments.
The HubCat is the primary focus for clients (either user interfaces or other web services) to discover S-Hub services, sensors and sensor observations. The HubCat for the Pilot was a component of Compusult’s Web Enterprise Suite (WES). This component is an implementation of the OGC Catalog Services for the Web (CSW) standard, v2.0.2. As its datastore model, it uses the OASIS-standard ebXML Registry Information Model (ebRIM 3.0) and so is termed a CSW-ebRIM service. The flexibility of this standard means that no bespoke code for a particular type of metadata needs to be written into the catalog software; only an information model configuration, known as an ebRIM Registry Extension Package (eREP), needs to be loaded. The eREP defines classification schemes, record types, associations and other structural elements needed to catalog sensors. It can be thought of as similar to a relational database schema definition (DDL).
Ingestion of sensor and service metadata then involves mapping the metadata elements into the eREP-defined elements. Both the eREP and the mappings evolved during the Pilot, but they may be formalized as a standard profile of the ebRIM model so that HubCat implementations maintain broad interoperability among themselves.
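As a sketch of what HubCat discovery can look like at the protocol level, the following CSW 2.0.2 KVP GetRecords query filters catalog records by free text. The endpoint and the CQL constraint value are hypothetical placeholders.

```python
# Minimal sketch: a CSW 2.0.2 KVP GetRecords query against a HubCat-style
# catalog. The endpoint and constraint are hypothetical.
import requests

CSW_URL = "https://example.org/hubcat/csw"  # hypothetical HubCat endpoint

params = {
    "service": "CSW",
    "version": "2.0.2",
    "request": "GetRecords",
    "typeNames": "csw:Record",
    "resultType": "results",
    "elementSetName": "full",
    "constraintLanguage": "CQL_TEXT",
    "constraint_language_version": "1.1.0",
    # Hypothetical constraint: records whose text mentions air quality.
    "constraint": "csw:AnyText LIKE '%air quality%'",
}
response = requests.get(CSW_URL, params=params)
print(response.status_code)
print(response.text[:500])  # start of the csw:GetRecordsResponse XML
```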
2.4.1 Registration Process
The overall registration process is shown in the sequence diagram below (Figure 2-2). When an S-Hub boots or comes online, it sends a request to the publishing service, which then harvests the S-Hub capabilities and populates the catalog as necessary. The publishing service returns the UUID of the entry so that the S-Hub can subsequently update or remove the entry as its status changes.
This workflow depends on the S-Hub knowing which catalog or publishing service it needs to connect to. An alternative, which might be relevant in some circumstances, is for an external trigger to perform the ‘Add’ request. A sketch of the boot-time handshake follows.
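The Pilot’s actual WRPS interface is not reproduced here, but assuming a WPS-style Execute request with a hypothetical process identifier and input names, the boot-time handshake might look like this sketch.

```python
# Minimal sketch: boot-time registration modeled as a WPS 1.0.0 Execute
# request to the WRPS. Endpoint, process identifier and input names are
# hypothetical; the Pilot's actual signatures may differ.
import requests

WRPS_URL = "https://example.org/wrps"             # hypothetical WRPS endpoint
SHUB_URL = "https://example.org/sos/service"      # this S-Hub's own service URL

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "RegisterService",               # hypothetical process id
    "datainputs": f"serviceURL={SHUB_URL};serviceType=SOS",
}
response = requests.get(WRPS_URL, params=params)

# The WRPS harvests the S-Hub capabilities, populates the HubCat, and
# returns the UUID of the new catalog entry; the S-Hub stores that UUID
# for later update and de-registration requests.
print(response.text)
```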
2.4.2 Update Process
The update process is initiated by the S-Hub requesting an update using the ID returned during the registration process, as shown in Figure 2-3.
2.4.3 De-Registration Process
A similar process occurs when an S-Hub shuts down: it initiates a de-registration process using the ID returned during registration (Figure 2-4). The result is that the HubCat only shows sensors that are currently registered (and, by implication, operational).
Of course, while de-registration could remove an S-Hub record from the HubCat (the method used in the experiment), it could also just mark it as offline or manage its availability in other ways. This is a decision related to the permanence of the sensor and the need to keep records of sensor availability/use.
Compusult’s implementation of the HubCat will poll any registered services at a configurable rate, and change the status of the service from online to offline or vice versa if required.
2.4.4 Catalog WMS
Both Compusult and Envitia CSW clients were used to access the HubCat directly. The limited number of clients able to interact with a CSW-ebRIM service was mitigated by the additional provision of a CSW-linked Web Map Service (CSW-WMS), which provided, in effect, access to the catalog contents as phenomenon-specific map layers. Clients could select specific phenomena and visualize all sensors, from all registered S-Hubs, providing measurements for those phenomena. The WMS GetFeatureInfo operation returned more detailed information for a given measurement on the map, including links to invoke a complete SOS Web client to view the observation records. Envitia exploited this in the demonstration for use in lightweight and mobile clients, even though a CSW-ebRIM client search was available in the Envitia Client. The CSW-ebRIM query is useful for advanced queries, but the CSW-WMS provides a useful graphical shorthand for accessing available data for each phenomenon type.
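As an illustration of this graphical shorthand, the sketch below requests a phenomenon-specific layer from a CSW-linked WMS as a PNG image. The endpoint, layer name and bounding box are hypothetical placeholders.

```python
# Minimal sketch: retrieving a phenomenon layer from the CSW-WMS via a
# WMS 1.3.0 GetMap request. Endpoint, layer and bbox are hypothetical.
import requests

WMS_URL = "https://example.org/catalog-wms"  # hypothetical CSW-WMS endpoint

params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "phenomenon:air_quality",   # hypothetical phenomenon layer
    "styles": "",
    "crs": "EPSG:4326",
    "bbox": "34.60,-86.65,34.70,-86.55",  # lat/lon axis order in WMS 1.3.0
    "width": "800",
    "height": "600",
    "format": "image/png",
}
png = requests.get(WMS_URL, params=params)
with open("sensors_air_quality.png", "wb") as f:
    f.write(png.content)
```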
2.4.5 Discovery Process via Catalog Services
Once ‘auto-registered,’ all SOS services, STA services and WMS services deployed before or during an incident can be discovered via the HubCat. Because the HubCat implements the CSW-ebRIM service, it is extremely flexible and technically able to catalog significantly more artifact types than just sensors. In fact, the HubCat was used to also catalog available maps, OGC Web Services (OWS) Context documents and other artifacts discussed later.
A wide range of other service catalogs may of course exist, keeping track of a wide range of framework or other geospatial data. Envitia also provided a CSW-ebRIM service (HubCat2) to catalog geospatial data and imagery as background to the incident location. Because its interface conformed to the same standard as the HubCat’s, and the discovery client already accessed the Envitia CSW-ebRIM service, it was possible with very little configuration to allow the client to access both vendors’ services. Situations with multiple catalogs are likely in actual deployments, and so the configuration offered a useful demonstration.
The sequence in Figure 2-5 shows the initial discovery of information from both HubCat and HubCat2, and the subsequent storage of an OWS Context document (describing the collection of information assembled from the discovery process by the client user) in HubCat2, although this could also be stored in the Compusult HubCat.
Note that the Envitia Client actually performs a relatively complex federated query. The user asks for all data in a given geographic area. The query is issued independently to both catalog services, which subsequently respond. The Envitia Client then combines the results and presents them on the map display and in list form. This avoids the user needing to query each catalog in turn.
An alternative model, commonly used to simplify the client, is to have a federating catalog service (again a CSW-ebRIM service) that acts as a single point of presence but, when a query arrives, distributes it to all connected catalogs.
2.4.6 Discovery via OWS Context Documents
Another key method of discovery used during the experiment was discovery via contextual views, which separates discovery from search. The concept is that one user or group of users, typically in a command center, prepares views and saves them into the catalog. These views can then be discovered and loaded by mobile users, for example. Search in this case can be very simple; for example, finding all context documents in a specific area is much more limited than finding all data. This was the process used in the experiment, where the utilities user, using the Envitia InSight Client, simply searched for views and loaded them (Figure 2-6).
To move away from search altogether, other options include direct emailing of an OWS Context document to a user, or the more advanced approach of setting up Communities of Interest (COI) within the catalog. The mobile user can log in as a community member and will see icons showing the views in that community, providing immediate access to the relevant operational view. This process was demonstrated in the desktop environment with the Envitia Horizon GeoPortal using a single COI.
2.5 Events and Notification
Figure 2-7 shows an overview of the event notification architecture. The central element is the Web Event Processing Service (WEPS), a Web Processing Service for event processing, which controls the overall workflow. On the one hand, the WEPS receives event subscriptions from clients through WPS Execute requests. On the other hand, it controls the event processing module, which performs the analysis and pattern matching of incoming sensor data streams against the event pattern rules contained in the event subscriptions. To push all relevant new observations into the event processor, a feeder is used that regularly checks a data source (in this case, SOS servers) for new observations. As soon as a new observation is available, it is pushed into the event processor. Finally, the output of the event processor (i.e., all detected events that match a subscription) is sent to the Notification Store. This is an RSS-based component, so clients can consume RSS feeds containing those notifications that correspond to their subscriptions. A more detailed explanation of this architecture is provided in the IMIS Profile Recommendations for OGC Web Services ER.
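The following sketch illustrates the subscribe-and-consume pattern described above: a WPS Execute request registers an event-pattern rule with the WEPS, and the client then polls the Notification Store’s RSS feed. The process identifier, input names, rule syntax and feed URL are all hypothetical placeholders; the actual interface is detailed in the IMIS Profile Recommendations for OGC Web Services ER.

```python
# Minimal sketch: subscribe to an event pattern via a WPS Execute request,
# then poll the RSS-based Notification Store. All identifiers hypothetical.
import time
import requests

WEPS_URL = "https://example.org/weps"  # hypothetical WEPS endpoint

# Subscribe: a WPS 1.0.0 Execute request carrying an event-pattern rule,
# e.g., "notify when chlorine concentration exceeds a threshold."
subscribe_params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "EventSubscription",              # hypothetical process id
    "datainputs": "rule=Cl2_concentration gt 3.0",  # hypothetical rule syntax
}
subscription = requests.get(WEPS_URL, params=subscribe_params)
print(subscription.status_code)

# Consume: poll the RSS feed of the Notification Store for matching events.
FEED_URL = "https://example.org/notifications/rss"  # hypothetical feed URL
for _ in range(3):  # a few polling cycles for illustration
    feed = requests.get(FEED_URL)
    # A real client would parse the <item> entries for new alerts.
    print(len(feed.text), "bytes of RSS received")
    time.sleep(30)
```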
3 Scenario Overview
3.1 Context
The incident occurs in the late summer, during the workweek at the beginning of rush hour. A cold front is approaching and expected to impact the vicinity within two hours. A tractor trailer truck carrying unknown cargo is traveling north on Memorial Parkway (the main north/south artery through town) near the Airport Road intersection that parallels a rail track (Figure 3-1). The truck collides with another vehicle as it approaches the Airport Road overpass, loses control and crashes through the barrier at the crest of the overpass, tumbling some 15 feet onto the congested intersection below.

The truck lands on the traffic in the intersection; its cargo dislodges, knocking down power lines and striking numerous cars, buildings and a transformer. The truck comes to rest near the railroad track, and the cargo, which appears to be numerous (15-20) very large cylinders (approximately 7-8 feet in length), is strewn about the crash scene. Some cylinders are dangerously close to a multi-story masonry commercial building adjacent to an apartment complex. Several vehicles are trapped under and blocked by the truck and cargo, resulting in a complete traffic stoppage on both sides of the highway as well as on local roadways. Several other cars and trucks are also involved in the accident, both on the northbound side of the Memorial Parkway overpass and at the Airport Road intersection below.

There are numerous injuries at both crash sites. One or more vehicles are in danger of catching fire, and several cylinders of the truck’s unknown but possibly hazardous (i.e., toxic, volatile, flammable) cargo begin to leak. The Department of Transportation placard on the truck is absent or not visible due to the wreckage and the location of the other vehicles. North- and southbound traffic on Memorial Parkway comes to a halt, as does the west- and eastbound traffic on Airport Road. People are beginning to emerge from their vehicles; some nearest the crash scene are gasping for breath.
Bystanders immediately call 911 and begin tweeting photos and descriptions. City law enforcement arrives a minute later and calls dispatch for fire, hazmat and emergency medical responders from the main city jurisdiction. The dispatcher notes the proximity of the rail line involved and alerts the proper authorities that the track is involved in the accident scene.
The initial priorities are to alert the rail line; contact the trucking company to determine the type and amount of cargo being transported; triage the scene and establish awareness of the situation; determine lifesaving requirements; and determine the extent and magnitude of the incident as law enforcement begins to secure the scene and deal with the traffic. A site for the on-scene incident command is established by the city fire district chief at a safe setback distance from the scene (Joe Davis Stadium). The district chief makes an initial situation report to the city Public Safety Answering Point (PSAP) describing the situation as a commercial truck accident with dislodged cargo that is leaking (as yet unknown) contents, threat of fire, and multiple injuries at both crash sites. The district chief requests additional alarms, hazmat response, an emergency medical services (EMS) task force and for the police commander to report to the command post. The chief quickly develops the initial strategy for this incident, which includes deactivating the downed power line, rescuing the walking wounded who can be safely reached, extinguishing fires, and, as a top priority, containing the spilled/leaking cargo and the diesel fuel now leaking from the truck’s onboard fuel tanks. The command post, scene perimeter and hot zone are immediately established.

Available staff quickly pull up data from multiple sources to characterize and visualize both incident scenes, including resource staging, incident perimeter, access and egress points, as well as potential spill and flow patterns and/or smoke plume size and direction. The wind is currently blowing out of the west, across Memorial Parkway and onto nearby John Hunt Park; there are several soccer teams practicing in the park and hundreds of people in the area. The National Weather Service (NWS) forecast office has been monitoring an approaching cold front expected to hit the area within the next few hours and is now working with local hazmat teams and the emergency management agency (EMA) to generate continually updated plume models based on current and changing weather conditions. It is anticipated that the cold front could cause the winds to shift to a more westerly flow, further impacting populated areas around the incident scene.
Responding organizations contribute a variety of sensor resources to aid awareness of the situation. City law enforcement and fire service task camera-mounted vehicles and personnel, and request a video-equipped UAS as well as access to other existing cameras (both public and private), such as property surveillance/security cameras, TV station weather cameras and traffic cameras, to contribute periodic imagery of the incident area. Hazmat teams begin to arrive onsite, donning wearable biometric sensors for monitoring personnel entering and exiting the scene, placing environmental monitoring sensors around the scene, and deploying a UAS equipped with video and air quality sensors. All begin to transmit data about the location and concentration of the hazardous materials. Initial investigation reveals the cargo is approximately 15-20 one-ton cylinders of chlorine gas. An unknown number of cylinders dislodged from the truck on impact and are strewn about the crash scene. A mutual aid request is issued to the adjoining county for additional hazardous material resources. Each jurisdiction (city and county) dispatches its first responder resources via Computer Aided Dispatch and stands up its Emergency Operations Center (EOC). Further requests go out for any information regarding the truck’s payload and the number of cylinders onboard. The UAS video confirms that the payload is chlorine gas, and 17 cylinders are quickly located. The hazmat teams deploy portable hazmat sensors around the incident perimeter to track migration and concentration of the gas. The data are used to evaluate shelter-in-place strategies and safe evacuation routes for the immediate area. In the EOC, GIS analysts work closely with the hazmat teams and the NWS local forecast office to monitor the changing weather conditions and to develop situation products, such as migration models, to share with all stakeholders (and to set triggers/alerts on the air sensor observations to guide evacuation planning). City emergency managers activate agreements with managers of a nearby building and access building sensor systems to monitor internal environmental conditions for a possible evacuation and/or to determine suitability as a shelter-in-place site.
The multi-building apartment complex is evaluated for evacuation as there is concern about both hazardous material and fuel seepage into the underground infrastructure and explosion/fire impact to the buildings. Downed power lines interrupt electricity to the building as nightfall approaches.
Public observations on social media begin to report symptoms and locations of citizens impacted in the area. Social media serves to detect and map a migration of leaked fluid into nearby underground infrastructure (mainly storm sewers) and a natural creek, triggering a revision of the incident perimeter, re-deployment of medical responders in the vicinity, and call-up of an environmental management unit with hazmat cleanup capabilities. Some citizens report a strong smell of diesel fuel emanating from storm drains west of the incident site, approaching the nearby Spring Branch waterway. The authorities use social media to alert citizens in the impacted areas and to provide public evacuation routes and shelter-in-place safe zones. The media monitor the authoritative social media feeds (EMA, PD, FD and NWS) to stay informed of rapidly changing conditions.
As the threats of fire and hazardous materials are contained and accident victims are treated and evacuated, fire and rescue are required to canvass buildings in the immediate area to search for and treat victims who have been sheltering in place. Northbound traffic on Memorial Parkway is temporarily re-routed (through the Martin Road gate of Redstone Arsenal), and the situation evolves from response to recovery. Deployed sensor units are recovered, maintained and stored for future use. Links to incident data are organized as a record of the incident response for use in retrospective training/learning activities and for tracking any subsequent effects on incident responders or victims.
3.2 Roles
- Citizens – Report the accident, provide situational awareness through social media updates and are evacuees. Citizens also represent accident victims.
- 911 operator(s) – Respond to 911 calls and gather additional situational awareness. Operators connect to the appropriate first responder dispatch to respond to the accident.
- First responders – Includes firefighters, police/law enforcement and EMS personnel. Hazmat crews are specially trained firefighters.
- Public utility crews – Neutralize risks from downed power lines and leaking gas and water lines.
- On-scene commander – Provides command and control of the incident response. The on-scene commander is a senior first responder, usually a fire chief.
- Emergency Operations Center (EOC) – Carries out the principles of emergency preparedness and emergency management, or disaster management functions, at a strategic level during an emergency. This centralized command and control facility ensures continuity of operations. The EOC is responsible for the strategic overview, or big picture, of the disaster and does not normally direct field assets, instead making operational decisions and leaving tactical decisions to lower commands. The common functions of all EOCs are to collect, gather and analyze data; make decisions that protect life and property; maintain continuity of the organization within the scope of applicable laws; and disseminate those decisions to all concerned agencies and individuals. In most EOCs, the emergency manager is the individual in charge.
3.3 Narrative Cycle
The IMIS IoT Pilot demonstration was arranged in five stages, consistent with a standard disaster response.
- Notification – 911 call, build incident, citizen observations, backup response, tracking, imagery;
- Build awareness – establish scene, secure scene, utility operations, establish command post, hazmat on scene, EMS on scene, assess weather;
- Response – UAS monitoring, traffic and crowd control, evacuation/shelter-in-place, building monitoring, threshold, hazmat planning, EOC activation;
- Mitigate event – configure plume models, sweep area/buildings, health hazard detection, environmental hazard detection, traffic control update, EOC operational; and
- Recover – coordinate EOC, conduct triage, mitigate cylinders, recalculate plume model, EMS/search and rescue (SAR) sweep, social media monitoring, transition to recovery, normalize traffic, spin down EOC, stand down sensors, close incident.
4 Technology Themes
Innovations that were advanced and deployed during the pilot activity can be organized into a set of technical capability themes. Although the technologies themselves are not necessarily of direct interest to first responders, the capabilities they represent are responsible for the user features that do have value to IMIS. Each of these themes relates in a significant way to the overall goal of getting the right information to the right person at the right time so they can be aware of the situation at hand and take the right action.
4.1 Shared Awareness – Incidents and Contexts
A number of technologies deployed in the pilot addressed the issue of finding sensor and information resources through dynamic catalog registration and search. Participants also recognized that traditional catalog search methods do not work for fast-paced responders. Virtual incident folders and context documents allow resources to be organized by an analyst or commander for each incident and responder role, then shared back and forth with multiple responders, who receive the benefits of discovery without writing queries on their tablets or smartphones. Exchange of these link documents with connected responders constitutes the critical shared awareness of incident information that, until now, has mainly been communicated piecemeal by voice.
4.2 Driven by Events – Publish-Subscribe-Notify
Even when observations are organized by event folders and context documents, the ad hoc availability of incident sensor data quickly transforms too little awareness into too much information to track. It then becomes necessary for IoT information systems to provide filtering, analysis and delivery of observations so that responders are notified of events when and if those events are critical to their work, without having to wade through all the less significant data in between. This event-driven awareness has three important phases for the responder:
- Publication of criteria and availability of observation events that someone needs to know about;
- Subscription to those events by the users who have the interest or need to know; and
- Notifications pushed to subscribers without delay when a published event occurs.
The appropriate scope of event recognition and degree of user involvement can vary tremendously, from an ad hoc user filter in a mobile app to configuration of a Message Queue Telemetry Transport (MQTT) service topic, to mandatory delivery of preconfigured health alerts from a complex event processing workflow. The common denominator is an informed awareness where users know what events will be generated, when events will be delivered to them and how they will be notified.
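As one concrete flavor of this pattern, the sketch below subscribes to an MQTT topic and prints notifications as the broker pushes them. The broker host and topic name are hypothetical, and the example uses the open source paho-mqtt client.

```python
# Minimal sketch: MQTT-based notification delivery with the paho-mqtt client.
# Broker host and topic are hypothetical. This uses the paho-mqtt 1.x
# constructor; 2.x additionally requires a CallbackAPIVersion argument.
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"        # hypothetical incident message broker
TOPIC = "incident/42/alerts/health"  # hypothetical preconfigured alert topic

def on_connect(client, userdata, flags, rc):
    # (Re)subscribe whenever the connection is established.
    client.subscribe(TOPIC)

def on_message(client, userdata, message):
    # Each published event is pushed to subscribers without polling.
    print(f"[{message.topic}] {message.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()  # block and deliver notifications as they arrive
```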
4.3 Tracking Resources – Resources and Responders
Given the present ubiquity of GPS receiver chips, it would seem trivial to count location as an observation. As with many other sensors, however, turning location observations into useful incident information for responders has its pitfalls, from the exact identity of the thing being located, to the quality of GPS fixes, to whether a location has gone “stale” from inadequate measurement frequency. The identity of the thing being located is critical, since this is the connection to other sensors, from videocams to health sensors, that observe the same thing or observe from it. To provide open system federation of sensors, tracking information may also need to be “liberated” from proprietary Automated Vehicle Location (AVL) systems that hold tightly both the location data and the identity of the vehicle, person or resource being tracked.
4.4 Sightful Sensing – Imagery Sources and Targets
A wide variety of imaging sensors are practical (or soon will be) for incident response use. The pilot sought to georeference, time-index and otherwise transform imagery from vehicle, body, fixed and airborne cameras into searchable observation data that an incident manager or responder could effectively use as “eyes on the scene” to record a particular place at a particular time. This is a departure from the traditional linear paradigm of viewing and perhaps reviewing video footage one stream at a time. The opportunity to search for all available/captured views of a specific feature at a specific time from any imaging device is often imagined in TV procedurals, but rarely automated in public safety practice. Sensor Web and IoT capabilities advanced in the pilot have the potential to make this a reality.
4.5 Environmental Sensing – Air Quality
An essential component of assuring public health and safety is an awareness of environmental conditions and impacts around an incident scene, both for responders and for the public. The pilot scenario focused on air quality and aerosol hazards from a chlorine cylinder spill. Adequate sampling, resolution, reproducibility and reliability are all technical challenges even for fixed air quality sensors. An IoT approach to sensor integration can provide a weight of evidence of air quality conditions that helps responders have confidence in the decisions they make to protect the public and themselves. The pilot also integrated the results of air dispersion modeling into the incident management awareness as “future” observation results able to be tasked, recorded, queried and sent in notifications in the same way as current or historical sensor observations.
4.6 Health Sensing – Wearables and Physiology
Sensing human physiological parameters took on two challenges in the pilot. The first was to provide objective assessments of responder health and safety, alerting fire chiefs, for example, if a firefighter’s heart rate or body temperature became dangerously high even when their radio communications insisted, “I’m just fine, don’t pull me out.” Essential to this benefit is an easy-to-use, low-maintenance system for capturing such measurements for each responder at risk and generating reliable alerts of critical measurements.
The other challenge the pilot addressed was triage of incident victims (possibly including responders themselves). Health sensors have the potential to monitor automatically for vital signs and condition changes in a large number of patients while triage examinations and dispositions are carried out, usually by inadequate numbers of EMS personnel. Again, ease of use, non-intrusiveness and reliability where lives may be at imminent risk are all success factors for this capability. Quickly alerting EMS as to which patient’s pulse is weakening and skin temperature is decreasing, as well as where that patient is situated at the scene, is exactly the type of challenge that the IMIS IoT Pilot was in a position to address.
5 Technical Implementation
5.1 Station 1
Demonstration Station 1 covers the time period from zero to five minutes and is focused on preparation and notification. Preparation takes place long before an event occurs and is often referred to as “left of boom.” Notification takes place when information is first received that an event has occurred; this information can come from a variety of sources including 911 calls and social media observations. This is often referred to as “right of boom.”
5.1.1 Preparation of Sensor Capability
To provide IoT support to an incident, several components need to be prepared in advance. These components can be broken into three categories.
S-Hubs
S-Hubs need to be available and ready to deploy. The job of the S-Hub is to speak to the individual sensors using whatever proprietary protocols they support and to make the data available using open standards such as SOS and STA. When an S-Hub is activated, it registers itself with the catalog, making it discoverable.
Catalog
The catalog (HubCat), which in this case was implemented using the CSW 2.0.2 specification, needs to be active and its URL known by all of the organizations that may want to use it. The job of the catalog is to allow for the registration of S-Hubs as they become active, as well as the discovery of framework data services. Framework data is discussed in Section 5.1.2.
Clients
Clients come in many forms: desktop applications, mobile apps and EOC systems. These clients need to support discovery from the catalog as well as access via SOS or SensorThings. They need to be preconfigured with the location of the catalog and be ready for use.
Figure 5-1 illustrates how these three components come together:
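As a concrete complement to Figure 5-1, this sketch traces the same flow from the client side: the client is preconfigured with the catalog URL, discovers registered S-Hub services via CSW, and then retrieves observations from one of them. All URLs and the constraint value are hypothetical placeholders.

```python
# Minimal sketch: client-side flow tying the three components together.
# All endpoints and filter values are hypothetical.
import requests

CATALOG_URL = "https://example.org/hubcat/csw"  # preconfigured in the client

# 1. Discover registered S-Hubs (CSW 2.0.2 GetRecords, as in Section 2.4).
records = requests.get(CATALOG_URL, params={
    "service": "CSW", "version": "2.0.2", "request": "GetRecords",
    "typeNames": "csw:Record", "resultType": "results",
    "constraintLanguage": "CQL_TEXT", "constraint_language_version": "1.1.0",
    "constraint": "csw:AnyText LIKE '%SensorThings%'",  # hypothetical filter
})

# 2. Access a discovered S-Hub (here assumed to expose SensorThings).
shub_url = "https://example.org/sta/v1.0"  # would be parsed from the records XML
observations = requests.get(
    f"{shub_url}/Observations", params={"$top": 3}
).json()["value"]
print(observations)
```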