OGC Testbed-17: OGC API - Moving Features Engineering Report
Dean Younge, Editor
OGC Engineering Report

Published

Document number: 21-028
Document type: OGC Engineering Report
Document subtype:
Document stage: Published
Document language: English

License Agreement

Permission is hereby granted by the Open Geospatial Consortium, (“Licensor”), free of charge and subject to the terms set forth below, to any person obtaining a copy of this Intellectual Property and any associated documentation, to deal in the Intellectual Property without restriction (except as set forth below), including without limitation the rights to implement, use, copy, modify, merge, publish, distribute, and/or sublicense copies of the Intellectual Property, and to permit persons to whom the Intellectual Property is furnished to do so, provided that all copyright notices on the intellectual property are retained intact and that each person to whom the Intellectual Property is furnished agrees to the terms of this Agreement.

If you modify the Intellectual Property, all copies of the modified Intellectual Property must include, in addition to the above copyright notice, a notice that the Intellectual Property includes modifications that have not been approved or adopted by LICENSOR.

THIS LICENSE IS A COPYRIGHT LICENSE ONLY, AND DOES NOT CONVEY ANY RIGHTS UNDER ANY PATENTS THAT MAY BE IN FORCE ANYWHERE IN THE WORLD. THE INTELLECTUAL PROPERTY IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE DO NOT WARRANT THAT THE FUNCTIONS CONTAINED IN THE INTELLECTUAL PROPERTY WILL MEET YOUR REQUIREMENTS OR THAT THE OPERATION OF THE INTELLECTUAL PROPERTY WILL BE UNINTERRUPTED OR ERROR FREE. ANY USE OF THE INTELLECTUAL PROPERTY SHALL BE MADE ENTIRELY AT THE USER’S OWN RISK. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR ANY CONTRIBUTOR OF INTELLECTUAL PROPERTY RIGHTS TO THE INTELLECTUAL PROPERTY BE LIABLE FOR ANY CLAIM, OR ANY DIRECT, SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM ANY ALLEGED INFRINGEMENT OR ANY LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR UNDER ANY OTHER LEGAL THEORY, ARISING OUT OF OR IN CONNECTION WITH THE IMPLEMENTATION, USE, COMMERCIALIZATION OR PERFORMANCE OF THIS INTELLECTUAL PROPERTY.

This license is effective until terminated. You may terminate it at any time by destroying the Intellectual Property together with all copies in any form. The license will also terminate if you fail to comply with any term or condition of this Agreement. Except as provided in the following sentence, no such termination of this license shall require the termination of any third party end-user sublicense to the Intellectual Property which is in force as of the date of notice of such termination. In addition, should the Intellectual Property, or the operation of the Intellectual Property, infringe, or in LICENSOR’s sole opinion be likely to infringe, any patent, copyright, trademark or other right of a third party, you agree that LICENSOR, in its sole discretion, may terminate this license without any compensation or liability to you, your licensees or any other party. You agree upon termination of any kind to destroy or cause to be destroyed the Intellectual Property together with all copies in any form, whether held by you or by any third party.

Except as contained in this notice, the name of LICENSOR or of any other holder of a copyright in all or part of the Intellectual Property shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Intellectual Property without prior written authorization of LICENSOR or such copyright holder. LICENSOR is and shall at all times be the sole entity that may authorize you or any third party to use certification marks, trademarks or other special designations to indicate compliance with any LICENSOR standards or specifications. This Agreement is governed by the laws of the Commonwealth of Massachusetts. The application to this Agreement of the United Nations Convention on Contracts for the International Sale of Goods is hereby expressly excluded. In the event any provision of this Agreement shall be deemed unenforceable, void or invalid, such provision shall be modified so as to make it valid and enforceable, and as so modified the entire Agreement shall remain in full force and effect. No decision, action or inaction by LICENSOR shall be construed to be a waiver of any rights or remedies available to it.

None of the Intellectual Property or underlying information or technology may be downloaded or otherwise exported or reexported in violation of U.S. export laws and regulations. In addition, you are responsible for complying with any local laws in your jurisdiction which may impact your right to import, export or use the Intellectual Property, and you represent that you have complied with any regulations or registration procedures required by applicable law to make this license enforceable.



I.  Abstract

The OGC Testbed-17 Moving Features thread conducted an interoperability feasibility study that examined specific scenarios that could be supported by a Moving Features Application Programming Interface (API). The use cases considered tracking objects based on motion imagery, analytical processing, and visualization. This Engineering Report presents a specification of a prototype Moving Features API that could serve as the foundation for a future draft OGC API — Moving Features Standard.

II.  Executive Summary

Digital representation of moving objects is gaining interest across industry, from both the observation and the modeling perspectives. An increasing number of scenarios require extending three-dimensional (3D) geospatial environments with time and linking data that represent the same or related objects. The nature of such data may call for a specific approach to information representation. Abstract models from the International Organization for Standardization (ISO) and data encodings defined in OGC Standards already address these scenarios. The understanding of the moving features concept continues to evolve as more sophisticated scenarios bring new challenges and reveal the need to aggregate, link, and represent specific information.

OGC is continuing its efforts to simplify the use of geospatial standards by proposing a suite of modern OGC API Standards. The idea is to build on the proven approach that enabled hundreds of applications to easily use geospatial data through legacy OGC Standards. The new Web APIs are modern in terms of best practices, protocols, encodings, and the definition of tailored solutions.

Previously, Testbed-16 explored technologies to transform detections of moving objects reported using motion imagery standards (e.g., MISB Std. 0903) into the model and encoding defined in the OGC Moving Features Standard (OGC 18-075). That work suggested the following notional workflow:

  1. Extract moving object detections from the motion imagery stream.

  2. Encode the detections as moving features.

  3. Correlate the detections of moving features into tracks of moving features.

  4. Perform analytics to enrich and exploit the tracks of moving features.

The Testbed-16 work is documented in the Testbed-16 Full Motion Video to Moving Features Engineering Report (OGC 20-036). That work motivated the OGC Moving Features Standard Working Group (SWG) to update their Charter to include work on an OGC API Standard dedicated to objects on the move. The Testbed-17 participants worked closely with the MF SWG to ensure that all efforts were coordinated accordingly.

To support these efforts, the OGC Testbed-17 Moving Features thread conducted an interoperability feasibility study that examined specific scenarios that could be supported by a hypothetical Moving Features API. The use cases considered tracking objects based on motion imagery (video recordings in practice), analytical processing, and visualization. Real-life visual and hybrid visual-lidar detection of objects presents challenges related to object identification, tracking, and inferencing.

Purpose of this document

This document (deliverable D021) is one of two Testbed-17 Engineering Reports that focus on the specification of a hypothetical OGC Moving Features API that could be used to implement an agreed scenario. The document presents the considerations and background behind the development of the API, as well as the proposed normative clauses containing requirements that could be included in a future OGC API — Moving Features Standard.

For a detailed description of the experiment itself and the technical challenges related to object tracking and the use cases, see the OGC Testbed-17 D020 Moving Features Engineering Report (OGC 21-036).

Architecture

The experiment that grounded this ER and the Testbed-17 Moving Features thread consisted of the components depicted in the figure below.

Figure 1 — Moving Features task work items and deliverables. (Source: OGC Testbed-17 CFP)

III.  Keywords

The following are keywords to be used by search engines and document catalogues.

ogcdoc, OGC document, Moving Features, API


IV.  Preface

Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. The Open Geospatial Consortium shall not be held responsible for identifying any or all such patent rights.

Recipients of this document are requested to submit, with their comments, notification of any relevant patent claims or other intellectual property rights of which they may be aware that might be infringed by any implementation of the standard set forth in this document, and to provide supporting documentation.

V.  Security considerations

No security considerations have been made for this document.

VI.  Submitting Organizations

The following organizations submitted this Document to the Open Geospatial Consortium (OGC):

VII.  Submitters

All questions regarding this document should be directed to the editor or the contributors:

Name Organization Role
Dean Younge Compusult Limited Editor
Martin Desruisseaux Geomatys Contributor
Piotr Zaborowski Open Geospatial Consortium Contributor
Guilhem Legal Geomatys Contributor
Sepehr Honarparvar University of Calgary Contributor
Steve Liang University of Calgary Contributor
Brad Miller Compusult Limited Contributor
Jason MacDonald Compusult Limited Contributor
Sizhe Wang Arizona State University (ASU) Contributor
Zhining Gu Arizona State University (ASU) Contributor
Rob Smith Away Team Software Contributor
Angie Carrillo RHEA Group Contributor
Guy Schumann RSS-Hydro Contributor
Chuck Heazel Heazel Technologies Contributor


1.  Scope

This OGC Engineering Report (ER) is deliverable D021 of the OGC Testbed-17 (TB-17) initiative performed as an activity of the OGC Innovation Program.

The goal of this document is to combine the work of the Moving Features (MF) Standards Working Group (SWG) and the results obtained from the TB-17 demonstration to define a draft OGC API specification. This work is the initial step in defining an API for the discovery, access, and exchange of moving features and their corresponding tracks, such that implementations of the API can be applied in near real-time scenarios.

2.  Normative references

The following documents are referred to in the text in such a way that some or all of their content constitutes requirements of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.

OGC 19-045r3, OGC Moving Features Encoding Extension — JSON 1.0. Open Geospatial Consortium.

Hideki Hayashi, Akinori Asahara, Kyoung-Sook Kim, Ryosuke Shibasaki, Nobuhiro Ishimaru: OGC 16-120r3, OGC Moving Features Access. Open Geospatial Consortium (2017). http://docs.opengeospatial.org/is/16-120r3/16-120r3.html

Clemens Portele, Panagiotis (Peter) A. Vretanos, Charles Heazel: OGC 17-069r3, OGC API — Features — Part 1: Core. Open Geospatial Consortium (2019). http://docs.opengeospatial.org/is/17-069r3/17-069r3.html

Clemens Portele, Panagiotis (Peter) A. Vretanos: OGC 18-058, OGC API — Features — Part 2: Coordinate Reference Systems by Reference. Open Geospatial Consortium (2020). https://docs.ogc.org/is/18-058/18-058.html

ISO: ISO 19141:2008, Geographic information — Schema for moving features. International Organization for Standardization, Geneva (2008). https://www.iso.org/standard/41445.html

3.  Terms, definitions and abbreviated terms

This document uses the terms defined in OGC Policy Directive 49, which is based on the ISO/IEC Directives, Part 2, Rules for the structure and drafting of International Standards. In particular, the word “shall” (not “must”) is the verb form used to indicate a requirement to be strictly followed to conform to this document, and OGC documents do not use the equivalent phrases given in the ISO/IEC Directives, Part 2.

This document also uses terms defined in the OGC Standard for Modular specifications (OGC 08-131r3), also known as the ‘ModSpec’. The definitions of terms such as standard, specification, requirement, and conformance test are provided in the ModSpec.

For the purposes of this document, the following additional terms and definitions apply.

3.1.  Terms and definitions

3.1.1. Application Programming Interface (API)

a standard set of documented and supported functions and procedures that expose the capabilities or data of an operating system, application or service to other applications (adapted from ISO/IEC TR 13066-2:2016)

3.1.2. duration

the difference between the ending time and the starting time of the trajectory (including the dwell time)

3.1.3. dwell time

the time during which a moving object is stationary

3.1.4. dynamic attribute

characteristic of a feature whose value varies with time [OGC 16-140]

3.1.5. feature

abstraction of real world phenomena [ISO 19109]
Note: A feature can occur as a type or an instance. Feature type or feature instance should be used when only one is meant.

3.1.6. feature attribute

characteristic of a feature [ISO 19109]

3.1.7. feature table

table where the columns represent feature attributes, and the rows represent features [OGC 06-104]

3.1.8. geographic feature

representation of a real world phenomenon associated with a location relative to the Earth [ISO 19101-2]

3.1.9. geometric object

spatial object representing a geometric set [ISO 19107:2003]

3.1.10. moving feature

A representation, using a local origin and local ordinate vectors, of a geometric object at a given reference time (ISO 19141:2008). In the context of this ER, the geometric object is a feature, which is an abstraction of real world phenomena (ISO 19109:2015).

3.1.11. track

the entire trajectory of a specific moving object

3.1.12. tracklet

a fragment of the track followed by a moving object

3.1.13. property

facet or attribute of an object referenced by a name [ISO 19143]

3.1.14. trajectory

path of a moving point described by a one-parameter set of points (ISO 19141:2008)

3.1.15. trajectory time

total time a moving object takes to complete a specific trajectory
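
Note: As an illustrative example tying these terms together, a bus that departs at 08:00, remains stationary for five minutes at a stop, and arrives at 08:30 has a trajectory time (and duration) of 30 minutes, of which five minutes are dwell time.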

3.2.  Abbreviated terms

API

Application Programming Interface

CSV

Comma Separated Values

ISO

International Organization for Standardization

OGC

Open Geospatial Consortium

UML

Unified Modeling Language

XML

Extensible Markup Language

1D

One Dimensional

2D

Two Dimensional

3D

Three Dimensional

4.  Conventions

This section provides details and examples for any conventions used in the document.

4.1.  Identifiers

The normative provisions in this specification are denoted by the URI

http://www.opengis.net/spec/ogcapi-mf-1/1.0

All requirements, permissions, recommendations and conformance tests that appear in this document are denoted by partial URIs which are relative to this base.
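
For example, a requirement denoted by a partial URI of the form /req/core would be identified in full as http://www.opengis.net/spec/ogcapi-mf-1/1.0/req/core (the requirements-class name here is illustrative).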

5.  Introduction

The Testbed-17 Call for Participation (CFP) identified the following high-level requirements for defining the Moving Features API:

  1. Detection ingest: This component ingests data from a moving object detection system, extracts detections and partial tracks (tracklets), and exports the detections and tracklets as OGC Moving Features. These detections and tracklets may also be exported as SensorThings “Things”.

  2. Tracker: This component ingests detections and tracklets as OGC Moving Features, then correlates them into longer tracks. Those tracks are then exported as OGC Moving Features.

  3. Data Store: Provides persistent storage of the Moving Feature tracks.

  4. Machine Analytics: Software which enriches the existing tracks and/or generates derived information from the tracks.

  5. Human Analytics: Software and tools to help users exploit the Motion Imagery tracks and corresponding detections or correlated tracks. For example, a common operational picture showing both static and dynamic features.

    NOTE  Several of these interfaces were defined as SensorThings APIs as identified in Figure 3 of the Architecture.

Figure 2 — Moving Features task work items and deliverables
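
As a rough illustration of the SensorThings interfaces mentioned in the note above, the following minimal sketch (in Python, using the requests library) registers a detected object as a SensorThings “Thing” and posts one detection as an Observation. The service URL, entity names, and Datastream identifier are hypothetical assumptions, not testbed deliverables.

```python
import requests

STA = "https://example.org/sta/v1.1"  # hypothetical SensorThings endpoint

# Register the detected moving object as a SensorThings "Thing".
thing = {"name": "vehicle-042", "description": "Tracked vehicle from drone video"}
requests.post(f"{STA}/Things", json=thing).raise_for_status()

# Report one detection as an Observation on a pre-existing Datastream
# (Datastream @iot.id = 1 is assumed here for illustration).
observation = {
    "phenomenonTime": "2021-06-15T14:03:05Z",
    "result": {"lat": 51.05, "lon": -114.07},   # detected position
    "Datastream": {"@iot.id": 1},
}
requests.post(f"{STA}/Observations", json=observation).raise_for_status()
```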

Section 9 provides the background of the testbed and cites existing relevant standards and previous work.

A later section describes a draft specification for the Moving Features API.

6.  Scenarios and Architecture

6.1.  Scenario (Use Cases)

For Testbed-17, three scenarios were considered:

  • Bus Tracking

  • Autonomous vehicle tracking

  • Hazardous Materials (HAZMAT) vehicle tracking

This ER describes the detection and tracking of moving buses in front of a school. The availability of data to support this scenario was the main factor in the selection of bus tracking as the primary scenario. The video was acquired by a lightweight Unmanned Aerial Vehicle (UAV), or drone, courtesy of the University of Calgary.

A separate autonomous vehicle use case was analyzed to detect and track people and vehicles moving nearby using WebVMT, a lightweight open format designed to synchronize video with an animated, annotated map on the web. Video and lidar data were captured from a moving StreetDrone vehicle and provided courtesy of Ordnance Survey.

6.1.1.  Tracking of Buses

In the framework of the Testbed-17 MF thread, a use case was selected in which the moving features are school buses at a school parking lot while students are getting on or off the buses.

The project’s raw data consisted of two videos of the same scenario, one recorded with a drone and the other recorded with a GoPro action camera. The input data for the Machine Analytics Client was the output data from the Tracking Service, which was stored and provided by the storage service.

The steps in this scenario included the following (steps 3 and 4 are sketched in code after the list):

  1. Detect objects in each frame and track them across consecutive frames.

  2. Transform object locations from video space to geographic space.

  3. Link object detections in consecutive frames as a single moving object. Smaller, incomplete series of detections are classified as tracklets, while a complete series of detections is classified as a track.

  4. Send the collected moving object data to a storage service.

  5. Analyze data retrieved from the storage service to generate information such as moving feature distribution, average trajectory time, dwell time, detection of moving feature clusters, and detection of trajectory clusters.
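
To make steps 3 and 4 concrete, the following minimal sketch (in Python) assembles linked detections into a moving feature encoded in the style of OGC Moving Features JSON (OGC 19-045r3) and sends it to a storage service. The endpoint URL, collection name, and feature properties are illustrative assumptions rather than the testbed implementation, whose JSON format did not yet conform to the MF-JSON encoding.

```python
import requests

# Detections for one object, linked across frames (step 3): each entry is
# (ISO 8601 timestamp, longitude, latitude) after the video-to-geographic
# transformation of step 2. Values here are invented for illustration.
track = [
    ("2021-06-15T14:03:05Z", -114.0708, 51.0486),
    ("2021-06-15T14:03:06Z", -114.0707, 51.0487),
    ("2021-06-15T14:03:07Z", -114.0706, 51.0488),
]

# Encode as a moving feature in the style of MF-JSON: a MovingPoint
# temporal geometry with parallel datetime and coordinate arrays.
feature = {
    "type": "Feature",
    "properties": {"objectClass": "bus"},  # illustrative property
    "temporalGeometry": {
        "type": "MovingPoint",
        "datetimes": [t for t, _, _ in track],
        "coordinates": [[lon, lat] for _, lon, lat in track],
        "interpolation": "Linear",
    },
}

# Step 4: send the moving feature to a (hypothetical) storage endpoint.
requests.post("https://example.org/storage/collections/buses/items",
              json=feature).raise_for_status()
```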

6.1.2.  Autonomous vehicle use case

Following on from work in the Testbed-16 initiative, Web Video Map Tracks (WebVMT) has been used to previsualize and analyze multi-sensor data from an autonomous vehicle in the Testbed-17 initiative.

Ordnance Survey provided video and lidar data from a StreetDrone vehicle navigating the roads around their headquarters in Southampton, UK. The footage was shot from a front-facing camera on the vehicle and shows a number of moving objects including a cyclist, pedestrians and other road vehicles as well as static objects such as hedges, road signs and lamp posts.

  1. Tracking Vehicles: If we could detect and track the oncoming vehicles in the StreetDrone videos, this would produce a set of short tracklets which correspond to different moving (or stationary) features. Being able to track the stationary cars in the car park would also be useful to classify them correctly as parked vehicles, which is easy for humans but trickier for a computer as it’s the camera that’s moving rather than the vehicle itself. Tracking moving features from a moving camera is a particularly useful capability for autonomous vehicles and dashcams.

  2. Tracking Dangers: If we could detect and track the cyclist in the second StreetDrone video, this would produce longer tracklets as the bike stays in shot for a while, though they pass behind a couple of lamp posts around 0:50. There’s also a pedestrian at about 1:25. Being able to track cyclists and pedestrians would also be a useful capability for autonomous vehicles and dashcams.

  3. Associating Views: We could extend these ideas to multiple cameras by using the BikeBro video which includes two views and avoids synchronization issues by baking both into a single video frame. If we could detect and track moving vehicles from the front and rear views, the resulting tracklets could be (trivially) paired together to form longer tracks for the passing vehicles. This capability enables autonomous vehicles to track other vehicles moving around them on roads with multiple lanes as the same vehicle could appear in different camera views at different times and differ in shape due to the different view angles.

All of these capabilities are also useful for the traffic camera use case, which would enable roadside camera footage to be aggregated with dashcam video to track vehicle movements.

6.1.3.  Windsor Scenario

6.1.3.1.  Objective

Illustrate the ability to track and map the location of the HAZMAT vehicle as reported in real time.

6.1.3.2.  Overview

The Critical Infrastructure Protection Initiative (CIPI-1) demonstration scenario involved a leaking HAZMAT truck heading from Windsor to Detroit. The challenge was to track the truck, project where it was going, plan interception, and assess the impact of the released hazardous material at all points along the route.

6.2.  Architecture

The following figure provides the engineering viewpoint utilized during the initial T17 Moving Features discussions.

Figure 3 — Draft Moving Features Engineering Viewpoint

The sequence diagram in Figure 4 provides the component interactions from the initial data ingestion to retrieval by a moving feature client.

Figure 4 — Draft Moving Features Sequence diagram

6.2.1.  Storage service

The storage service receives and returns moving features as JSON objects. The current format does not conform to the OGC Moving Features JSON encoding (OGC 16-140r1), but this shortcoming is due to a lack of implementation time rather than a problem with the JSON encoding.

JSON objects are not stored “as-is.” Instead, the objects are deconstructed: each feature type is represented by a database table, and each property, such as start_time or duration, is extracted and stored in a column of the feature table. This data organization is described in OGC 06-104 (Simple Features SQL). An inconvenience is that all features and properties must be known in advance, since they are hard-coded both in the Java code reading the JSON objects and in the database schema (a dynamic approach is possible but was not implemented in this testbed). The advantages of this approach are that:

  • A database index can be applied on those property values,

  • The full power of SQL expressions is available for data manipulation,

  • Data can more easily be exported to other formats such as netCDF (OGC 16-114r3).

For this testbed, a PostgreSQL database was used, with the bus coordinates stored in a geometry column using the PostGIS extension. It would have been possible to go one step further by using the MobilityDB extension, which allows storing “moving geometries” instead of static geometries in a PostgreSQL database; however, that was deferred to future work.
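
As a rough sketch of the deconstruction described above (table and column names are illustrative, not the testbed schema), a received JSON feature can be flattened into a row of a PostGIS-backed feature table:

```python
import json
import psycopg2

conn = psycopg2.connect("dbname=movingfeatures")  # hypothetical database

# One table per feature type; each property becomes a column.
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS bus_tracks (
            id         TEXT PRIMARY KEY,
            start_time TIMESTAMPTZ,
            duration   INTERVAL,
            geom       GEOMETRY(LineString, 4326)  -- PostGIS geometry column
        )""")

def store_feature(feature: dict) -> None:
    """Deconstruct one JSON feature into a row of the feature table."""
    props = feature["properties"]
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO bus_tracks (id, start_time, duration, geom) "
            "VALUES (%s, %s, %s, ST_GeomFromGeoJSON(%s))",
            (feature["id"], props["start_time"], props["duration"],
             json.dumps(feature["geometry"])))
```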

Queries can be made either through an implementation of OGC API — Features (OGC 17-069r3) or by using a Common Query Language (CQL) expression. CQL supports more advanced filtering conditions such as “features at a distance closer than X from geometry Y”. The storage service can parse CQL and translate it to a mix of SQL and Java code. The SQL statement uses functions such as ST_Within or ST_Intersects, for which a spatial database will use its index for efficient execution. If some parts of the query cannot be translated to SQL, the storage service executes them in Java code on the features remaining after the initial filtering done by the database.

The set of operations defined by the OGC/ISO Filter Encoding standard (ISO 19143) and the SQL/MM standard assumes static geometries. For example, the above-cited filter “features at a distance closer than X from geometry Y” does not take time into account: if the geometries are trajectories, the operation checks the shortest distance between the trajectories regardless of whether the moving features were at those locations at the same time. The ISO 19141 (Moving Features) standard defines new operators, such as nearestApproach, which could be used in CQL expressions. There is no OGC standard defining a filter encoding for moving features operations, but this testbed explored what such operations might look like (without reaching the point of experimenting with them in the storage implementation).
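
To illustrate the difference, the following sketch issues a conventional spatial CQL filter to the storage service and then shows, purely as speculation, what a time-aware moving-features operator might look like in a filter expression. The endpoint, collection and property names, and the speculative operator syntax are assumptions; the exact CQL function names also vary between drafts of the language.

```python
import requests

items = "https://example.org/storage/collections/buses/items"  # hypothetical

# A conventional spatial filter expressed as CQL text. A server can
# translate this to SQL such as
#   ST_DWithin(geom, ST_GeomFromText('POINT(-114.0707 51.0486)'), 50)
# so that the spatial index performs the initial filtering.
static_filter = "DWITHIN(geom, POINT(-114.0707 51.0486), 50, meters)"
response = requests.get(items, params={"filter": static_filter,
                                       "filter-lang": "cql-text"})

# Speculative: a time-aware operator in the style of ISO 19141's
# nearestApproach. No OGC standard defines such a filter encoding yet;
# this string only sketches what the expression might look like.
moving_filter = "nearestApproach(geom, POINT(-114.0707 51.0486)) < 50"
```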

7.  Key findings

7.1.  Implications to OGC API Standardization

OGC API Standards enable access to resources using the HTTP protocol and its associated operations (GET, PUT, POST, etc.). OGC API — Common defines a set of capabilities that are applicable to all OGC APIs. Other OGC API Standards extend OGC API — Common with functionality specific to a resource type.
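
As a minimal illustration of this resource-oriented pattern (the paths follow OGC API — Common and OGC API — Features conventions; the host and collection name are hypothetical):

```python
import requests

base = "https://example.org/ogcapi"  # hypothetical server

# Landing page, conformance declaration, and collections, per OGC API - Common.
landing = requests.get(base, headers={"Accept": "application/json"}).json()
conforms = requests.get(f"{base}/conformance").json()["conformsTo"]
collections = requests.get(f"{base}/collections").json()

# Features-style access to a moving-feature collection, per OGC API -
# Features: retrieval by bounding box and time interval.
items = requests.get(
    f"{base}/collections/buses/items",
    params={"bbox": "-114.08,51.04,-114.06,51.05",
            "datetime": "2021-06-15T14:00:00Z/2021-06-15T15:00:00Z"},
).json()
```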

8.  Future Work

In order to perform more sophisticated queries beyond those by ID or bounding box, the testbed participants recommend implementing OGC API — Features — Part 3: Filtering and the Common Query Language (CQL).

It is also recommended to investigate using the MobilityDB extension of PostgreSQL, which allows storing “moving geometries” instead of static geometries in a PostgreSQL database. An interesting aspect of MobilityDB is its definition of new operators derived from ISO 19141. MobilityDB is ahead of the current state of standardization, since the use of Moving Feature operators in SQL expressions is not yet backed by an OGC/ISO standard. This is in contrast with operators such as “distance”, which are first defined in an abstract specification (ISO 19107: spatial schema) and then “implemented” by other OGC/ISO standards in filter expressions (ISO 19143), SQL statements (SQL/MM), CQL expressions, etc.
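
As a sketch of what this could look like (assuming MobilityDB’s tgeompoint type and its nearestApproachDistance function; the table and column names are illustrative):

```python
import psycopg2

conn = psycopg2.connect("dbname=movingfeatures")  # hypothetical database
with conn, conn.cursor() as cur:
    # "trip" is assumed to be a MobilityDB tgeompoint column: an entire
    # trajectory (positions paired with timestamps) stored as one value,
    # e.g. tgeompoint '[Point(-114.07 51.04)@2021-06-15 14:03:05+00, ...]'
    cur.execute("""
        SELECT nearestApproachDistance(a.trip, b.trip)
        FROM bus_tracks a, bus_tracks b
        WHERE a.id = %s AND b.id = %s
        """, ("bus-1", "bus-2"))
    # Unlike a static distance between the two paths, this is the minimum
    # separation attained by the two objects at the same instant.
    print(cur.fetchone()[0])
```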

In the Moving Features case, the operators are only defined in the abstract specification. They have not yet been propagated to the Filter Encoding, CQL, or SQL/MM standards. This situation is described in more detail in the Background section below. This testbed made good progress toward experimenting with Moving Feature operators in filter expressions, but did not complete the work. Future work could continue this experiment and, if revisions of the relevant OGC Standards become available, test those revisions using MobilityDB, Java, or other implementations.

Additionally, detailed future work and recommendations are provided in the OGC Testbed-17 D020 Moving Features Engineering Report (OGC 21-036). In summary, these include:

  1. Ingestion service

    1. Update the framework to support stateless SensorThings.

    2. Update Ingestion service to temporarily store observations locally.

    3. Test and implement an ingestion service to handle both fixed and moving camera observations.

    4. Improve the transformation accuracy.

    5. Automate the registration of Things based on the stateless STA data model, to help ingest data from new sources.

  2. Machine analytics client

    1. Segment detected Moving Features.

    2. Conduct seasonality analysis.

  3. Autonomous vehicle use case

    1. Combining multi-sensor data to improve detection accuracy and cognitive guidance;

    2. Data aggregation to associate segmented trajectories from spatially-distributed sensors;

    3. Autonomous location reporting of obstructions to moving objects;

    4. Combination of multi-angle sensor data to track nearby vehicles and obstacles;

    5. Use of spatially-distributed data for pre-emptive responses.

9.  Background

Testbed-16 explored technologies to transform detections of moving objects reported using motion imagery standards (MISB Std. 0903) into OGC Moving Features (OGC 18-075).

In that testbed activity, the ability to extract Motion Imagery derived Video Moving Target Indicators (VMTI) from an MPEG-2 motion imagery stream and represent them as OGC Moving Features was demonstrated.

That work suggested a notional workflow:

  1. Extract moving object detections from the motion imagery stream.

  2. Encode the detections as moving features.

  3. Correlate the detections of moving features into tracks of moving features.

  4. Perform analytics to enrich and exploit the tracks of moving features.

This work is documented in the Testbed-16 Full Motion Video to Moving Features Engineering Report (OGC 20-036).

9.1.  Existing relevant standards

This Testbed-17 thread is grounded in the following OGC/ISO Standards. The geometry concept is presented first, followed by the feature concept. Note that in this document, a feature is not considered to be a geometry. Instead, a feature contains a geometry as one of its attributes, along with non-geometric attributes such as the bus color.

9.1.1.  Geometry (ISO 19107)

The ISO 19107 (Geographic information — Spatial schema) standard defines a GM_Object base type which is the root of all geometric objects. A GM_Object instance can be regarded as an infinite set of points in a particular coordinate reference system. The GM_Object base type has various subtypes such as GM_Point, GM_Curve, GM_Surface and GM_Solid. The UML in Figure 5 shows the GM_Object base type with its operations, for example distance(…) for computing the distance between two geometries. All of those operations assume static objects, without time-varying coordinates or attributes. For example, the distance(…) operation does not take into account whether the two objects were at the locations of shortest distance at the same time.


Figure 5 — GM_Object from ISO 19107:2003
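
The distinction matters for trajectories: two paths may intersect spatially while the objects are never at the crossing point at the same time. A small sketch (using the shapely library; the coordinates and timing are invented for illustration) shows how the static distance hides this:

```python
from shapely.geometry import LineString

# Two trajectories that cross in space.
a = LineString([(0, 0), (10, 10)])   # object A, traverses 00:00 -> 00:10
b = LineString([(0, 10), (10, 0)])   # object B, traverses 01:00 -> 01:10

# ISO 19107-style static distance: 0.0, because the paths intersect...
print(a.distance(b))                 # -> 0.0

# ...even though, with the timings above, the objects occupy the crossing
# point an hour apart. A time-aware operator such as ISO 19141's
# nearestApproach would compare positions at common instants and report
# a much larger separation.
```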

Geometry, topology, and temporal objects (GM_Object, TP_Object, TM_Object) are not features. These types can provide types for feature properties, but cannot be specialized to feature types. Feature types and properties are described in the next section.

9.1.2.  Features (ISO 19109)

The ISO 19109 (Geographic information — Rules for application schema) standard defines types for the definition of features. A feature is an abstraction of a real-world phenomenon. The terms “feature type” and “feature instance” are used to distinguish the following concepts of “feature”:

Feature type

The whole collection of real-world phenomena classified under a concept. For example, the “bridge” feature type is the abstraction of the collection of all real-world phenomena that are classified into the concept behind the term “bridge”.

Feature instance

A certain occurrence of a feature type. For example, the “Tower Bridge” feature instance is the abstraction of a certain real-world bridge in London.

NOTE 1  In object-oriented modeling, feature types are equivalent to classes and feature instances are equivalent to objects. The feature properties (presented below) are equivalent to fields.

Feature type instance

The UML shown below contains a subtlety that is explained here but ignored for simplicity in the rest of this document: FeatureType is defined as a metaclass, i.e., a class for describing other classes. A feature type instance (not to be confused with a feature instance) is a class that represents an individual feature type. For example, “bridge” and “park” are two instances of FeatureType. Then “Tower Bridge” and “Golden Gate Bridge” are two Feature instances of the “bridge” FeatureType instance, and “Jardin des Tuileries” is a Feature instance of the “park” FeatureType instance.

NOTE 2  The assertion that FeatureType is a metaclass means that in statically typed languages such as Java or C++, a FeatureType instance generally cannot be represented directly as a Java or C++ class (unless the class properties are known at compile time). Developers in those languages have to use more indirect mechanisms such as reflection or dictionaries (hash maps). Dynamic languages such as Python can use feature type instances more directly, but at the cost of type safety in the general case.
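
A minimal sketch of the indirection described in Note 2, using plain Python dictionaries (the type name and properties are invented for illustration):

```python
# A FeatureType instance described as data rather than as a compiled
# class: the "metaclass" level of the General Feature Model.
bridge_type = {
    "name": "bridge",
    "properties": {"name": str, "span_m": float},  # attribute name -> value type
}

# Feature instances of that type are dictionaries of property values.
tower_bridge = {"name": "Tower Bridge", "span_m": 244.0}

def validate(feature: dict, feature_type: dict) -> None:
    """Check a feature instance against its feature type at runtime,
    the kind of check a static type system cannot perform here."""
    for prop, expected in feature_type["properties"].items():
        assert isinstance(feature[prop], expected), prop

validate(tower_bridge, bridge_type)
```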

The UML class diagram in Figure 6 shows the ISO 19109 General Feature Model. A FeatureType contains the list of properties (attributes, associations, and operations) that Feature instances of that type can contain. In other words, a feature can be seen as a collection of property values. Geometries are feature properties (more precisely, attributes) like any other, without any special treatment. All properties shown below are assumed to be static, without time-varying values.


Figure 6 — General Feature Model from ISO 19109:2015

9.1.2.1.  Usage in Testbed 17

The UML class diagram above is the model implemented by the GeoAPI interfaces in the org.opengis.feature package (on a development branch, not yet released). The storage service in this testbed uses those interfaces as the “universal” internal representation of features. The service supports the transfer of features between JSON, netCDF, and database representations, together with operations on features. Of note, the UML shown in Figure 6 represents metaclasses rather than classes, providing information about features in much the same way that metadata describes data.

The following provides a simplified view of a feature in JSON format. The UML model on the right-hand side of Figure 7 describes the JSON on the left-hand side.