Publication Date: 2018-03-05

Approval Date: 2018-03-02

Posted Date: 2018-01-31

Reference number of this document: OGC 17-045

Reference URL for this document: http://www.opengis.net/doc/PER/t13-NG008

Category: Public Engineering Report

Editor: Stephane Fellah

Title: OGC Testbed-13: Portrayal Engineering Report


OGC Engineering Report

COPYRIGHT

Copyright © 2018 Open Geospatial Consortium. To obtain additional rights of use, visit http://www.opengeospatial.org/

WARNING

This document is not an OGC Standard. This document is an OGC Public Engineering Report created as a deliverable in an OGC Interoperability Initiative and is not an official position of the OGC membership. It is distributed for review and comment. It is subject to change without notice and may not be referred to as an OGC Standard. Further, any OGC Engineering Report should not be referenced as required or mandatory technology in procurements. However, the discussions in this document could very well lead to the definition of an OGC Standard.

LICENSE AGREEMENT

Permission is hereby granted by the Open Geospatial Consortium, ("Licensor"), free of charge and subject to the terms set forth below, to any person obtaining a copy of this Intellectual Property and any associated documentation, to deal in the Intellectual Property without restriction (except as set forth below), including without limitation the rights to implement, use, copy, modify, merge, publish, distribute, and/or sublicense copies of the Intellectual Property, and to permit persons to whom the Intellectual Property is furnished to do so, provided that all copyright notices on the intellectual property are retained intact and that each person to whom the Intellectual Property is furnished agrees to the terms of this Agreement.

If you modify the Intellectual Property, all copies of the modified Intellectual Property must include, in addition to the above copyright notice, a notice that the Intellectual Property includes modifications that have not been approved or adopted by LICENSOR.

THIS LICENSE IS A COPYRIGHT LICENSE ONLY, AND DOES NOT CONVEY ANY RIGHTS UNDER ANY PATENTS THAT MAY BE IN FORCE ANYWHERE IN THE WORLD. THE INTELLECTUAL PROPERTY IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE DO NOT WARRANT THAT THE FUNCTIONS CONTAINED IN THE INTELLECTUAL PROPERTY WILL MEET YOUR REQUIREMENTS OR THAT THE OPERATION OF THE INTELLECTUAL PROPERTY WILL BE UNINTERRUPTED OR ERROR FREE. ANY USE OF THE INTELLECTUAL PROPERTY SHALL BE MADE ENTIRELY AT THE USER’S OWN RISK. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR ANY CONTRIBUTOR OF INTELLECTUAL PROPERTY RIGHTS TO THE INTELLECTUAL PROPERTY BE LIABLE FOR ANY CLAIM, OR ANY DIRECT, SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM ANY ALLEGED INFRINGEMENT OR ANY LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR UNDER ANY OTHER LEGAL THEORY, ARISING OUT OF OR IN CONNECTION WITH THE IMPLEMENTATION, USE, COMMERCIALIZATION OR PERFORMANCE OF THIS INTELLECTUAL PROPERTY.

This license is effective until terminated. You may terminate it at any time by destroying the Intellectual Property together with all copies in any form. The license will also terminate if you fail to comply with any term or condition of this Agreement. Except as provided in the following sentence, no such termination of this license shall require the termination of any third party end-user sublicense to the Intellectual Property which is in force as of the date of notice of such termination. In addition, should the Intellectual Property, or the operation of the Intellectual Property, infringe, or in LICENSOR’s sole opinion be likely to infringe, any patent, copyright, trademark or other right of a third party, you agree that LICENSOR, in its sole discretion, may terminate this license without any compensation or liability to you, your licensees or any other party. You agree upon termination of any kind to destroy or cause to be destroyed the Intellectual Property together with all copies in any form, whether held by you or by any third party.

Except as contained in this notice, the name of LICENSOR or of any other holder of a copyright in all or part of the Intellectual Property shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Intellectual Property without prior written authorization of LICENSOR or such copyright holder. LICENSOR is and shall at all times be the sole entity that may authorize you or any third party to use certification marks, trademarks or other special designations to indicate compliance with any LICENSOR standards or specifications.

This Agreement is governed by the laws of the Commonwealth of Massachusetts. The application to this Agreement of the United Nations Convention on Contracts for the International Sale of Goods is hereby expressly excluded. In the event any provision of this Agreement shall be deemed unenforceable, void or invalid, such provision shall be modified so as to make it valid and enforceable, and as so modified the entire Agreement shall remain in full force and effect. No decision, action or inaction by LICENSOR shall be construed to be a waiver of any rights or remedies available to it.

None of the Intellectual Property or underlying information or technology may be downloaded or otherwise exported or reexported in violation of U.S. export laws and regulations. In addition, you are responsible for complying with any local laws in your jurisdiction which may impact your right to import, export or use the Intellectual Property, and you represent that you have complied with any regulations or registration procedures required by applicable law to make this license enforceable.

1. Summary

Portrayal of geospatial information plays a crucial role in situation awareness, analysis and decision-making. Visualizing geospatial information often requires portraying the information using symbology or cartographic presentation rules from a community or organization. For example, within the law enforcement, fire and rescue community, various local, national and international agencies use different symbols and terminology for the same event, location or building, employing syntactic, structure-based and document-centric data models (e.g., eXtensible Markup Language (XML) schemas and Styled Layer Descriptors (SLD)). With this approach, interoperability does not extend to the semantic level, which makes it difficult to share, reuse and mediate unambiguous portrayal information between agencies.

This Engineering Report (ER) captures the requirements, solutions, models and implementations of the Testbed 13 Portrayal Package. This effort leverages the work on Portrayal Ontology development and the Semantic Portrayal Service conducted during Testbeds 10, 11 and 12. The objectives of Testbed 13 were to identify and close the gaps in the latest version of the portrayal ontology defined in Testbed 12, to complete the implementation of the Semantic Portrayal Service by adding rendering capabilities, and to perform a demonstration of the portrayal service that showcases the benefits of the proposed semantic-based approach.

1.1. Requirements

The Testbed 12 initiative defined and implemented a set of portrayal ontologies and a RESTful API for a Semantic Portrayal Service. Due to time limitations, the API implementation did not address the rendering aspect of the service and did not fully test the round-trip conversion of Styled Layer Descriptor (SLD) documents with the portrayal ontologies. The work presented in this Engineering Report addresses the following requirements:

  • Identify gaps between SLD and the prior microtheories developed during Testbed 12.

  • Define the renderer Application Programming Interface (API) and its output in image formats such as Portable Network Graphics (PNG).

  • Define the workflow of the demonstration scenario.

1.2. Key Findings and Prior-After Comparison

OGC has not previously explored an approach for representing portrayal information using semantic-based technologies. Current OGC standards such as Styled Layer Descriptor (SLD) and Symbology Encoding (SE) are document-centric and assume that the data models are based on Geography Markup Language (GML) encodings, making it difficult to share and reuse portrayal information that is based on other data models.

1.3. What does this ER mean for the Working Group and OGC in general

This Engineering Report (ER) is relevant to the GeoSemantics Domain Working Group (DWG) and a possible future Portrayal DWG. Both working groups should share the objective of enabling semantic interoperability of geospatial-related information. The portrayal ontologies produced by this testbed define a reusable, extensible, machine-tractable model that allows the sharing of layer and portrayal information. The portrayal ontology provides an extensible framework to represent styles for layers based on different data models and formats such as Extensible Markup Language (XML), JavaScript Object Notation (JSON), and others based on Linked Data. The Semantic Portrayal API showcases how semantic information can be shared using JSON for Linked Data (JSON-LD) and a hypermedia Representational State Transfer (REST) API combining semantic information and hypermedia controls.

1.4. Document contributor contact points

All questions regarding this document should be directed to the editor or the contributors:

Table 1. Contacts

Name                    Organization
Stephane Fellah         Image Matters LLC
Emilie Mitchell         Image Matters LLC
Simone Giannecchini     Geo-Solutions

1.5. Future Work

1.5.1. Map and Layer Profile

Layers and maps of geospatial data are commonplace, but there is no consistent and standard way to describe their metadata. While Layer and Map entities are derived from a Dataset entity, they have their own specific metadata. We propose that the next testbed investigate a profile for the Layer and Map concepts that extends the Registry Item and relates to the Datasets, Services and Portrayal Information developed for the Semantic Registry and Semantic Portrayal Service.

1.5.2. Coverage Portrayal

So far, the emphasis in developing the portrayal ontologies has been on modeling and representing portrayal information for feature data. The proposal is that the next testbed focus on addressing portrayal information for coverage data (in particular grid coverages). This would close the expressiveness gap between the portrayal ontology and the SLD and SE standards.

1.5.3. Composite Symbology Semantic Portrayal Service

For the next Testbed, we propose to extend the portrayal ontology to accommodate more complex symbols, such as composite symbols and symbol templates, in order to describe more advanced symbology standards such as the MIL-STD-2525D family of symbols. Investigation should also include other renderer outputs, such as a JSON encoding of the portrayal information, so that portrayal information can be handled on the client side in HTML5 Canvas or other rendering libraries such as D3.js.

1.6. Foreword

Attention is drawn to the possibility that some of the elements of this document may be the subject of patent rights. The Open Geospatial Consortium shall not be held responsible for identifying any or all such patent rights.

Recipients of this document are requested to submit, with their comments, notification of any relevant patent claims or other intellectual property rights of which they may be aware that might be infringed by any implementation of the standard set forth in this document, and to provide supporting documentation.

2. References

The following documents are referenced in this document. For dated references, subsequent amendments to, or revisions of, any of these publications do not apply. For undated references, the latest edition of the normative document referred to applies.

  • OGC 16-062 - OGC® Testbed-12 Catalogue and SPARQL Engineering Report

  • OGC 15-058 - OGC® Testbed-11 Symbology Mediation Engineering Report

  • OGC 15-054 - OGC® Testbed-11 Implementing Linked Data and Semantically Enabling OGC Services Engineering Report

  • OGC 13-084r2, OGC I15 (ISO19115 Metadata) Extension Package of CS-W ebRIM Profile 1.0, 2014

  • OGC 12-168r6, OGC® Catalogue Services 3.0 - General Model, 2016

  • OGC 11-052r4, OGC GeoSPARQL- A Geographic Query Language for RDF Data, 2011

  • OGC 09-026r2, OGC Filter Encoding 2.0 Encoding Standard - With Corrigendum

  • OGC 08-125r1, KML Standard Development Practices, Version 0.6, 2009.

  • OGC 07-147r2, KML Version 2.2.0, 2008

  • OGC 07-110r4, CSW-ebRIM Registry Service ebRIM profile of CSW (1.0.1), 2009

  • OGC 07-045, OGC Catalogue Services Specification 2.0.2 - ISO Metadata Application Profile (1.0.0), 2007

  • OGC 07-006r1, OpenGIS Catalogue Service Implementation Specification 2.0.2, 2007

  • OGC 06-129r1, FGDC CSDGM Application Profile for CSW 2.0 (0.0.12), 2006

  • OGC 06-121r9, OGC® Web Services Common Standard

  • OGC 06-121r3, OpenGIS® Web Services Common Specification, version 1.1.0 with Corrigendum 1 2006

  • OGC 05-078r4, OpenGIS Styled Layer Descriptor Profile of the Web Map Service Implementation Specification, Version 1.1.0, 2006

  • OGC 05-077r4, OpenGIS® Symbology Encoding Implementation Specification, Version 1.1.0, 2006.

  • ISO/TS 19139:2007, Geographic information — Metadata — XML schema implementation

  • ISO 19119:2005, Geographic information — Services

  • ISO 19117:2012, Geographic information — Portrayal

  • ISO 19115:2003, Geographic information — Metadata

  • ISO 19115:2003/Cor 1:2006, Geographic information — Metadata

  • ISO 19115-1:2014, Geographic information — Metadata — Part 1: Fundamentals

  • Dublin Core Metadata Initiative, last visited 12-09-2016, available from http://dublincore.org/

  • NSG Metadata Foundation (NMF) – Part 1: Core, version 2.2, 23 September 2014 https://nsgreg.nga.mil/doc/view?i=4123

  • DGIWG 114, DGIWG Metadata Foundation (DMF), last visited 12-09-2016, available from https://portal.dgiwg.org/files/?artifact_id=9189&format=pdf

  • DoD Discovery Metadata Specification (DDMS), last visited 12-09-2016, available from https://metadata.ces.mil/dse-help/DDMS/index.htm

  • SPARQL Protocol and RDF Query Language (SPARQL), last visited 12-09-2016, available from https://www.w3.org/TR/rdf-sparql-query

  • DCAT, last visited 12-09-2016, available from https://www.w3.org/TR/vocab-dcat/

  • National System for Geospatial Intelligence Metadata Implementation Specification (NMIS) – Part 2: XML Exchange Schema

  • Project Open Data Metadata Schema v1.1 https://project-open-data.cio.gov/v1.1/schema/

  • Asset Description Metadata Schema (ADMS) https://www.w3.org/TR/vocab-adms/

  • JSON-LD 1.0 https://www.w3.org/TR/json-ld/

  • OWL-S: Semantic Markup for Web Services https://www.w3.org/Submission/OWL-S/

3. Terms and definitions

For the purposes of this report, the definitions specified in Clause 4 of the OWS Common Implementation Standard [OGC 06-121r9] shall apply. In addition, the following terms and definitions apply.

3.1. feature

representation of some real world object or phenomenon

3.2. interoperability

capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units [ISO 19119]

3.3. layer

basic unit of geographic information that may be requested as a map from a server

3.4. linked data

a pattern for hyperlinking machine-readable data sets to each other using Semantic Web techniques, especially via the use of RDF and URIs. Enables distributed SPARQL queries of the data sets and a browsing or discovery approach to finding information (as compared to a search strategy). Linked Data is intended for access by both humans and machines. Linked Data uses the RDF family of standards for data interchange (e.g., RDF/XML, RDFa, Turtle) and query (SPARQL). If Linked Data is published on the public Web, it is generally called Linked Open Data.

3.5. map

pictorial representation of geographic data

3.6. model

abstraction of some aspects of a universe of discourse [ISO 19109]

3.7. ontology

a formal specification of concrete or abstract things, and the relationships among them, in a prescribed domain of knowledge [ISO/IEC 19763]

3.8. portrayal

presentation of information to humans [ISO 19117]

3.9. semantic interoperability

the aspect of interoperability that assures that the content is understood in the same way in both systems, including by those humans interacting with the systems in a given context

3.10. semantic mediation

transformation from one or more datasets into a dataset based on a different conceptual model.

3.11. symbol

a bitmap or vector image that is used to indicate an object or a particular property on a map.

3.12. symbology encoding

style description to apply to the digital features being rendered

3.13. syntactic interoperability

the aspect of interoperability that assures that there is a technical connection, i.e. that the data can be transferred between systems

4. Conventions

4.1. Abbreviated terms

  • API Application Programming Interface

  • CRS Coordinate Reference System

  • CSV Comma Separated Values

  • CSW Catalog Services for the Web

  • DCAT Data Catalog Vocabulary

  • DCAT-AP DCAT Application Profile for Data Portals in Europe

  • DCMI Dublin Core Metadata Initiative

  • EARL Evaluation and Report Language

  • EU European Union

  • EuroVoc Multilingual Thesaurus of the European Union

  • GEMET GEneral Multilingual Environmental Thesaurus

  • GML Geography Markup Language

  • GeoDCAT-AP Geographical extension of DCAT-AP

  • GeoJSON Geospatial JavaScript Object Notation

  • IANA Internet Assigned Numbers Authority

  • INSPIRE Infrastructure for Spatial Information in the European Community

  • ISO International Organization for Standardization

  • JSON JavaScript Object Notation

  • JRC European Commission - Joint Research Centre

  • MDR Metadata Registry

  • N3 Notation 3 format

  • NAL Named Authority Lists

  • OGC Open Geospatial Consortium

  • OWL Web Ontology Language

  • RDF Resource Description Framework

  • RFC Request for Comments

  • SE Symbology Encoding

  • SLD Styled Layer Descriptor

  • SKOS Simple Knowledge Organization System

  • SPARQL SPARQL Protocol and RDF Query Language

  • SVG Scalable Vector Graphics

  • TTL Turtle Format

  • URI Uniform Resource Identifier

  • URL Uniform Resource Locator

  • URN Uniform Resource Name

  • W3C World Wide Web Consortium

  • WG Working Group

  • WKT Well Known Text

  • XML eXtensible Markup Language

  • XSLT eXtensible Stylesheet Language Transformations

5. Overview

This ER is broken down into three sections. The first section is related to the Portrayal ontology modeling. It documents the changes needed to better align with the SLD and SE standards, as well as lessons learned from the implementations. The second section is related to the Semantic Portrayal Service Application Programming Interface (API). It documents the changes related to the API, in particular the endpoints related to the rendering of geospatial data and legends. The last section focuses on documenting the workflow of the Portrayal Demonstration and the challenges and issues found during the Technology Integration Experiments (TIE). The ER also has two appendices: the first documents the portrayal ontology, and the second documents the Representational State Transfer (REST) API of the Semantic Portrayal Service.

6. Portrayal Ontologies

This section summarizes the findings, design approaches and changes in the portrayal ontologies.

6.1. Background

The formalization of portrayal ontologies started in OGC Testbed 10, where the focus was on representing point-based symbologies related to Disaster and Emergency Management.

An Incident Ontology and Taxonomy for Natural Events and Emergency Incidents was developed and used to represent incidents that could be depicted using the Homeland Security Working Group (HSWG) symbology for incidents.

The initial implementation of the Semantic Portrayal Service during OGC Testbed 11 focused on defining the styles, portrayal rules, point-based symbols and graphics needed to enable a Web Processing Service (WPS) to produce an SLD document. The initial ontology was heavily based on the ISO 19117 Geographic Information - Portrayal standard.

However, during the implementation of style renderers and the development of the graphic ontology during Testbed 12, it was concluded that ISO 19117 was mostly designed for runtime implementation (for example, the use of portrayal functions) rather than for a declarative approach.

It was found that the OGC SE standard provides a declarative approach based on XML encoding that is better aligned with modern renderer API approaches such as Java Canvas, Hypertext Markup Language (HTML) Canvas, Scalable Vector Graphics (SVG), MapCSS, and the ESRI Map Renderer. The portrayal ontologies were updated by introducing a symbolizer microtheory aligned with SE and a graphic ontology based on SVG constructs. The scope of the portrayal ontologies was limited to vector-based (feature-based) representation.

6.2. Goals for Testbed 13

The objective of Testbed 13 in terms of Portrayal Ontology development is to identify the gaps between the SLD/SE standards and the ontologies developed during Testbed 12. The scope of this analysis is limited to vector data only; further work is needed for coverage data (raster data in particular) in future testbeds. To conduct this analysis, a round-trip conversion from SLD to a Linked Data representation and vice versa was performed. The goal is for the portrayal ontologies to be at least as expressive as SLD/SE and able to support rendering tasks. The second objective is to test the ability of the portrayal ontology to work on models other than XML, by testing its application to a Linked Data representation.

6.3. Findings

6.3.1. Tight Coupling of SLD/SE with XML Model

The SLD/SE standards are tightly coupled with the OGC Feature Model and its XML encoding in GML. The implementation of the standards in an OGC Web Map Service (WMS) typically assumes that vector data is provided by an OGC Web Feature Service (WFS). A short notation based on CQL has been introduced to try to bridge the gap, but it is mostly used as syntactic sugar. There are many formats that are not based on XML, such as GeoJSON, Linked Data formats (Turtle, JSON-LD, N-Triples), and Comma Separated Values (CSV). Each of these formats uses a different schema language (JSON Schema, OWL, RDFS, CSV schema). To enable semantic interoperability of portrayal information with a feature model, there needs to be a way to represent the feature model semantically and map it to the different schema encodings that exist for each format. There is also a need for an extensible addressing framework that can accommodate different data models.

6.3.2. Identifications

One of the main challenges with the SLD/SE standards is the lack of globally unique identifiers for the different portrayal information artifacts expressed in SLD/SE. The identifiers are scoped to the SLD document, which makes it hard to reuse them or to link to artifacts expressed in other SLD documents or knowledge bases, such as a feature dictionary.

Another challenge is the ability to identify feature types in a unified way that is independent of the data model used. Most OGC standards use XML Schema and GML to represent feature type definitions. In Linked Data (such as in GeoSPARQL), feature types are represented by an OWL class Uniform Resource Identifier (URI). Property binding in SLD uses XPath to represent paths in the XML structure. In Linked Data, properties are typically defined globally and identified by a URI. SPARQL Protocol and RDF Query Language (SPARQL) and the Shapes Constraint Language (SHACL) provide mechanisms to define RDF paths. A unified approach is needed to define property paths on different data models (JSON, XML, Linked Data, CSV, etc.).

6.3.3. Expression Bindings

The OGC SE standard uses the OGC Filter Encoding standard [OGC 09-026r2] to express portrayal rule conditions and to bind expressions to symbolizer attributes. The OGC Filter Encoding standard describes an XML and Key-Value Pair (KVP) encoding of a system-neutral syntax for expressing the projection, selection and sorting clauses of a query expression. The intent is that this neutral representation can be easily validated, parsed and then translated into a target query language such as SPARQL or SQL for processing. The OGC SE standard extends the expression model with some pre-built functions commonly used in portrayal (categorization and formatting functions). While the goal of the OGC Filter Encoding standard is to define a system-neutral syntax, it suffers from several drawbacks:

  • The standard requires implementing converters from OGC Filter to a target native query language (e.g., SPARQL, SQL), which are not always trivial to implement against specific target data models.

  • Verbosity: the XML encoding of a simple expression can be very verbose compared to other standard query languages (CQL, SPARQL).

  • Lack of a standard mechanism to define and share functions.

  • Assumption that the feature model is mappable to XML and can be represented in XML Schema. This is not always the case or even feasible (e.g., JSON, RDF, CSV); other schema languages may be used instead, such as JSON Schema, OWL, RDF Schema or CSV Schema.

  • Use of XML QNames: the QName-to-URI mapping breaks down when mapping to an RDF ontology. Fundamentally, using QNames as abbreviations for URIs is problematic. QNames have a number of restrictions on them because they are built to be legal XML names: the kinds of things that one can call elements and attributes. URIs do not have these restrictions: it is perfectly possible for the last part of a URI to consist purely of numbers, to end with a slash, or even to carry request parameters. Meaningful QNames can be used for some URIs, but if they cannot be used consistently for all URIs, a better mechanism is needed.

All the parameter attribute values for the symbolizers defined in the OGC SE standard XML encoding require an OGC expression based on the OGC Filter specification. This makes the encoding very verbose when a user simply wants to assign a literal value, such as a number for the stroke width. It also makes it difficult to validate the expression, as the parameter attributes are not strongly typed.

6.3.4. Feature Type modeling

SLD/SE defines two properties to refer to FeatureType information:

  • The FeatureTypeName identifies the specific feature type that the feature-type style is for. It is optional, but only if a feature type is in context or if the style is intended for use with a number of feature types identified using a SemanticTypeIdentifier.

  • The SemanticTypeIdentifier is experimental and is intended to identify what the feature style (or coverage style, when used inside a CoverageStyle) is suitable to be used for, using community-controlled name(s). For example, a single style may be suitable for use with many different feature types. The syntax of the SemanticTypeIdentifier string is undefined, but the strings “generic:line”, “generic:polygon”, “generic:point”, “generic:text”, “generic:raster”, and “generic:any” are reserved to indicate that a FeatureTypeStyle may be used with any feature type with the corresponding default geometry type (i.e., no feature properties are referenced in the feature style).

These properties are not well adapted to accommodate different schema languages and do not provide mechanisms to indicate where to get additional information about the feature types. For example, an OpenStreetMap feature could be encoded in GML, JSON or XML and would require a different SLD encoding for each of these schemas. It would be useful to define the feature type at the conceptual (semantic) level and then provide links to the different encodings and distributions of the schema. Having a globally unique identifier for each feature type would enable the integration/linking of the feature type dictionary with styling information and minimize duplication.
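To illustrate this recommendation, a feature type identified by a global URI and linked to several concrete schema encodings could be sketched in Turtle as follows. The namespace, the schemaEncoding property and the URLs are purely illustrative; they are not part of any existing OGC ontology.

@prefix ft:   <http://example.org/ont/featuretype#> .   # placeholder namespace
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# A feature type identified globally, with links to its concrete schema encodings.
<http://example.org/featuretypes/Road>
    a ft:FeatureType ;
    rdfs:label "Road" ;
    ft:schemaEncoding <http://example.org/schemas/road.xsd> ,
                      <http://example.org/schemas/road-schema.json> .

A style could then reference the feature type URI directly, independently of which encoding of the data is actually rendered.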

6.3.5. Layer

The OGC SLD standard defines the concept of a “layer” as a collection of features that can potentially be of various mixed feature types. A named layer is a layer that can be accessed from an OGC Web Server using a well-known name. For example, the WMS interface uses the LAYERS parameter to reference named layers, as in the example parameter:

LAYERS=Rivers,Roads,Houses

The SLD standard defines the concept of a NamedLayer to represent map layers that can be referred to by name by a different service. The name is defined locally to the document and is not a global identifier. This is an issue when one wants to relate a layer to concepts that are defined outside the document, such as a dataset used by the layer (which could be defined by a DCAT Dataset instance).

The NamedLayer element in the SLD standard is defined by the following XML-Schema fragment:

<xsd:element name="NamedLayer">
  <xsd:complexType>
                <xsd:sequence>
                        <xsd:element ref="se:Name"/>
                        <xsd:element ref="se:Description" minOccurs="0"/>
                        <xsd:element ref="sld:LayerFeatureConstraints" minOccurs="0"/>
                        <xsd:choice minOccurs="0" maxOccurs="unbounded">
                                <xsd:element ref="sld:NamedStyle"/>
                                <xsd:element ref="sld:UserStyle"/>
                        </xsd:choice>
                </xsd:sequence>
        </xsd:complexType>
</xsd:element>

The LayerFeatureConstraints element is optional in a NamedLayer and allows the user to specify constraints on what features of what feature types are to be selected by the named-layer reference. It uses OGC Filter as the filter language, which is mostly designed for data that are mappable to XML.

A named styled layer can include any number of named styles and user-defined styles, including zero, mixed in any order. If zero styles are specified, then the default styling for the specified named layer is used. A named style, like a named layer, is referenced by a well-known name, and a particular named style only has meaning when used in conjunction with a particular named layer. One of the issues with this approach is that the name of the layer is defined locally; it is not a globally unique, referenceable identifier that could be reused by layers defined outside the document.

A UserLayer is defined as a subclass of NamedLayer for representing a user-defined layer built from WFS and WCS data only, with inline features encoded only as a GML FeatureCollection. This is too restrictive, as it does not allow a UserLayer to leverage other data sources such as GeoJSON, Linked Data sources accessible from a SPARQL endpoint, or other popular formats such as CSV or Shapefile. UserLayer is defined by the following XML-Schema fragment:

  <xsd:element name="UserLayer">
    <xsd:annotation>
      <xsd:documentation>
        A UserLayer allows a user-defined layer to be built from WFS and
        WCS data.
      </xsd:documentation>
    </xsd:annotation>
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element ref="sld:Name" minOccurs="0"/>
        <xsd:element ref="sld:RemoteOWS" minOccurs="0"/>
        <xsd:element ref="sld:LayerFeatureConstraints"/>
        <xsd:element ref="sld:UserStyle" maxOccurs="unbounded"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>

A user-defined style allows map styling to be defined externally from a system and to be passed around in an interoperable format. The XML-Schema fragment for the UserStyle SLD element is defined as follows:

<xsd:element name="UserStyle">
        <xsd:complexType>
                <xsd:sequence>
                        <xsd:element ref="se:Name" minOccurs="0"/>
                        <xsd:element ref="se:Description" minOccurs="0"/>
                        <xsd:element ref="sld:IsDefault" minOccurs="0"/>
                        <xsd:choice maxOccurs="unbounded">
                                <xsd:element ref="se:FeatureTypeStyle"/>
                                <xsd:element ref="se:CoverageStyle"/>
                                <xsd:element ref="se:OnlineResource"/>
                        </xsd:choice>
                </xsd:sequence>
        </xsd:complexType>
</xsd:element>
<xsd:element name="IsDefault" type="xsd:boolean"/>

A UserStyle can contain one or more FeatureTypeStyles or CoverageStyles, which allow the rendering of features of specific types. These are described in OGC Symbology Encoding. These styles can either be provided inline within the SLD document, or they can be referenced using an OnlineResource containing an OGC SE document with a FeatureTypeStyle or CoverageStyle root element. This organization allows the more convenient use of feature-style libraries.

There is no standard that allows the management of user-defined layers and enables global referencing of layers so that they can be reused in different maps. There is a need for an API and for global referencing of layers, as well as a need to create and describe the concept of a layer across multiple data models and APIs.

6.3.6. Legend

A legend plays a central role in the interpretation of a map. Legends are typically included with maps to indicate to the user how various features are represented in the map or on the layer. It is therefore important to be able to produce a legend on a map display client. Generating a legend on the client side may involve a significant amount of processing: the client needs to examine the selected style and determine which rules apply at the currently used map scale. While this would save some interactions between the client and server and would allow the viewer client to present consistent sample shapes (across remote map servers from different vendors), the legend graphics might look different from the graphics actually rendered in the map, since the viewer and server may have different rendering engines and different graphical capabilities. A better approach is either to associate the legend with a given rule or style, or to provide API endpoints to generate a legend for a style and legend item glyphs for a given rule or symbolizer.

There are currently three OGC specifications that are relevant to representing legends: the OGC Symbology Encoding (SE), OGC Styled Layer Descriptor (SLD) 1.0 and 1.1, and OGC Web Map Service (WMS) specifications.

In the OGC SE specification, a Rule can be associated with a LegendGraphic element referring to a Graphic Symbolizer. The Graphic Symbolizer can refer to an external graphic using a URL or be defined as a Mark. LegendGraphic only has a limited role in building legends. For vector types, a map server would normally render a standard vector geometry (such as a box) with the given symbolization for a rule. But for some layers, such as Digital Elevation Model (DEM) data, there is not really a “standard” geometry that can be rendered to get a good representative image. This is what the LegendGraphic SE element is intended for: to provide a substitute representative image for a Rule. For example, it might reference a remote URL for a DEM layer called “GTOPO30”:

http://www.vendor.com/sld/icons/COLORMAP_GTOPO30.png

The OGC SLD specification defines the GetLegendGraphic operation to generate a legend as an image from an SLD style, rule and feature type, with flexible legend options to render the layout and labels of the legend on the server side. An SLD-WMS request for GetLegendGraphic, encoded in KVP, can look like this:

http://www.vendor.com/wms.cgi?
 VERSION=1.1.0&
 REQUEST=GetLegendGraphic&
 LAYER=ROADL_1M%3Alocal_data&
 STYLE=my_style&
 RULE=highways&
 SLD=http%3A%2F%2Fwww.sld.com%2Fstyles%2Fkpp01.xml&
 WIDTH=16&
 HEIGHT=16&
 FORMAT=image%2Fgif&

This would produce a 16x16 icon for the Rule named “highways” defined within layer “ROADL_1M:local_data” in the SLD. The list of available formats for legend graphics and exceptions can be assumed to be the same as are available for a map in the WMS GetMap request.

In the OGC Web Map Service 1.3 specification, a Style may contain a LegendURL that provides the location of an image of a map legend appropriate to the enclosing style. A Format element in LegendURL indicates the MIME type of the legend image, and the optional attributes width and height state the size of the image in pixels. Servers should provide the width and height attributes if known at the time of processing the GetCapabilities request. The legend image should clearly represent the symbols, lines and colors used in the map portrayal. The legend image should not contain text that duplicates the Title of the layer, because that information is known to the client and may be shown to the user by other means.

One of the challenges encountered today by web developers is the display of legends for maps. In many instances, legends are returned as bulk images containing multiple items for different symbolizers. Most modern web applications use responsive web design, an approach in which design and development respond to the user’s behavior and environment based on screen size, platform and orientation. This implies that the client needs the ability to lay out legend items in a flexible way, positioning their labels depending on the screen format. The use of a single image for the whole legend is not well adapted to responsive design. A more flexible mechanism is needed to convey the whole legend of a map by decomposing it into legend items that can be customized through a REST endpoint. Very often a legend image is also encoded inline using base64 encoding instead of referring to a remote URL, which can add overhead when fetching each individual item.

The Portrayal Service should also accommodate the custom rendering of legend items (glyphs) for symbols, symbolizers and rules to provide visual cues when searching these items in the portrayal service. To address this challenge, an ontology was designed and encoded to describe Legend and Legend Item concepts that can be used in a portable way. The full model is documented in Appendix A.

6.4. Design

6.4.1. Expression Ontology

To address the issues related to the encoding of expressions described in the previous section, an extensible approach was adopted. It leverages the heterogeneity of data models, schema languages and query language standards, instead of attempting to homogenize the query language. There are a number of well-defined standards that are used for different data models. In the case of Linked Data, SPARQL and its OGC extension GeoSPARQL are well established. For XML, XQuery is the W3C standard. Each of these languages has standard mechanisms for defining functions. The advantage of this approach is that the full expressiveness of each language and the existing query engines can be leveraged without requiring a conversion process.

The key idea of the adopted approach is to treat expressions as literals and associate a standard URI identifying the query language of the expression. A similar approach has been used in OWL-S for representing conditions and parameter bindings in workflows of web services.

A standalone ontology for expression was defined, anticipating that it could be reused in other standards that require query expression in different languages.

The core concept of the ontology is Expression. It has only two properties: expressionBody, which captures the expression as a literal, and expressionLanguage, which refers to the URI of the standard query language. The ontology introduces two convenience subclasses, OGCExpression and SPARQLExpression, which fix the value of the query language (see Figure 1).

Figure 1. Expression Model

The following are URLs for the query languages that have been used:

Language            URL
SPARQL Query 1.0    http://www.w3.org/ns/sparql-service-description#SPARQL10Query
SPARQL Query 1.1    http://www.w3.org/ns/sparql-service-description#SPARQL11Query
OGC Filter
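As an illustration, a SPARQLExpression instance could be encoded in Turtle as follows. The expr: namespace and the instance URI are placeholders for this example; the class and property names follow the model described above.

@prefix expr: <http://example.org/ont/expression#> .   # placeholder namespace
@prefix sd:   <http://www.w3.org/ns/sparql-service-description#> .

# A SPARQL 1.1 expression selecting features whose population exceeds one million.
<http://example.org/expressions/largeCity>
    a expr:SPARQLExpression ;
    expr:expressionBody "?population > 1000000" ;
    expr:expressionLanguage sd:SPARQL11Query .

Because SPARQLExpression fixes the value of the query language, the expressionLanguage statement is redundant here and is shown only for clarity.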

This ontology was used successfully to convert SLD portrayal rule expressions, but it was not fully tested for the rendering of feature data. The Expression ontology is documented in Appendix A.

6.4.2. Binding Ontology

In the symbolizer and graphic ontologies, the graphic properties have a range that corresponds to the type of their values. Using expressions directly as values would invalidate the model against the ontology rules (i.e., if the range of a graphic property is defined as xsd:integer, assigning an expression object would be invalid in OWL). To address the issue of validating parameter values (SVGParameter) in the SE encoding, a lightweight Binding ontology was introduced. Symbolizer and PortrayalRule are defined as subclasses of Parameterizable. A Binding can be attached to any Parameterizable object (see Figure 2). It assigns an expression to a property of the Parameterizable instance.

Figure 2. Binding Model
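For illustration, a binding that computes the stroke width of a line symbolizer from a feature attribute could be sketched in Turtle as follows. The namespace and the property names binding, boundProperty and strokeWidth are illustrative only; the normative vocabulary is given in Appendix A.

@prefix style: <http://example.org/ont/portrayal#> .   # placeholder namespace
@prefix ex:    <http://example.org/styles#> .

# The symbolizer is a Parameterizable object; the attached Binding assigns a
# SPARQL expression to one of its properties instead of a literal value.
ex:roadSymbolizer
    a style:LineSymbolizer ;
    style:binding [
        a style:Binding ;
        style:boundProperty style:strokeWidth ;
        style:expression [
            a style:SPARQLExpression ;
            style:expressionBody "?laneCount * 1.5"
        ]
    ] .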

This ontology was used successfully to convert SLD portrayal rule expression bindings, but it was not tested for the rendering of feature data.

The Binding ontology is documented in Appendix A.

6.4.3. Legend Ontology

A Legend plays a central role in understanding the meaning of a map or layer. It associates the symbols used in a style with their intended denotation (meaning). This meaning is often conveyed by a human-readable label, but it could also be associated with a machine-processable concept. A layer can have multiple layer styles, and each style has an associated legend.

A formal model for a legend was designed to encourage practices that make it easier for a client to lay out legends automatically for a variety of screen sizes. To address this, the legend is broken down into a set of individual items that can be laid out independently. The model can still represent a legend with different symbolizers as a single image using only one item; however, it is preferable to decompose each symbolizer/rule into its own legend item to obtain more flexibility in the layout of the legend.

The Legend ontology is documented in Appendix A.
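A minimal sketch of a legend expressed with this model might look like the following in Turtle. The namespace and the property names legendItem, label and symbolizer are illustrative only; the normative vocabulary is given in Appendix A.

@prefix leg: <http://example.org/ont/legend#> .   # placeholder namespace
@prefix ex:  <http://example.org/styles#> .

# A legend decomposed into individual legend items, one per symbolizer/rule,
# so a client can lay the items out to fit the target display.
ex:roadsLegend
    a leg:Legend ;
    leg:legendItem [
        a leg:LegendItem ;
        leg:label "Highways" ;
        leg:symbolizer ex:highwaySymbolizer
    ] , [
        a leg:LegendItem ;
        leg:label "Local roads" ;
        leg:symbolizer ex:localRoadSymbolizer
    ] .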

6.4.4. Layer Model

For Testbed 13, an initial model was developed to represent a map layer that addresses some of the findings described in the previous section. This model was introduced near the end of the implementation phase in order to support the creation, update, cloning and search of map layers. The model may need refinement in future testbeds.

The layer model is slightly different from the one defined in current OGC services, in the sense that it can accommodate multiple data sources using different models and formats (XML, JSON, CSV, Shapefile, WFS, GML). These data sources are typically accessible from a URL but could be set inline to capture annotations or small datasets (as demonstrated in the Testbed with GeoJSON). Each data source can have several parameters that are represented as a set of key-value pairs. The description of the parameters should be provided in the capabilities document of the portrayal service or a dedicated endpoint. This aspect has only been partially addressed in this testbed due to lack of time.
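As a sketch, a layer with a remote GeoJSON data source and an associated style could be described along the following lines in Turtle. The namespace, property names and URLs are illustrative only and do not reproduce the actual layer vocabulary.

@prefix lyr: <http://example.org/ont/layer#> .   # placeholder namespace
@prefix ex:  <http://example.org/layers#> .

# A layer referencing a remote GeoJSON source and a style; the data source
# carries its format and a set of key-value parameters.
ex:riversLayer
    a lyr:Layer ;
    lyr:title "Rivers" ;
    lyr:dataSource [
        a lyr:DataSource ;
        lyr:format "application/geo+json" ;
        lyr:sourceURL <http://example.org/data/rivers.geojson> ;
        lyr:parameter [ lyr:key "encoding" ; lyr:value "UTF-8" ]
    ] ;
    lyr:style ex:riversStyle .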

The focus of the layer modeling within the Portrayal Service was to capture the information necessary to perform the rendering of the layer. The design was not focused on capturing metadata to enable search and discovery of layers in a semantic registry, which was investigated in another Testbed 13 thread related to the Semantic Registry Information Model (SRIM) [11]. While there is some overlap between the two models, the focus here was on the essential metadata needed to render the layer.

Future testbeds will need to investigate the reconciliation of the SRIM profile for layer and map concepts with the layer description the portrayal service needs to perform its rendering. The role of each service should also be clarified, for example, which service should capture metadata about layers. The definition of a data source also needs to be reconciled with the Distribution concept defined in DCAT. Coordinating efforts with the W3C Spatial Data on the Web Working Group will help to resolve some of these issues.

7. Semantic Portrayal Service

This section summarizes the findings, design approaches, and changes for the Semantic Portrayal Service.

7.1. Findings

During OGC Testbeds 10 and 11, the initial set of portrayal ontologies was developed to represent point-based symbols. In OGC Testbed 12, the model was extended to support symbolizers for lines and polygons. A design and implementation of a REST-based Semantic Portrayal Service was accomplished to manage portrayal information using the Semantic Registry Service based on the Semantic Registry Information Model (SRIM) developed during Testbed 12. REST CRUD (create, read, update, and delete) operations were implemented to manage and search styles, symbolizers and symbols using dedicated endpoints for each. Due to lack of time, the rendering endpoints for layers, symbolizers and legends were not implemented. The service also did not have the ability to manage layers, perform faceted search on portrayal items, or perform bulk import of portrayal information encoded either as SLD documents or in a Linked Data format.

The goal of OGC Testbed 13 was to implement the rendering endpoints needed to demonstrate the capabilities of the Semantic Portrayal Service. Another goal was to import and export SLD documents and identify and address the gaps between the ontological model and the SLD model.

During the investigation of these gaps, it was identified that map layer management was also needed. Users want to create layers with customized styles that can be saved, so the layers can be easily accessible through a REST API. None of the current OGC service specifications support the ability to manage and search layers with user-defined styles.

The need for a profile of the Semantic Registry for layer and map metadata was also identified. This task was addressed by another thread in Testbed 13. The portrayal effort of this thread will need to be reconciled with the results of the Semantic Registry thread activities in future testbeds.

7.2. Design

7.2.1. REST API Design

The Semantic Portrayal Service implementation is accessible through a hypermedia-driven, Linked Data REST-based API that provides access to layer and portrayal information (styles, rules, symbolizers, symbols) from the service. The Semantic Portrayal Service implements REST API Level 3 and Level 2 of the Richardson Maturity Model (see the Testbed 12 ER) and a Linked Data API.

The style information is encoded in the following representations: RDF/XML, Turtle, N-Triples, JSON-LD and HAL-JSON. The Semantic Portrayal Service REST API is described in more detail in Appendix B.

7.2.2. Importing Portrayal Information

To import portrayal information, a pluggable, extensible design approach was adopted so that it can accommodate a variety of formats as standards evolve in the industry. Each importer type provides a list of parameters with a name, description, type and cardinality. This information is used to build User Interface (UI) forms that capture value bindings for each parameter. For the OGC Testbed 13 initiative, importers for SLD 1.0, SLD 1.1 and Linked Data formats (RDF/XML, Turtle, N-Triples), based on the portrayal ontologies, were implemented. The importers can work with a remote URL or a file attachment.

7.2.3. Import SLD

The OGC SLD importer was designed to import SLD documents from either a URL or a file attachment. The imported document was parsed, mapped to the updated portrayal ontology, and registered in the Semantic Registry implementing the portrayal information. A complete mapping of all related feature style information to the portrayal ontology was accomplished successfully by updating the ontology model with the expression and binding micro-ontologies. The mapping was found to be complete and feasible. The ontology is in fact more expressive than OGC SLD, as it can accommodate multiple data models (JSON, XML, RDF) and expression languages.

7.2.4. Linked Data Import

The Linked Data model provides a powerful integration mechanism to represent any objects, properties and relationships in a way that can be understood unambiguously by machines (for example, to perform inferences). The OGC Testbed 12 implementation of the Semantic Portrayal Service provided the ability to perform transactions on portrayal items in a very granular way by using dedicated endpoints for each item type (symbols, styles, symbolizers). While this was a valid approach, the performance of uploading a large set of portrayal information was poor. It also limits one of the main benefits of using Linked Data: linking objects of different types in a semantic graph.

For Testbed 13, an importer for Linked Data formats (RDF/XML, Turtle, N-Triples) was implemented to upload any portrayal concepts managed by the portrayal service and expressed with the portrayal ontologies used by the service. The Linked Data model was processed on the server side by iterating over all instances of the resource types supported by the service and indexing them in the semantic registry. URIs of the resources were used to generate internal identifiers consistently. A significant improvement in bulk upload time was observed compared to the more granular approach taken in the previous Testbed.

A future improvement that could be addressed in future OGC Testbeds for this endpoint is to integrate a Shapes Constraint Language (SHACL) validator to validate the shapes of the objects, ensuring that they contain the properties required by the service to perform its tasks. An investigation of the use of Linked Data for exporting all portrayal information managed by the service should also be performed.

7.2.5. Export SLD

Exporting SLD XML from the portrayal service was done through the /export endpoint. Using the portrayal ontologies, the conversion of the FeatureTypeStyle ontological concept to a full SLD document was accomplished without loss. However, the export of symbolizers alone was not possible without breaking the XML validation against the SLD document schema.

7.2.6. Integration with Portrayal Registry

The Semantic Portrayal Service used the Portrayal Registry to store style items. The SRIM profile was updated to accommodate the portrayal items by using the portrayal ontologies. An endpoint was also added to the portrayal registry to import Linked Data for portrayal information. The import endpoint of the Portrayal Service was interfaced with the Semantic Registry import by forwarding the request to the registry. The search of portrayal items in the Semantic Portrayal Service was delegated to the Semantic Registry. Figure 3 shows a client view of the portrayal information stored in the semantic registry.

Figure 3. Portrayal Registry Client

7.2.7. Rendering

For the Testbed 13 initiative, the following types of renderers for the portrayal service were identified:

  • Layer Renderer: renders layers managed by the service, but also transient layers (layer specifications submitted to the renderer but not managed by the service).

  • Map Renderer: renders a map from multiple layers managed by a portrayal service. It was chosen to follow the WMS GetMap operation protocol to facilitate integration with existing web map clients such as Leaflet, OpenLayers or MapBox. These APIs do not require a full implementation of the WMS specification, as most of them only use the GetMap operation and ignore the GetCapabilities operation.

  • Glyph Renderer: renders a symbolizer or symbol into an image that can be used as a legend item or as a preview of the symbolizer (or symbol). This is useful when symbolizers are defined by users and clients need a way to generate a legend for the symbolizer.

  • Legend Renderer: generates a legend object composed of multiple legend items (each one representing a symbolizer corresponding to a portrayal rule). A Legend ontology was developed to describe the legend and legend items in JSON and Linked Data form. This facilitates the interoperable sharing of legend information and also gives clients the flexibility to perform a customized layout of the legend for a specific target display device. Due to lack of time, this renderer was not implemented.

The details of each renderer design are as follows:

Layer Renderer

While developing the client for the portrayal service, the following use cases for rendering layers were identified:

  • Render a layer by using its identifier with its associated styles.

  • Render a layer by using its identifier and override its styling information, either by using different bindings for its associated styles or by replacing the associated styles with new styles.

  • Render a transient layer by passing its data source information and style information explicitly.

To address these use cases, different endpoints for layer rendering were introduced:

To address the first use case, which renders a layer by referring to its internal id, a convenience rendering endpoint following REST principles was provided, using the following pattern:

/layers/{id}/renderer

This endpoint renders the layer based on its definition, using its associated data source and styles. When this endpoint is used without parameters, the default extent of the data sources is used, with a default size proportional to the bounding box and the CRS of the dataset. By making all the parameters optional, this provides a quick way to get a rendering of the layer from a simple URL. The rendering endpoint also accepts parameters such as CRS, BBOX, width, height, and style or symbolizer identifiers to provide a custom rendering of the layer within a given bounding box, in a given CRS, and at a given image size.
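For example, a custom rendering request against this endpoint might look like the following (the host, layer identifier and parameter values are illustrative; the exact parameter names are documented in Appendix B):

http://www.example.com/portrayal/layers/1234/renderer?
 CRS=EPSG%3A4326&
 BBOX=-130,24,-66,50&
 WIDTH=1024&
 HEIGHT=512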

When layer URIs are used to reference a layer, a second endpoint is available. It works in a more flexible way to render the layer, using the following URL pattern:

/renderer/layer

This endpoint requires either a layer internal id or a URI to refer to a layer, and uses the same parameters as the former endpoint. It allows the rendering of a layer that is accessible remotely through a resolvable URL. It was suggested to investigate distributed rendering of layers in future testbeds.

Details of the layer renderer endpoints are provided in Appendix B.

Map Renderer

To facilitate the integration of the Portrayal Service with existing clients, a map renderer endpoint was implemented using the WMS GetMap protocol parameters of the HTTP GET operation, such as the BBOX, CRS, WIDTH, HEIGHT, STYLE and FORMAT parameters. The Map Renderer endpoint is documented in Appendix B.
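A GetMap-style request to the map renderer could therefore resemble the following. The host, the endpoint path and the parameter values are placeholders; only the WMS-style parameters listed above are shown, and the actual endpoint is documented in Appendix B.

http://www.example.com/portrayal/renderer/map?
 BBOX=-10,40,10,55&
 CRS=EPSG%3A4326&
 WIDTH=800&
 HEIGHT=600&
 STYLE=highway_style&
 FORMAT=image%2Fpng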

Most web map clients that support the WMS protocol, such as OpenLayers, MapBox and Leaflet, only use the GetMap operation endpoint of the standard. GetCapabilities is typically used by developers to obtain the identifier of the layer. The approach taken by the Portrayal Service is compatible with this usage. The only difference is that layers can be discovered with the layer search endpoint and that each layer is modeled as a REST resource addressable with a unique URL and id. To test the viability of the approach, the integration of the Portrayal Service with the popular Leaflet map client was successfully implemented, by overlaying a portrayal service layer on an OpenStreetMap base layer using reprojection and rendering the layer with custom styles. For the demonstration, symbolizer identifiers were used to refer to the styles (see Figure 4).