Enriching GeoJSON Data to Render a Map of Smart City IoT Sensors


Learn how you can build a smart city with IoT sensors and GeoJSON.


The City of San Diego deployed the world’s largest smart city platform where thousands of streetlights around the city have been equipped with IoT sensors. These sensors collect metadata that can be fetched from a set of public access web services providing traffic, pedestrian flow, parking, and environment data.

It can be hard to get your head around how immense this network is until you look at it on a map. Let’s look at how to do that, first fetching and scrubbing the data with Python and then using JavaScript, Tangram, and HERE XYZ to visualize the results.

Smart Streetlights of San Diego

CityIQ IoT Platform

An asset is a physical thing and, in the context of CityIQ, refers to streetlights. Each asset has sensor nodes, including a camera to capture video and audio along with environmental sensors to capture things like temperature, humidity, and pressure. To maintain privacy, the assets also include an edge device with computing capabilities to process the data (computer vision) and store metadata about what was captured in the cloud. It is this metadata to which the city provides free public access. You can learn more from the San Diego Sustainability website.

There are two initial steps before accessing the valuable asset data:

  1. Get an OAuth token
  2. Get the Asset metadata

Get an OAuth Token

Here’s an example in Python for authenticating and retrieving the OAuth token.

import json
import base64
import requests

def get_client_token(client, secret):
    uri = 'https://auth.aa.cityiq.io/oauth/token'
    # Encode to bytes for b64encode, then back to str for the header (Python 3)
    credentials = base64.b64encode((client + ':' + secret).encode()).decode()
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cache-Control': 'no-cache',
        'Authorization': 'Basic ' + credentials
    }
    params = {
        'grant_type': 'client_credentials'
    }

    response = requests.post(uri, headers=headers, params=params)
    return json.loads(response.text)['access_token']

def main():
    token = get_client_token('PublicAccess', 'uVeeMuiue4k=')


Get the Asset Metadata

So, what do you do with that token? You use it in any subsequent requests such as fetching the locations of assets. Here’s a bit more Python to demonstrate this second step.

def get_assets(token):
    uri = 'https://sandiego.cityiq.io/api/v2/metadata/assets/search'
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Predix-Zone-Id': 'SD-IE-TRAFFIC',
        'Authorization': 'Bearer ' + token
    }

    # Bounding box for San Diego camera locations because these are the
    # nodes that have traffic, parking, and pedestrian data
    params = {
        'bbox': '33.077762:-117.663817,32.559574:-116.584410',
        'page': 0,
        'size': 200,
        'q': 'assetType:CAMERA'
    }
    response = requests.get(uri, headers=headers, params=params)
    return json.loads(response.text)

def main():
    token = get_client_token('PublicAccess', 'uVeeMuiue4k=')
    assets = get_assets(token)


The first thing to note about the response is that it only includes 200 assets out of more than 11k. You’ll need to make multiple calls to fetch the entire list of assets or reduce the bounding box of the search area. The second thing is the content itself, which is an array of dictionaries that look like this:

        {
            "assetType": "CAMERA",
            "mediaType": "IMAGE",
            "coordinates": "32.71573892:-117.133679",
            "parentAssetUid": "0b0e643f-473d-482b-927e-b1e7e7a6ec2c",
            "eventTypes": [
                "PKIN",
                "PKOUT",
                "PEDEVT",
                "TFEVT"
            ],
            "assetUid": "049e6af3-6865-4c80-a0a3-46218c859fde"
        }

The event types indicate the kinds of data available from this particular node, including PKOUT and PKIN (parking out and in, respectively) for a particular location. There are also traffic (TFEVT) and pedestrian (PEDEVT) events that can be interesting.
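As an illustration, filtering the asset list down to nodes that report a given kind of event is a one-liner. This is a sketch: `assets_with_events` is a hypothetical helper operating on the asset dictionaries shown above.

```python
def assets_with_events(assets, wanted):
    # Keep only the assets whose eventTypes overlap the wanted set.
    return [a for a in assets if wanted & set(a.get('eventTypes', []))]

# e.g. only the nodes that report parking events:
# parking = assets_with_events(assets, {'PKIN', 'PKOUT'})
```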

Now that we have access to the assets, we need to do a bit more cleanup.
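One way to handle the 200-asset page limit mentioned earlier is a small paging loop. This is a sketch: `fetch_page` is a stand-in for a function that issues the same request as `get_assets` with the given `page` and `size` parameters.

```python
def fetch_all(fetch_page, page_size=200):
    """Collect results page by page until a short page signals the end."""
    assets, page = [], 0
    while True:
        batch = fetch_page(page, page_size)
        assets.extend(batch)
        if len(batch) < page_size:
            return assets
        page += 1
```

In practice, `fetch_page` would issue the GET from `get_assets` with the `page` parameter set to its first argument.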

GeoJSON and Reverse Geocoding

One of the reasons for choosing Python in this exercise so far is the ease of editing and transforming data. To display this data on a map, it’s easier to transform it into something like GeoJSON, which is a standardized specification for representing geospatial objects in JSON. It would look something like this:

{
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [
            -117.2642206,
            32.81047216
        ]
    },
    "properties": {
        "assetType": "CityIQ Camera Node",
        "street": "La Jolla Hermosa Ave",
        "coords": "32.81047216:-117.2642206",
        "assetUid": "000b2365-5309-422a-9be6-1b7127ca18db"
    }
}

As you can see from this example, an object in GeoJSON is called a Feature and has geometry associated with it, in this case, a point (longitude comes first). It also has a set of properties which is just a dictionary of any relevant key-value pairs.
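One detail worth calling out: CityIQ packs its coordinates field as "lat:lon", while GeoJSON positions are [lon, lat]. A small helper for the conversion might look like this (`to_geojson_coords` is a hypothetical name):

```python
def to_geojson_coords(coord_string):
    # CityIQ gives "lat:lon"; GeoJSON positions put longitude first.
    lat, lon = (float(c) for c in coord_string.split(':'))
    return [lon, lat]

to_geojson_coords("32.81047216:-117.2642206")  # [-117.2642206, 32.81047216]
```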

It was a small exercise to take the JSON response from the services above and split the location into its corresponding latitude and longitude. I also did one other bit of data enrichment here with an operation called reverse geocoding. Given a latitude and longitude, reverse geocoding looks up a place identifier for that location, such as a street address. I used a HERE reverse geocoder to identify the name of the street. This lets me index not just the coordinates but also build a filter later to identify assets by street name.

Here’s an example:

import json
import requests
import geojson

def reverse_geocode(lat, lon, app_id_here, app_code_here):
    uri = 'https://reverse.geocoder.api.here.com/6.2/reversegeocode.json'
    params = {
        'app_id': app_id_here,
        'app_code': app_code_here,
        'prox': str.join(',', [lat, lon]),
        'mode': 'retrieveAddresses',
        'maxresults': 10
    }

    response = requests.get(uri, params=params)
    return json.loads(response.text)

def get_street_name(lat, lon, app_id_here, app_code_here):
    results = reverse_geocode(lat, lon, app_id_here, app_code_here)
    street = None
    for a in results['Response']['View'][0]['Result']:
        if 'Street' in a['Location']['Address']:
            street = a['Location']['Address']['Street']
            return street

    print("No address found for %s,%s" % (lat, lon))
    return street

def enrich_feature(asset, app_id_here, app_code_here):
    lat, lon = asset['coordinates'].split(':')
    # GeoJSON positions put longitude first
    point = geojson.Point(coordinates=(float(lon), float(lat)))

    props = {}
    props['coords'] = lat + ':' + lon
    props['street'] = get_street_name(lat, lon, app_id_here, app_code_here)
    props['assetUid'] = asset['assetUid']
    props['assetType'] = 'CityIQ Camera Node'

    return geojson.Feature(geometry=point, properties=props)

# loop over data to make a list of enriched features
# for each asset then write it to disk

with open('assets.geojson', 'w') as output:
    geojson.dump(geojson.FeatureCollection(features), output)

Aggregating all the features and finally calling geojson.dump() to write them to a file gives a useful GeoJSON dataset.
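The aggregation loop elided in the comment above can be sketched with plain dictionaries, which serialize the same way as the geojson library's objects; the reverse-geocoded street property is left out here so the snippet stands alone:

```python
import json

def build_collection(assets):
    # Wrap one Feature per asset in a FeatureCollection, which is what a
    # valid GeoJSON file expects at the top level.
    features = []
    for asset in assets:
        lat, lon = asset['coordinates'].split(':')
        features.append({
            'type': 'Feature',
            'geometry': {'type': 'Point',
                         'coordinates': [float(lon), float(lat)]},
            'properties': {'assetUid': asset['assetUid'],
                           'assetType': 'CityIQ Camera Node'},
        })
    return {'type': 'FeatureCollection', 'features': features}

# Example with one asset record shaped like the search response:
assets = [{'coordinates': '32.71573892:-117.133679',
           'assetUid': '049e6af3-6865-4c80-a0a3-46218c859fde'}]
with open('assets.geojson', 'w') as output:
    json.dump(build_collection(assets), output)
```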


As datasets get large, it becomes challenging to render them. I used two tools for this: Tangram and HERE XYZ.

Tangram is a real-time WebGL map renderer for vector data. Instead of splitting geospatial data into raster images (like PNGs) for each region, vector tiles give far more flexibility to customize style and presentation. Combined with Leaflet, these two open-source libraries deliver the panning-and-zooming experience many people expect from web maps today.

How do you get vector tile data? That’s where HERE XYZ is valuable. With the free tier, you can upload GeoJSON data and fetch vector tiles with the API from what is called a space. The quick start looks something like this, which also tags each feature by its street name:

here xyz create -t 'san-diego-streetlights'
here xyz upload xyz-space-id -f assets.geojson -p street

There’s a tutorial called Using the HERE XYZ CLI, which explains how that works so I won’t go into more detail here.

To get started quickly with Tangram, run npx tangram-make mymap xyz-space-id xyz-token, which will bootstrap you with a skeleton Tangram web map. The following files are initialized into a folder with the name you choose:

├── index.css
├── index.html
├── index.js
└── scene.yaml

If you run a web server, like python -m http.server or Node.js's http-server, from this directory, you should be able to see a map running locally in your browser. By default, it’s centered somewhere in Seattle, but if you pan and zoom down to San Diego, you can find points for the streetlight locations pulled from the HERE XYZ space, layered on top of OpenStreetMap tiles.

Look and Feel

If you open the scene.yaml from this folder in a browser-based tool called Tangram Play, you can change colors and other styling to suit your preferences.

Tangram and HERE XYZ can be a powerful toolset for viewing data on a map, so it’s worth spending some time learning them for applications like this, where data that includes a latitude and longitude is much easier to understand visually.

You can find the source code and live demo if you want to reproduce the final result.


Opinions expressed by DZone contributors are their own.
