ArubaOS 8 API: AP Database

In my previous posts about the ArubaOS API, I’ve given a general framework for pulling data from the AOS Mobility Conductor or a Mobility Controller. Today I’m going to show how to retrieve the AP database and dump it into a CSV file, which you can then open in Excel or anything else and work all kinds of magic (yes, I know, Excel is not a database engine, but it still works rather well with tabular data).

The long-form AP database command (show ap database long) provides lots of useful information about the APs in the system:

  • AP Name
  • AP Group
  • AP Model
  • AP IP Address
  • Status (and uptime if it’s up)
  • Flags
  • Switch IP (primary AP Anchor Controller)
  • Standby IP (standby AAC)
  • AP Wired MAC Address
  • AP Serial #
  • Port
  • FQLN
  • Outer IP (Public IP when it is a RAP)
  • User

This script will take the human-readable uptime string (“100d:12h:34m:28s”) and convert that to seconds so uptime can be sorted in your favorite spreadsheet. It will also create a grid of all the AP flags so that you can sort/filter on those after converting the CSV to a data table.
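For example, 100d:12h:34m:28s works out to 100×86400 + 12×3600 + 34×60 + 28 = 8,685,268 seconds.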

The code is extensively commented, so you should be able to follow along. It’s also available on GitHub.

#!/usr/bin/python3

# ArubaOS 8 AP Database to CSV output via API call
# (c) 2021 Ian Beyer, Aruba Networks <canerdian@hpe.com>
# This code is provided as-is with no warranties. Use at your own risk. 

import requests
import json
import csv
import sys
import warnings
import xmltodict


aosDevice = "1.2.3.4"
username = "admin"
password = "password"
httpsVerify = False

outfile="outputfilename.csv"


#Set things up

if not httpsVerify :
    warnings.filterwarnings('ignore', message='Unverified HTTPS request')

baseurl = "https://"+aosDevice+":4343/v1/"


headers = {}
payload = ""
cookies = ""

session=requests.Session()
## Log in and get session token

loginparams = {'username': username, 'password' : password}
response = session.get(baseurl+"api/login", params = loginparams, headers=headers, data=payload, verify = httpsVerify)
if response.status_code == 200 :

    jsonData = response.json()['_global_result']
    #print(jsonData['status_str'])
    sessionToken = jsonData['UIDARUBA']

else :
    sys.exit("Login Failed")

reqParams = {'UIDARUBA':sessionToken}

def showCmd(command, datatype):
    showParams = {
        'command' : 'show '+command,
        'UIDARUBA':sessionToken
            }
    response = session.get(baseurl+"configuration/showcommand", params = showParams, headers=headers, data=payload, verify = httpsVerify)
 
    if datatype == 'JSON' :
        # Returns the parsed JSON response
        returnData = response.json()
    elif datatype == 'XML' :
        # Returns XML as a dict - 'my_xml_tag3xxx' is a placeholder; use the root tag of the command's XML output
        returnData = xmltodict.parse(response.content)['my_xml_tag3xxx']
    elif datatype == 'Text' :
        # Returns the raw CLI output as a list of lines
        returnData = response.json()['_data']
    else :
        returnData = None
    return returnData

apdb=showCmd('ap database long', 'JSON')

# This is the list of status flags in 'show ap database long'

apflags=['1','1+','1-','2','B','C','D','E','F','G','I','J','L','M','N','P','R','R-','S','U','X','Y','c','e','f','i','o','s','u','z','p','4']

# Create file handle and open for write. newline='' keeps the csv module from double-spacing rows on Windows.
with open(outfile, 'w', newline='') as csvfile:
 write=csv.writer(csvfile)
 
 # Get list of data fields from the returned list
 fields=apdb['_meta']
 
 # Add new fields for parsed Data
 fields.insert(5,"Uptime")
 fields.insert(6,"Uptime_Seconds")
 
 # Add fields for expanding flags
 for flag in apflags:
  fields.append("Flag_"+flag)
 write.writerow(fields)
 
 # Iterate through the list of APs
 for ap in apdb["AP Database"]:
 
   # Parse Status field into status, uptime, and uptime in seconds
   utseconds=0
   ap['Uptime']=""
   ap['Uptime_Seconds']=""
   
   # Split the status field on a space - if anything other than "Up", it will only contain one element, first element is status description. 
   status=ap['Status'].split(' ')
   ap['Status']=status[0]

   # Additional processing of the status field if the AP is up as it will report uptime in human-readable form in the second half of the Status field we just split
   if len(status)>1:
    ap['Uptime']=status[1]

     #Split the Uptime field into its time components, strip off the trailing character, multiply by the requisite number of seconds, and tally it up.
    timefields=status[1].split(':')
     # If by some stroke of luck you have an AP that's been up for over a year, you might have to add a case here - I haven't seen how the controller presents uptime at that point
    if len(timefields)>3 :
        days=int(timefields.pop(0)[0:-1])
        utseconds+=days*86400
    if len(timefields)>2 :
        hours=int(timefields.pop(0)[0:-1])
        utseconds+=hours*3600
    if len(timefields)>1 :
        minutes=int(timefields.pop(0)[0:-1])
        utseconds+=minutes*60
    if len(timefields)>0 :
        seconds=int(timefields.pop(0)[0:-1])
        utseconds+=seconds
    ap['Uptime_Seconds']=utseconds

   # FUN WITH FLAGS
   # Bust apart the flags into their own fields 
   for flag in apflags:

    # Set field to None so that it exists in the dict
    ap["Flag_"+flag]=None
    
    # Check to see if the flags field contains data
    if ap['Flags'] != None :

     # Iterate through the list of possible flags and mark that field with an X if present
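     # (Note: this is a simple substring check, so '1' will also match '1+' or '1-')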
     if flag in ap['Flags'] :
      ap["Flag_"+flag]="X"
   
   # Start assembling the row to write out to the CSV, and maintain order and tranquility. 
   datarow=[]

   # Iterate through the list of fields used to create the header row and append each one
   for f in fields:
    datarow.append(ap[f])

   # Put it in the CSV 
   write.writerow(datarow)

   #Move on to the next AP

# No need to explicitly close the file - the 'with' block handles that

## Log out and remove session

response = session.get(baseurl+"api/logout", verify=httpsVerify)

if response.status_code == 200 :

    # Session is closed; discard the now-invalid token
    del sessionToken

else :
    del sessionToken
    sys.exit("Logout failed")

The human-readable output of show ap database gives you a legend of what the flags mean, but the API call does not, so in case you want a handy reference, here it is as a Python dict that you can easily adapt. (GitHub)

sheldon={
'1':'802.1x authenticated AP use EAP-PEAP',
'1+':'802.1x use EST',
'1-':'802.1x use factory cert',
'2':'Using IKE version 2',
'B':'Built-in AP',
'C':'Cellular RAP',
'D':'Dirty or no config',
'E':'Regulatory Domain Mismatch',
'F':'AP failed 802.1x authentication',
'G':'No such group',
'I':'Inactive',
'J':'USB cert at AP',
'L':'Unlicensed',
'M':'Mesh node',
'N':'Duplicate name',
'P':'PPPoe AP',
'R':'Remote AP',
'R-':'Remote AP requires Auth',
'S':'Standby-mode AP',
'U':'Unprovisioned',
'X':'Maintenance Mode',
'Y':'Mesh Recovery',
'c':'CERT-based RAP',
'e':'Custom EST cert',
'f':'No Spectrum FFT support',
'i':'Indoor',
'o':'Outdoor',
's':'LACP striping',
'u':'Custom-Cert RAP',
'z':'Datazone AP',
'p':'In deep-sleep status',
'4':'WiFi Uplink'
}
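
If you’d rather have the descriptions than the bare letters, a quick sketch of expanding a flag string against this dict might look like the following (checking the two-character flags first, so '1+' and 'R-' aren’t misread as '1' and 'R'):

def decode_flags(flagstr):
    # Check longest flag tokens first so '1+', '1-', and 'R-' match before '1' and 'R'
    found = []
    for f in sorted(sheldon, key=len, reverse=True):
        if f in flagstr:
            found.append(sheldon[f])
            flagstr = flagstr.replace(f, '')
    return found

print(decode_flags('Uc'))   # ['Unprovisioned', 'CERT-based RAP']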

Location, Location, Location, Part Deux

Yesterday, I posted about leveraging the Meridian API to get a list of placemarks into a CSV file. Today, we’ll take that one step further and go the other way – because bulk creating/updating placemarks is no fun by hand.

For instance, in this project, I created a bunch of placemarks called “Phone Room” (didn’t know what else to call them at the time). There were several of these on multiple floors. To rename them in the Meridian Editor, I would have to click on each one, rename it, and save.

So, once I got guidance on what they were to be called, I fired up Excel, opened the CSV that I created yesterday, made the changes in bulk, and then ran them back into Meridian the other way, changing the name, the type, and the icon.

Sounds easy, right?

Not so much. I ran into some trouble when I opened it in Excel: all my map IDs had been converted to scientific notation.

This is because Excel is stupid, but trying to be smart. It sees those as Really Big Numbers and converts them to scientific notation, because the largest integer it can store exactly is 15 digits. And of course, these map IDs are… 16 digits. I can’t just convert them back with integer number formatting, either, because Excel then keeps the first 15 digits and adds a zero, which breaks the whole thing. Excel pulls similar shenanigans when parsing interface names from AOS or Cisco that look like “2/1/4”, which it assumes is a date. Excel assumes everything that looks vaguely numeric must be a number – it is a spreadsheet, after all, and spreadsheets are made for numbers, even if people abuse them all the time as a poor substitute for a database.

So, this means you either have to make the changes directly in the CSV with a text editor, or find another spreadsheet application that doesn’t mangle the map IDs. Fortunately for us Mac users, Apple’s spreadsheet application (“Numbers”) handles this just fine. So make the changes, export to CSV, and run it all back into the API. (You can also name the file .txt and manually import it into Excel, specifying that column as text, but that’s tedious, which is what we’re trying to avoid.)

I’ve built a bit of smarts into this script, since I don’t want to break things on Meridian’s end (although Meridian does a great job of sanity checking and sanitizing the input data from the API). The first thing it does is grab a list of available maps for the location. Then it goes through all the lines in the spreadsheet, converts each one to the JSON payload that Meridian wants, and checks to see if there’s existing data in the id field. If there is, it assumes that this is an update (it does not, however, check whether the data already matches the existing placemark, since Meridian does that when you update). If there is no ID, it assumes that this is a new object to be created and verifies that it has the minimum required information (name, map, and x/y position). In both cases, it checks that the map field in the object is a map ID that exists at this location (this is how I found out that Excel was mangling them).

Running the script spits out this for each row in the CSV that it considers an update:

Update object id XXXXXXXXXXXXXXX_5666694473318400
object update passed initial sanity checks and will be placed on 11th Floor.
Updating existing object with payload:
{
  "id": "XXXXXXXXXXXXXXX_5666694473318400",
  "name": "Huddle Space",
  "map": "XXXXXXXXXXXXXXX",
  "type": "conference_room",
  "type_name": "Conference Room",
  "color": "f2af1d",
  "x": "177.20516072322900",
  "y": "597.6184874989240",
  "latitude": 41.94822,
  "longitude": -87.65552,
  "area": "",
  "description": "",
  "phone": "",
  "email": "",
  "url": ""
}
Object ID XXXXXXXXXXXXXXX_5666694473318400 named Huddle Space updated on map XXXXXXXXXXXXXXXX

If it doesn’t find an id and determines that an object needs to be created, it goes down like this:

Create new object: 
object create passed initial sanity checks and will be placed on 11th Floor.
Creating new object with payload:
{
  "name": "Test Placemark",
  "map": "XXXXXXXXXXXXXXXX",
  "type": "generic",
  "type_name": "Placemark",
  "color": "f2af1d",
  "x": "400",
  "y": "600",
  "latitude": "",
  "longitude": "",
  "area": "",
  "description": "",
  "phone": "",
  "email": "",
  "url": "https://arubanetworks.,com"
}
Object not created. Errors are
{
  "url": [
    "Enter a valid URL."
  ]
}

As you can see here, I made a typo in the URL field, and the data returned from the API call lists the fields that contain an error. If the call is successful, it returns an ID, which the script checks for to verify success. The response from a successful API call looks like this :

{
  "parent_pane": "",
  "child_pane_ne": "",
  "child_pane_se": "",
  "child_pane_sw": "",
  "child_pane_nw": "",
  "left": -1,
  "top": -1,
  "width": -1,
  "height": -1,
  "next_pane": null,
  "next_id": "5484974943895552",
  "modified": "2021-08-11T15:12:52",
  "created": "2021-08-11T15:12:52",
  "id": "XXXXXXXXXXXXXXXX_5484974943895552",
  "map": "XXXXXXXXXXXXXXXX",
  "x": 400.0,
  "y": 600.0,
  "latitude": 41.94822,
  "longitude": -87.65552,
  "related_map": "",
  "name": "Test Placemark",
  "area": null,
  "hint": null,
  "uid": null,
  "links": [],
  "type": "generic",
  "type_category": "Markers",
  "type_name": "Placemark",
  "color": "88689e",
  "description": "",
  "keywords": null,
  "phone": "",
  "email": "",
  "url": "https://arubanetworks.com",
  "custom_1": null,
  "custom_2": null,
  "custom_3": null,
  "custom_4": null,
  "image_url": null,
  "image_layout": "widescreen",
  "is_facility": false,
  "hide_on_map": false,
  "landmark": false,
  "feed": "",
  "deep_link": "com.arubanetworks.aruba-meridian://ZZZZZZZZZZZZZZZZ/placemarks/XXXXXXXXXXXXXXXX_5484974943895552",
  "is_disabled": false,
  "category_ids": [],
  "categories": []
}

Of course, the script doesn’t have to spit out all that output, but it’s handy to follow what’s going on. Comment out the print lines if you want it to shut up.

So, without further ado, here’s the script. This has not been debugged extensively, so use at your own risk. If you break your environment, you probably should have tested it in the lab first.

#!/usr/bin/python3

# Aruba Meridian Placemark Import from CSV
# (c) 2021 Ian Beyer
# This code is not endorsed or supported by HPE

import json
import requests
import csv
import sys

auth_token = '<please insert a token to continue>'
location_id = 'XXXXXXXXXXXXXXXX'

baseurl = 'https://edit.meridianapps.com/api/locations/'+location_id


header_base = {'Authorization': 'Token '+auth_token}

def api_call(method,endpoint,headers,payload):
	response = requests.request(method, baseurl+endpoint, headers=headers, data=payload)
	resp_json = json.loads(response.text)
	return(resp_json)

#File name argument #1
try:
	fileName = str(sys.argv[1])
except IndexError:
	print("Usage: supply the CSV file name as the first argument")
	exit()


# Get available maps for this location for sanity check
maps={}

# print("Available Maps: ")
for floor in api_call('GET','/maps',header_base,{})['results']:
	maps[floor['id']] = floor['name']
# 	print (floor['name']+ ": "+ floor['id'])



import_data_file = open(fileName, 'rt')

csv_reader = csv.reader(import_data_file)
count = 0

objects = []

csv_fields = []

for line in csv_reader:
	placemark = {}

	# Check to see if this is the header row and capture field names
	if count < 1 :
		csv_fields = line
	else:
		# If this is a data row, capture the fields and put them into a dict object
		fcount = 0
		for fields in line:
			objkey = csv_fields[fcount]
			placemark[objkey] = line[fcount]
			fcount += 1

		# Add the placemark object into the object list
		objects.append(placemark)		
	count +=1

#print(json.dumps(objects, indent=2))

import_data_file.close()

#Check imported objects for create or update. If it has an ID, then update. 
for pm in objects:
	task = 'ignore'
	if pm.get('id', "") == "" :
		task = 'create'
		print("Create new object: ")
		# Delete id from payload (if the column exists at all)
		pm.pop('id', None)
	else:
		task = 'update'
		print("Update object id "+ pm['id'])


	# Remove the floor column from the payload, as it's not a valid placemark field (and may not exist)
	pm.pop('floor', None)

	# Check to see if the basics are there before making the API calls
	reject = []
	if pm.get('x', "") == "":
		reject.append("Missing X coordinate")
	if pm.get('y', "") == "":
		reject.append("Missing Y coordinate")
	if pm.get('map', "") == "":
		reject.append("Missing map id")
	if pm.get('name', "") == "":
		reject.append("Missing object name")

	if len(reject)>0:
		#print("object "+ task + " rejected due to missing required data:")
		for reason in reject:
			print(reason)
		task = 'ignore'
	else:
		if maps.get(pm['map']) == None:
			print ("Map ID "+pm['map']+" Not found in available maps. Object will not be created. ")
			task = 'ignore'
		else:
			print("object "+ task + " passed initial sanity checks and will be placed on "+ maps[pm['map']] +".")


	#print ("Object Payload:")	
	#print (json.dumps(pm, indent=2))

	method = 'GET'
	
	if task == 'create':
		#print ("Creating new object with payload:")	
		#print (json.dumps(pm, indent=2))
		method = 'POST'
		ep = '/placemarks'
		result = api_call(method,ep,header_base,pm)

		if result.get('id') != None:
			print ("Object ID "+result['id']+" named "+result['name']+ " created on map "+ result['map'])
		else:
			print ("Object not created. Errors are")
			print (json.dumps(result, indent=2))
	if task == 'update':
		#print ("Updating existing object with payload:")	
		#print (json.dumps(pm, indent=2))
		method = 'PATCH'
		ep = '/placemarks/'+pm['id']
		result = api_call(method,ep,header_base,pm)

		if result.get('id') != None:
			print ("Object ID "+result['id']+" named "+result['name']+ " updated on map "+ result['map'])
		else:
			print ("Object not updated. Errors are")
			print (json.dumps(result, indent=2))


It’s also worth noting here that the CSV structure and field order aren’t especially important, since the script reads the header row to get the keys for the dict – as long as you have the minimum data (name/map/x/y), you can create a table of new objects from scratch. Any valid field can be used (although categories requires some additional structure).
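
For instance, a minimal CSV for creating two placemarks from scratch could look like this (the map ID is a made-up placeholder – use a real one from your location; the empty id column tells the script these rows are creates):

id,name,map,x,y,type,type_name
,Phone Room North,1234567890123456,400,600,generic,Placemark
,Phone Room South,1234567890123456,800,600,generic,Placemark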

Questions/comments/glaring errors? The comment section is right here. Scripts on this page can also be found on GitHub.

Location, Location, Location

It seems fitting that the week in which I became the first to pass the CWIDP (Certified Wireless IoT Design Professional) certification would be one where I happened to be onsite doing a BLE location project with Aruba Meridian.

While the web-based editor is great, it is mildly annoying that one of the things it can’t do is copy and paste placemarks between floors, which would be really handy when you’re deploying an office building that has almost the same layout on every floor. For this, you have to dig into the Meridian API.

By using the API, you can spit out a list of placemarks in JSON (the easiest way to do this is in Postman, with a GET to {{BaseURL}}/locations/{{LocationID}}/placemarks), grab the JSON objects for the placemarks you wish to copy from the output, update the map ID (and any other data) in each one, and then make a POST back to the same endpoint with a payload containing the JSON list of objects you want to create. Presto – you’ve cloned all the placemarks from one floor to another. A rough sketch of that round trip in Python with the requests library follows (the token and map IDs are placeholders, and it copies only the required fields plus the name):
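
import requests

token = 'XXXXXXXXXXXXXXXX'        # your Meridian API token
location = 'XXXXXXXXXXXXXXXX'     # your location ID
src_map = '1111111111111111'      # placeholder: map ID of the source floor
dst_map = '2222222222222222'      # placeholder: map ID of the destination floor

base = 'https://edit.meridianapps.com/api/locations/' + location
hdrs = {'Authorization': 'Token ' + token}

# Pull every placemark at the location, keep the ones on the source floor
results = requests.get(base + '/placemarks', headers=hdrs).json()['results']
clones = [{'name': pm['name'], 'type': pm['type'], 'map': dst_map,
           'x': pm['x'], 'y': pm['y']}
          for pm in results if pm['map'] == src_map]

# POST the list back in one call - the payload can be a list of objects
resp = requests.post(base + '/placemarks', headers=hdrs, json=clones)
print(resp.status_code, resp.text)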

Note that point coordinates are relative to the map image, so cloning from one map to another can bite you in the rear if the maps aren’t properly aligned. (If you want to get really clever, you could manually create a placemark on every floor whose sole purpose is to align the floors in code – much like an alignment point in Ekahau. When you generate a placemark list, you can calculate any offsets between them and apply those to the coordinates before sending them back to the API, as in the sketch below. But this trick could, and probably will, be a whole post of its own.)
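
Continuing the sketch above, the offset math itself is trivial – assume a hypothetical alignment placemark named “ALIGN” dropped at the same physical spot on both floors:

# Coordinates of the hypothetical 'ALIGN' placemark on each floor
src_align = (2152.0, 499.6)   # on the source map
dst_align = (2130.5, 512.3)   # on the destination map

dx = dst_align[0] - src_align[0]
dy = dst_align[1] - src_align[1]

# Shift each cloned placemark before POSTing it to the destination map
for pm in clones:
    pm['x'] += dx
    pm['y'] += dy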

The structure of a placemark object from a GET:

       {
            "parent_pane": "",
            "child_pane_ne": "",
            "child_pane_se": "",
            "child_pane_sw": "",
            "child_pane_nw": "",
            "left": -1,
            "top": -1,
            "width": -1,
            "height": -1,
            "next_pane": null,
            "next_id": "XXXXXXXXXXXXXXXXXX",
            "modified": "2021-08-06T22:57:29",
            "created": "2021-08-06T15:28:13",
            "id": "XXXXXXXXXX_XXXXXXXXXX",
            "map": "XXXXXXXXXXXXXXXX",
            "x": 2152.0189227788246,
            "y": 499.59448403469816,
            "latitude": 41.94822,
            "longitude": -87.65552,
            "related_map": "",
            "name": "Pitch Room",
            "area": "2132.722,480.101,2228.177,480.498,2230.003,634.737,2132.412,634.722",
            "hint": null,
            "uid": null,
            "links": [],
            "type": "conference_room",
            "type_category": "Office",
            "type_name": "Conference Room",
            "color": "596c7c",
            "description": null,
            "keywords": null,
            "phone": null,
            "email": null,
            "url": null,
            "custom_1": null,
            "custom_2": null,
            "custom_3": null,
            "custom_4": null,
            "image_url": null,
            "image_layout": "widescreen",
            "is_facility": false,
            "hide_on_map": false,
            "landmark": false,
            "feed": "",
            "deep_link": null,
            "is_disabled": false,
            "category_ids": [
                "XXXXXXXXXXXXXXXX"
            ],
            "categories": [
                {
                    "id": "XXXXXXXXXXXXXXXX",
                    "name": "Conference Room"
                }
            ]
        }

However, when creating a new placemark, you don’t need all these fields… The only ones that are absolutely required are map, type, x, and y. (I haven’t tried sending an id along with a POST, so I don’t know if it will ignore it or reject it.) I’ve used Postman variables here for the map and name, because then I could just change those in the environment variables and resubmit to put the placemark on multiple floors. The best part about the POST method is that the payload doesn’t have to be a single JSON object – it can be a list of them, by simply putting multiple objects inside square brackets.

{
            "map": "{{ActiveMapID}}",
            "x": 2152.0189227788246,
            "y": 499.59448403469816,
            "latitude": 41.94822,
            "longitude": -87.65552,
            "related_map": "",
            "name": "{{ActiveFloor}} Pitch Room",
            "area": "2132.722,480.101,2228.177,480.498,2230.003,634.737,2132.412,634.722",
            "hint": null,
            "uid": null,
            "links": [],
            "type": "conference_room",
            "type_category": "Office",
            "type_name": "Conference Room",
            "color": "596c7c",
            "description": null,
            "keywords": null,
            "phone": null,
            "email": null,
            "url": null,
            "custom_1": null,
            "custom_2": null,
            "custom_3": null,
            "custom_4": null,
            "image_url": null,
            "image_layout": "widescreen",
            "is_facility": false,
            "hide_on_map": false,
            "landmark": false,
            "feed": "",
            "deep_link": null,
            "is_disabled": false,
            "category_ids": [
                "5095323364098048"
            ],
            "categories": [
                {
                    "id": "5095323364098048",
                    "name": "Conference Room"
                }
            ]
        }

And if you want to update an existing placemark, simply make a PATCH call to the API with the existing placemark’s ID, and whatever fields you wish to update. Like with the POST call, you can send a payload that is a list of objects. This is a great way to batch update URLs or names.
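
For example, a batch rename payload could be as small as this (the IDs are placeholders):

[
    { "id": "XXXXXXXXXXXXXXXX_1111111111111111", "name": "Phone Room North" },
    { "id": "XXXXXXXXXXXXXXXX_2222222222222222", "name": "Phone Room South" }
]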

It may also come in handy to generate a list of all the placemarks at a given location. So I threw together a handy little Python script that will spit it out into a CSV (and will also look up the map ID and get the floor name for easy reference).

#!/usr/bin/python3

# Aruba Meridian Placemark Export to CSV
# (c) 2021 Ian Beyer
# This code is not endorsed or supported by HPE

import json
import requests
import csv
import sys

auth_token = 'Please Insert Another Token to continue playing'
location_id = 'XXXXXXXXXXXXXXXX'

baseurl = 'https://edit.meridianapps.com/api/locations/'+location_id


header_base = {'Authorization': 'Token '+auth_token}

def api_get(endpoint,headers,payload):
	response = requests.request("GET", baseurl+endpoint, headers=headers, data=payload)
	resp_json = json.loads(response.text)
	return(resp_json)

#File name argument #1
try:
	fileName = str(sys.argv[1])
except:
	print("Exception: Enter file to use")
	exit()

maps={}

for floor in api_get('/maps',header_base,{})['results']:
	maps[floor['id']] = floor['name']

placemarks=[]

# Grab every placemark at the location in one call
pmlist=api_get('/placemarks',header_base,{})
# NOTE: If pmlist['next'] is not null, then there are additional pages - this doesn't process those yet. 
for pm in pmlist['results']:
	# Add a floor name column for easier reading. 
	pm['floor']=maps.get(pm['map'],'')
	placemarks.append(pm)

# Sort by floor for easy grouping
placemarks.sort(key=lambda p: p['floor'])



data_file = open(fileName, 'w', newline='')
csv_writer = csv.writer(data_file)
count = 0

csv_fields = list(placemarks[0].keys())

print(csv_fields)

csv_writer.writerow(csv_fields)

for pm in placemarks:
	data=[]
	for col in csv_fields:
		data.append(pm[col])
	# Write this row to the CSV file
	csv_writer.writerow(data)
	count += 1
data_file.close()

print ("Exported "+str(count)+" placemarks.")

Once you have this list in a handy spreadsheet form, then you can make quick and easy edits, and then run the process the other way.

Scripts on this page can also be found on GitHub.

EC2 Monitoring with Raspberry Pi

I’ve been doing a little Raspberry Pi hacking lately, and put together a neat way to have physical status LEDs on your desk for things like EC2 instances.

The Hardware

In its most basic form, you can simply hook up an LED and a current-limiting resistor between a ground line and a GPIO line on the Pi, but that doesn’t scale especially well – you can run out of GPIO lines pretty quickly, especially if you’re doing different colors for each status. Plus, it’s not overly elegant.

The solution? Unicorns!

No, really. The fine folks at Pimoroni in Sheffield, UK have made a lovely little HAT device for the Pi called a Unicorn. Its primary purpose is lots of blinky lights to make pretty rainbows and stuff, hence the name. However, this HAT is a 4×8 (or an 8×8) array of RGB LEDs, all driven as a serial chain from a single GPIO line, which doesn’t eat up a pin per LED (good thing, otherwise it would require 96 lines). The unicornhat library (python3-unicornhat) is available for Python 2 and Python 3 in the Raspbian repo. When installed onto the Pi, the Unicorn will fit within a standard Raspberry Pi case.
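
If you want a quick sanity check that the HAT is working before writing any real code, a minimal test looks like this:

import unicornhat as unicorn

unicorn.brightness(0.5)
unicorn.set_pixel(0, 0, 255, 0, 0)   # x, y, r, g, b - one red pixel in the corner
unicorn.show()                       # nothing lights up until show() is called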

The Code

This is my first foray into Python, so there was a bit of a learning curve. If you’re familiar with object-oriented code concepts, this should be easy for you. Python is much more parsimonious with punctuation than PHP or Perl are.

For accessing the EC2 data, we’ll need Amazon’s boto3 library, also available in the Raspbian repo (python3-boto3). One area where boto3 is really nice is that the data comes back directly as a dict object (what users of other languages would call an associative array, or a hash for you old-timers who use Perl), so you don’t have to mess with converting JSON or XML into an object structure, and it can be manipulated like any other dict. AWS returns a fairly complex object, so you have to dig into it via a few nested loops to extract the data you’re after.
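
As a sketch of what that digging looks like, the instance states can be pulled out of the describe_instances() response with a nested comprehension:

import boto3

ec2 = boto3.client('ec2')
response = ec2.describe_instances()

# Reservations -> Instances -> State is the nesting AWS hands back
states = {inst['InstanceId']: inst['State']['Name']
          for res in response['Reservations']
          for inst in res['Instances']}

print(states)   # e.g. {'i-0fa4ea2560aa17ffd': 'running', ...}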

From there, it’s a matter of assigning different RGB values to the states. I chose these ones:

  • stopped: red
  • pending: green
  • running: blue
  • stopping: yellow(ish)

I also discovered that I needed to assign a specific pixel to each instance ID, otherwise they tended to move around a bit depending on what order AWS returned them on a particular request.

Here’s what the second iteration looks like:

import boto3 as aws
import unicornhat as unicorn
import time

# Initialize the Unicorn
unicorn.clear()
unicorn.show()
unicorn.brightness(0.5)

# Create an EC2 object 
ec2 = aws.client('ec2')

# Define colors and positions
color = {}
color['stopped']={'red':255,'green':0,'blue':0}
color['pending']={'red':64,'green':255,'blue':0}
color['running']={'red':32,'green':32,'blue':255}
color['stopping']={'red':192,'green':128,'blue':32}
	
pixel = {}
pixel['i-0fa4ea2560aa17ffd']={'x':0,'y':0}
pixel['i-06b95cd864acb1a8c']={'x':0,'y':1}
pixel['i-0661da0f50ffb604c']={'x':0,'y':2}
pixel['i-063ec151e0f44ef9b']={'x':0,'y':3}
pixel['i-02c514ca567d8a033']={'x':0,'y':4}

# Loop until forever
while True:

	response = ec2.describe_instances()
		
	
	statetable = {}
	resarray = response['Reservations']
	for res in resarray:
		instarray = res['Instances']
		for inst in instarray:
			iid = inst['InstanceId']
			state = inst['State']['Name']
			# print(iid)
			# print(state)
			statetable[iid] = state
	
	
	for ec2inst in statetable:
		# Skip instances without an assigned pixel, and states (e.g. 'terminated')
		# that don't have an assigned color, rather than crashing on a KeyError
		if ec2inst not in pixel or statetable[ec2inst] not in color:
			continue
		x = pixel[ec2inst]['x']
		y = pixel[ec2inst]['y']
		r = color[statetable[ec2inst]]['red']
		g = color[statetable[ec2inst]]['green']
		b = color[statetable[ec2inst]]['blue']
		# print(x,y,r,g,b)
		unicorn.set_pixel(x,y,r,g,b)

	# One show() per polling pass refreshes the whole grid at once
	unicorn.show()


	time.sleep(1)

For the moment, this is just monitoring EC2 status, but I’m going to be adding checks in the near future to do things like ping tests, HTTP checks, etc. Stay tuned.