Working From Home: Internet Access

In my previous post, I went over the basics of working from home. It’s worth noting here that many of these concepts can also be applied to your kids who might be taking school online – they’re teleworking just like you are, and face many of the same challenges. In this and future posts, I’ll be dealing with the tech basics required for a successful and productive home office.

I was originally going to do a single post on all things tech, but it started getting lengthy, so I decided to break it down into a couple of parts. This post will deal specifically with external network connectivity.

The Internet

No surprises here – a decent internet connection is pretty much a given for remote work. One thing that has become apparent during this quarantine period is that a whole lot of people have abysmally bad internet connections at home. I’m hearing horror stories from the trenches – from colleagues and friends who work front-line IT support.

The word “Broadband” is thrown around a lot by ISPs intent on selling you a service package, but what does it really mean? In the United States, the Federal Communications Commission most recently updated its definition of “broadband” in 2015 to mean a connection speed of at least 25Mbps downstream (from your ISP to your house) and 3Mbps upstream (from your house to your ISP). But what do those speeds really mean? The FCC also has a handy guide listing what activities require what level of speed.

So your cable ISP touts its “SuperGigaFast” package with “gigabit” speeds. Sounds great, right? Not so fast. Cable-based ISPs that come into your house via a coaxial cable use a technology called DOCSIS, which has great downstream speeds and (usually) abysmally bad upstream speeds. The cable companies originally designed this technology back in the late 1990s, when internet usage consisted largely of downloading web pages and sending small bits of control data upstream. An asymmetrical connection worked great for most users, and it let the cable companies leverage their existing wiring infrastructure.

Fast forward a couple of decades to 2020, and cloud-based data storage, teleconferencing, and the like mean that you need a lot more upstream speed than you used to. That hasn’t stopped cable companies from selling “gigabit” packages with a paltry 10Mbps upstream connection. When picking an internet service package for teleworking, your upstream speed should be at least 10% of your downstream speed (for example, a 500Mbps downstream plan should come with at least 50Mbps upstream) – if you saturate your upstream link, it will drag down your downstream traffic with it. The asymmetry lets the cable company sell you “gigabit” knowing full well that they’ll never have to deliver on that promise, and the really cheap equipment they usually provide limits your Wi-Fi speeds even further – so they still don’t have to deliver those gigabit speeds they’re charging you for. If you have the option of a symmetrical connection (usually delivered over fiber optic cable), it will be a lot more functional.

Much of what applies to DOCSIS cable connections also applies to DSL connections from the local telephone company. Make sure you have enough upstream bandwidth to do what you need to do. Also beware of any service that has a data cap – working from home can blow through a data cap in a real hurry.

It’s usually worth investing in your own router – the equipment provided by the ISP is, in most cases, absolute junk. AT&T is notoriously bad about this on both their U-Verse DSL and fiber-based services, and they have it configured such that it’s very difficult to use a “real” router with their service.

And in some places, cable, fiber, or DSL aren’t an option, and you’re stuck with either a wireless ISP or cellular.

Hardware

The typical internet connection requires a couple of devices. ISPs and telcos generally refer to this as “Customer Premises Equipment”, or “CPE”.

1950s-era dial telephone using an acoustic coupler modem

The Modem

This is the device that interfaces your ISP’s connection with your home network, usually via an Ethernet connection. The term is short for “modulator/demodulator”, the process of converting a data stream into a series of electrical signals and back. It operates between what we network nerds call “Layer 1” (electrical signals) and “Layer 2” (data link). I posted on network layers in this post from 2018, if you want to get into the details. The modem’s primary function is extending your ISP’s physical network to your house. Before the days of direct internet connections, the data link was established over a telephone line by modulating the data into tones within the narrow audio range supported by the telephone system.

Modems can take many forms, and in many cases, your ISP’s modem is integrated into a single device with a router. In the case of cable, you can usually supply your own. For DSL or fiber service (in the fiber case, the device is usually called an Optical Network Terminal rather than a modem), it’s usually provided by the ISP and you won’t get much choice in the matter, although sometimes it’s possible to request a specific type or model.

Your smartphone also contains a modem that interfaces with the cellular networks – it likely uses LTE (4G); older (3G) phones used CDMA or the GSM-family UMTS/HSPA technologies, and newer (5G) phones use 5G NR, which in most current deployments still leans heavily on LTE. If you need to interface a cellular network to your home network, either as a primary or backup link, there are dedicated cellular modem devices for that (more on those in a moment).

GIF from "The IT Crowd" where Moss shows Jen a small black box, and tells her, "This, Jen, is the Internet"

The Router

This is the device that connects your network to your ISP’s network. It operates at “Layer 3”, which for the vast majority of people means “the internet”. The internet is nothing more than a whole bunch of interconnected networks, and a protocol (the “Internet Protocol”, or “IP”) has been in place for decades specifying how all these networks talk to each other. Each network is connected to other networks by way of a router (also known as a “gateway”). Its job is to look at traffic that comes in and decide where it needs to go next. If it’s for another device on a network the router is directly connected to, it sends it there directly. For something elsewhere on the internet, it hands it to the next router down the line (usually your ISP’s) to deal with and eventually get to where it needs to go. This process usually happens in a matter of milliseconds (you can use the “ping” command to see how long it takes, or “tracert” (Windows)/“traceroute” (everything else) to see the path it takes). The whole idea is that you don’t see what’s happening under the hood.
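
If you want to poke at this yourself, here’s a minimal sketch that simply wraps those two commands (it assumes a Unix-like system where ping and traceroute are installed; on Windows, substitute tracert and ping’s -n count flag):

# Minimal wrapper around the ping and traceroute commands mentioned above.
# Assumes a Unix-like system; on Windows use "tracert" and "ping -n".
import subprocess

def ping(host, count=4):
    # Send a few echo requests and print the round-trip times
    subprocess.run(["ping", "-c", str(count), host], check=False)

def trace(host):
    # Print each router (hop) along the path to the host
    subprocess.run(["traceroute", host], check=False)

ping("example.com")
trace("example.com")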

The term “router” is often misconstrued to mean “Wi-Fi”. That’s largely because the equipment provided by an ISP (or bought off the shelf) usually combines a router, a network switch, and a Wi-Fi access point (and sometimes a modem) in a single box that everyone refers to as “the router”.

Owing to a general shortage of IP addresses, your ISP will assign a single IP address (which is unique on the entire internet!) to your router’s internet-facing connection (the Wide Area Network/WAN interface). Your own devices (on the Local Area Network/LAN interface) will occupy address space defined by RFC1918 as “private” – addresses that cannot be used directly on the internet but can be re-used by anyone. In most cases, your home network will be 192.168.something; the specifics vary from one device to another. The router then performs Network Address Translation (NAT) to move data between the two networks. Most of the time, you don’t need to worry about the details of how it’s set up, but when it comes to troubleshooting, having at least a general awareness of it can be useful.
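
As a quick illustration using Python’s standard ipaddress module (note that is_private also covers a few reserved ranges beyond RFC1918), you can check which side of the NAT boundary an address sits on:

# Check whether addresses are private (LAN-side) or public (internet-routable).
import ipaddress

for addr in ("192.168.1.10", "10.0.0.5", "172.16.4.2", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    if ip.is_private:
        print(addr + ": private - lives behind your router's NAT")
    else:
        print(addr + ": public - routable on the internet")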

3D Illustrated representation of a firewall.

The Firewall

This is a key piece of the network, as it is what decides which traffic is and isn’t allowed. This is critical to providing network security. It is usually integrated into the router. It examines each packet and checks a list of rules (which can be updated multiple times a day to react to ongoing threats) to determine if the packet should be sent along its merry way, or dropped into a deep, dark hole.
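
To make the rule-matching idea concrete, here’s a toy sketch – not any real firewall’s syntax or engine, just an illustration of first-match processing with a default deny:

# Toy illustration of first-match rule processing with a default deny.
# The rule format and the networks/ports here are made up for illustration.
import ipaddress

RULES = [
    # (action, source network, destination port)
    ("allow", "192.168.1.0/24", 443),  # let the LAN reach HTTPS
    ("allow", "192.168.1.0/24", 53),   # and DNS
    ("deny",  "0.0.0.0/0",      23),   # drop telnet from anywhere
]

def check_packet(src_ip, dst_port):
    # Return the action of the first matching rule, or deny by default
    for action, net, port in RULES:
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(net) and dst_port == port:
            return action
    return "deny"  # nothing matched: into the deep, dark hole it goes

print(check_packet("192.168.1.20", 443))  # allow
print(check_packet("203.0.113.9", 23))    # deny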

LAN Party

The Local Area Network

The router is the transition point from your network to the rest of the internet. I’m not going to get into the details of the LAN for the moment (that’s for another post), but this is where you will connect all your equipment, either wirelessly via Wi-Fi, or via a wire to an Ethernet switch.

Single car in a tunnel

Virtual Private Networking (VPNs)

This isn’t really a hardware component, but it’s usually a key piece of any home office (it sometimes uses dedicated hardware, though). The function of a VPN is to connect you to another private LAN located elsewhere (whether in another physical location or just on another part of the network). When working from home, installing a dedicated private network connection between the main office and a home office is cost-prohibitive (although there are some interesting new technologies with 5G that will allow you to connect mobile devices directly to the corporate network, essentially making the corporate network its own cellular carrier).

Enter the VPN – it uses the public internet to establish a connection to the corporate network, building an encrypted tunnel that allows corporate traffic to pass through securely. Sometimes this is an application that runs directly on a computer, establishing the tunnel to that computer alone; sometimes the tunnel is established by the network equipment you have at home, which then presents another LAN for you to connect anything to. In most cases, in order to use bandwidth more efficiently, traffic destined for the internet goes out directly from your router rather than through the tunnel and out from the corporate network. This is known as a “split tunnel”. Some companies, however, choose to pass all traffic through the tunnel in order to put high-powered corporate firewalls in the path – to better protect against malware and data leakage, or simply to filter content.
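
As a purely illustrative sketch (the subnets and the decision logic here are assumptions, not any particular VPN client’s behavior), a split-tunnel routing decision boils down to something like this:

# Toy split-tunnel decision: corporate subnets go through the VPN tunnel,
# everything else goes straight out the local internet connection.
import ipaddress

CORPORATE_SUBNETS = [ipaddress.ip_network(n) for n in ("10.10.0.0/16", "172.20.0.0/16")]

def route_for(destination):
    dst = ipaddress.ip_address(destination)
    if any(dst in net for net in CORPORATE_SUBNETS):
        return "vpn tunnel"     # encrypted, headed back to the corporate network
    return "local internet"     # split off and sent directly from your router

print(route_for("10.10.5.25"))    # vpn tunnel
print(route_for("151.101.1.69"))  # local internet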

As cloud-based services such as Office 365 become more prevalent, VPN connections back to the office are becoming less important.

It’s worth noting that this is very different from the public “VPN” services that claim to offer privacy when accessing the internet. While the underlying technology is similar, all those services do is relocate where you hop onto the internet, sending your traffic through the VPN provider’s network – where they can inspect all of it.

Home Network Equipment

Equipment

A quick rundown of connectivity equipment:

Cellular Modems

If you need to connect to a cellular network, you can use the following:

  • Your smartphone hotspot (easiest in a pinch, can also usually connect to your laptop via a USB cable if you don’t want to or can’t use Wi-Fi)
  • A portable hotspot, sometimes called a “Mi-Fi” or a “Jetpack” (both are brand names for common devices in this category). Many of these can also connect via USB.
  • A USB cellular modem (check your cellular carrier for options)
  • An Ethernet cellular modem or router such as a CradlePoint IBR series device

Some home routers and most enterprise routers will support a USB cellular modem as a WAN connection, either primary or as a backup.

Home Routers

There is a wide variety of these out there, and most of what you can get commercially will do the job better than what the ISP provides. NetGear and Asus both make devices that perform well, but these devices have limited security capabilities. TP-Link and Linksys are cheap, but tend to underperform. Plan on about $200-300 for these types of devices. I’ll get into this a little more when I talk about the LAN side of things.

Many people recommend Ubiquiti equipment, but that’s a lot more complex than I feel is appropriate for non-technical users. If it’s what a managed service provider supplies, then it’s quite adequate, but make sure they’re the ones that have to deal with the technical side of it. If you’re a network nerd, then you already know this stuff.

Enterprise Firewalls

This is where your corporate IT department or managed service provider usually comes into play, providing you with a firewall/router device that is pre-configured for corporate networking and security standards (and that will often establish a dedicated VPN connection as well). These devices come from enterprise networking vendors like Fortinet, Aruba (in the form of a Remote Access Point), Palo Alto, and Cisco/Meraki. They are helpful in a home office because they are generally managed by your MSP or IT department and are essentially plug and play, giving you a secure network connection that is functionally equivalent to being on the network at the office.

You can also purchase your own standalone firewall from these vendors, all of which have a home office model or two in their lineup. They usually come with an annual subscription cost which gives you frequent updates to the security profiles and rules, to adapt to the changing network threat landscape. These will typically provide much better security than a residential gateway device, but are more complex and expensive to operate.

Summary/tl;dr

This got long (which is why I’m breaking tech up into multiple posts), but the bottom line is that your internet connection is a vital piece of the home office puzzle, and it’s one where you’re going to want to spend some time and money getting it right. If you have to go cheap somewhere, this is not the place to do it, but you also don’t need to go overboard.

My colleague Scott Lester also posted on his blog about temporary internet access.

Please share your internet access related tips and experiences in the comments.

Misty valley landscape with a tree on an island

Mist Deployment (Part The First)

First in a series about our first deployment of a Mist Systems wireless network.

Over the course of the past few months, I’ve been working with the IT staff at College Park Church in Indianapolis to overhaul their aging Ubiquiti UniFi wireless system. They initially were looking at a Ruckus system, owing to its widespread use among other churches involved with the Church IT Network and its national conference (where I gave a presentation on Wi-Fi last fall). We had recently signed on as a partner with industry newcomer Mist Systems, and had prepared a few designs of similar size and scope for other churches in the Indianapolis area using the Mist system. We proposed a design with Ruckus, and another with Mist, with the church selecting Mist for its magic sauce, which is its Bluetooth Low Energy (BLE) capability for location engagement and analytics.

Fundamentally, the AP count, coverage, and capacity were not significantly different between Ruckus and Mist, but Mist offered a few advantages, notably the ability to add external antennas for creating smaller cells in the sanctuary from the APs mounted on the catwalks, since floor mounting was not an option.

About Mist

Mist is a young company that’s been around for about two or three years, and they have developed a few cool things in their platform – the first is what they call their AI cloud, the second is their BLE subsystem, and the last is their API.

Their AI component is a cloud management dashboard (similar to what you would see with Ruckus Cloud or Meraki – many of the engineers who started with Mist came over from Meraki), where the APs constantly analyze AP and client performance through frame capture and analysis and report it back to the cloud controller. The philosophy here is that a large majority of the issues users have with Wi-Fi performance are actually related to performance on the wired side of the network (“It’s always DNS.” Not always, but DNS – and DHCP – are major sources of Wi-Fi pain). The machine learning backend looks at the stream of frames to detect problems, and then uses that to generate Wi-Fi SLA metrics that can help determine where problems lie within the infrastructure, along with some analysis of root causes. An example of this is monitoring the entire station/AP conversation during and shortly after the association process: how long association took, how long DHCP took (and whether it was successful), whether the 4-way handshake completed, and so on. It will also keep a frame capture of that conversation for further manual troubleshooting, and it keeps a log of AP-level events such as reboots and code changes so that client errors can be correlated on a timeline with those events. There’s a lot more it can do, and I’m just giving a brief summary here. Mist has lots of informational material on their website (and admittedly, there’s a goodly amount of marketing fluff in it, but that’s what you’d expect on a vendor website).

Graphs of connection metrics from the Mist system

Next, we have their BLE array. This is what really sets Mist apart from the others, and it’s one of the more interesting pieces of tech to show up in Wi-Fi hardware since Ruckus came on the scene with their adaptive antenna technology. Each AP has not one but *eight* BLE radios in it, coupled with a 16-element antenna array (8 TX, 8 RX); each element covers roughly a 45° beam, and together they cover a full circle. Mist is able to use this in two key ways. One is the ability to get ridiculously precise BLE location information from their mobile SDK (and, by extension, to locate a BLE transponder for asset visibility/tracking); the other is the ability to use multiple APs to place a virtual BLE beacon anywhere you want without having to physically install a battery-powered beacon. There are myriad uses for this in retail environments, and the possibilities for engagement and asset tracking are very interesting in the church world as well.

Lastly, we have their API. According to Mist, their cloud controller’s web UI only exposes about 40% of what their system can do. The remainder is available via a REST API that will let you do all kinds of neat tricks. I haven’t had a chance to dig into this much yet, but there’s a tremendous amount of potential there. Jake Snyder has taught a 3-day boot camp on using Python in network administration to leverage the power of APIs like the one from Mist (Ruckus also has an API on their Cloud and SmartZone controllers).
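
As a rough sketch of what talking to it looks like (this assumes you’ve generated an API token in the Mist dashboard and have the requests library installed; the /self endpoint shown here just returns your own account/org info – check Mist’s API documentation for the interesting endpoints):

# Rough sketch of calling the Mist REST API with a token.
# Anything beyond the base URL and token-style auth shown here should be
# verified against Mist's own API docs.
import requests

API_BASE = "https://api.mist.com/api/v1"
TOKEN = "your-api-token-here"  # generated in the Mist dashboard

resp = requests.get(API_BASE + "/self", headers={"Authorization": "Token " + TOKEN})
resp.raise_for_status()
print(resp.json())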

Mist is also updating their feature set on a weekly basis – rather than one big update every 6 months that may or may not break stuff, small weekly releases allow them to deploy features in a more controlled manner, making it easy to track down any potential show-stopper bugs, preferably before they get released into the wild. You can select whether your APs get the early-release updates, or use a more extensively tested stable channel.

Much like Meraki, having all your AP data in the cloud is tremendously useful when contacting support, as they have access to your controller data without you having to ship it to them. They can also take database snapshots and develop/test new features based on real data from the field rather than simulated data. No actual upper-layer traffic is captured.

The Hardware

note: all prices are US list – specific pricing will be up to your partner and geography.

There are four APs in the Mist line. The flagship 4×4 AP41 ($1385), the lower-end AP21 ($845), the outdoor AP61 ($?), and the BLE-only BT11 ($?). The AP41 also comes in a connectorized version called the AP41E, at the same price as the AP41 with the internal antenna.

The AP41/41E is built on a cast aluminum heat sink, making the AP noticeably heavy. It offers an Ethernet output port, a USB port, a console port, and what they call an “IoT port” that provides for some analog sensor inputs, Arduino-style. It requires 802.3at (PoE+) power, or can use an external 12V supply with a standard 5.5×2.5mm coaxial connector. In addition to the 4-chain Wifi radio and the BLE array, the AP41 also has a scanning radio for reading the RF environment. On the AP41E, the antenna connectors are located on the downward face of the AP.

The AP21 is an all-plastic unit that uses the same mounting spacing as the AP41, and has an Ethernet pass-through port with PoE (presumably to power downstream BT11 units or cameras). Like the AP41, it also has the external 12V supply option.

This install didn’t make use of BT11 or AP61 units, so I don’t have much hands-on info about them.

It’s also important to note that none of these APs ship with a mounting bracket, nor does the AP have any kind of integrated mounting like you would find on a Ruckus AP. Mist currently offers 3 mounting brackets: a T-Rail bracket ($25), a drywall bracket ($25) and a threaded rod bracket ($40). The AP attaches to these brackets via four T10 metric shoulder screws (Drywall, Rod), or four metric Phillips screws (T-Rail). More on these later.

The Software

Each AP must be licensed, and there are three possibilities: Wifi-only, BLE Engagement, and BLE Asset tracking. Each subscription is nominally $150/year per AP, although there are bundles available with either two services or all three. Again, your pricing will depend on your location and your specific partner. Mist recently did away with multi-year pricing, so there’s no longer a cost advantage in pre-buying multiple years of subscriptions.

When the subscription expires, Mist won’t shut off the AP the way Meraki does, but the AP will no longer have warranty coverage, and once a subscription has been expired for two months, Mist will not reactivate the AP. The APs will continue to operate with their last configuration; you just won’t have access to the cloud dashboard for them.

Links:

Mist Systems

Jake Snyder on Clear To Send podcast #114: Automate or Die

Mist Product Information

Up Next: The Design

EC2 Monitoring with Raspberry Pi

I’ve been doing a little Raspberry Pi hacking lately, and put together a neat way to have physical status LEDs on your desk for things like EC2 instances.

The Hardware

In its most basic form, you can simply hook up an LED and a bias resistor between a ground line and a GPIO line on the Pi, but that doesn’t scale especially well – You can run out of GPIO lines pretty quickly, especially if you’re doing different colors for each status. Plus, it’s not overly elegant.

The solution? Unicorns!

No, really. The fine folks at Pimoroni in Sheffield, UK have made a lovely little HAT device for the Pi called the Unicorn HAT. Its primary purpose is lots of blinky lights to make pretty rainbows and stuff, hence the name. However, this HAT is a 4×8 (or an 8×8) array of RGB LEDs, all addressable over a single data bus, which doesn’t eat up a GPIO line per LED (good thing, otherwise it would require 96 lines). The unicornhat library (python3-unicornhat) is available for Python 2 and Python 3 in the Raspbian repo. When installed onto the Pi, the Unicorn will fit within a standard Raspberry Pi case.

The Code

This is my first foray into Python, so there was a bit of a learning curve. If you’re familiar with object-oriented code concepts, this should be easy for you. Python is much more parsimonious with punctuation than PHP or perl are.

For accessing the EC2 data, we’ll need Amazon’s boto3 library, also available in the Raspbian repo (python3-boto3). One area where boto3 is really nice is that the data is returned directly as a dict object (what users of other languages would call an associative array), so you don’t have to mess with converting JSON or XML into an object structure, and it can be manipulated as you would any other associative array (or a hash, for you old-timers who use perl). AWS returns a fairly complex object, so you have to dig into it via a few nested loops to extract the data you’re after.

From there, it’s a matter of assigning different RGB values to the states. I chose these ones:

  • stopped: red
  • pending: green
  • running: blue
  • stopping: yellow(ish)

I also discovered that I needed to assign a specific pixel to each instance ID, otherwise they tended to move around a bit depending on what order AWS returned them on a particular request.

Here’s what the second iteration looks like in action:

import boto3 as aws
import unicornhat as unicorn
import time

# Initialize the Unicorn
unicorn.clear()
unicorn.show()
unicorn.brightness(0.5)

# Create an EC2 object 
ec2 = aws.client('ec2')

# Define colors and positions
color = {}
color['stopped']={'red':255,'green':0,'blue':0}
color['pending']={'red':64,'green':255,'blue':0}
color['running']={'red':32,'green':32,'blue':255}
color['stopping']={'red':192,'green':128,'blue':32}
	
pixel = {}
pixel['i-0fa4ea2560aa17ffd']={'x':0,'y':0}
pixel['i-06b95cd864acb1a8c']={'x':0,'y':1}
pixel['i-0661da0f50ffb604c']={'x':0,'y':2}
pixel['i-063ec151e0f44ef9b']={'x':0,'y':3}
pixel['i-02c514ca567d8a033']={'x':0,'y':4}

# Loop until forever
while True:

	response = ec2.describe_instances()
		
	
	statetable = {}
	resarray = response['Reservations']
	for res in resarray:
		instarray = res['Instances']
		for inst in instarray:
			iid = inst['InstanceId']
			state = inst['State']['Name']
			# print(iid)
			# print(state)
			statetable[iid] = state
	
	
	for ec2inst in statetable:
		# Skip any instance we haven't mapped to a pixel position
		if ec2inst not in pixel:
			continue
		x = pixel[ec2inst]['x']
		y = pixel[ec2inst]['y']
		# Fall back to black (LED off) for states we didn't define, e.g. terminated
		rgb = color.get(statetable[ec2inst], {'red':0,'green':0,'blue':0})
		r = rgb['red']
		g = rgb['green']
		b = rgb['blue']
		unicorn.set_pixel(x,y,r,g,b)
		unicorn.show()


	time.sleep(1)

For the moment, this is just monitoring EC2 status, but I’m going to be adding checks in the near future to do things like ping tests, HTTP checks, etc. Stay tuned.

Streaming to multiple simultaneous destinations

Live streaming has been a “thing” for some time. I work with many churches to help them solve their streaming challenges and develop their technology strategy for streaming. One of the most frequent questions I hear is, “can I stream to Facebook Live and still keep my other stream?” Fortunately, this is a lot easier than it used to be. There are variations on this question, but they all boil down to wanting to know how to send one stream to multiple outlets to expand audience reach.

Method 1:

Multiple outputs from your encoder

Several software encoder platforms support multiple outputs. The easiest among these is probably Telestream’s Wirecast software. (The free, open-source OBS Studio – Open Broadcaster Software – does this as well, but I don’t have much experience with it, and I prefer the Wirecast interface, which is much more polished.) With Wirecast, it’s merely a matter of adding additional outputs pointed at the various streaming services that are supported. The downside to this approach is that you’ll need more bandwidth, as you are sending the same stream multiple times.

Method 2:

The Cloud

1. Teradek Core

This is a vendor-specific approach that integrates with Teradek‘s pro-grade encoders (Cube, Bond, Slice, and T-Rax). It provides a single pane of glass that lets you manage your entire fleet of encoder devices (and control/configure them remotely), and then virtually patch the output of those encoders to one or more outputs. You can also use their Live::Air apps for iOS as an input (stay tuned for a post about using Live::Air). If you are using a Bond product, the input is via their Sputnik server, which allows you to spread the stream across multiple connections for extra bandwidth and redundancy, and then it’s reassembled before sending it on to the next step.

In this example, I’m taking an input stream from the Live::Air Solo app on my iPhone and sending it to both Wowza Streaming Cloud and Facebook Live, all while recording the incoming stream.

This is a simple drag and drop operation: drag a source on the left into the workspace, and then drag in one or more destinations from the left. These can be:

  • Teradek decoders (this is great for a multisite church scenario)
  • Channels (which are external stream destinations)
  • Groups (a combination of the above)

If you click the “Auto” box on the outputs, it will start that output automatically when the stream is available from the input.

When you create stream destinations for social sites, it will authenticate you against that site and keep that authentication.

You can manage a lot of inputs and outputs this way. This example from Teradek’s marketing department shows the scale:

Teradek Core management platform user interface

2. Wowza Streaming Engine/Streaming Cloud

Similar to Core, but not tied to a specific vendor, Wowza Streaming Engine provides Stream Targets as of version 4.4 (the functionality has actually been in the software since sometime in version 2 as the PushPublish module; Stream Targets integrates it into the UI). Facebook Live support has been an option since almost the very beginning of Facebook Live. YouTube Live support is there, but as a standard RTMP destination.

Similarly, Wowza Streaming Cloud also offers this capability under its “Advanced” menu. From there, you can create a stream target.

Once that target is created, simply go into a transcoder output and add it (you can also create a target directly from there).

As with Core, you can add multiple destinations to a transcoder output – Generally speaking you’ll want to send your best output to places like FB Live, YouTube, etc, as they do their own internal transcoding.

Method 3:

Multiple Encoders

This is the obvious one, but also the least efficient both in terms of hardware and bandwidth. Each encoder goes to its own destination. This generally requires signal distribution amplifiers and other extra hardware.

Going Serverless: Office 365

I recently completed a project for a small church in Kansas. Several months ago, the senior pastor asked me for a quote on a Windows server to provide authentication as well as file and print services. During the conversation, a few things became clear:

  1. Their desktop infrastructure was completely on Windows 10. Files were being kept locally or in a shared OneDrive account.
  2. The budget they had for this project was not going to allow for a proper server infrastructure with data protection, etc.
  3. This church already uses a web-based Church Management System, so they’re somewhat used to “the cloud” already as part of their workflows.

One of the key features of Windows 10 is the ability to use an Office 365 account to log in to your desktop (Windows 8 allowed this against a Microsoft Live account). Another key point: for churches and other nonprofits, Office 365’s E2 plan is free of charge.

I set about seeing how we could go completely serverless and provide access not only to the staff for shared documents, but also give access to key volunteer teams and church committees.

The first step was to make sure everybody was on Windows 10 Pro (we found a couple of machines running Windows 10 Home). Tech Soup gave us inexpensive access to licenses to get everyone up to Pro.

Then we needed to make sure the internet connection and internal networking at the site were sufficient to take their data to the cloud. We bumped up the internet speed and overhauled the internal network, replacing a couple of consumer-grade unmanaged switches and access points with a Ubiquiti UniFi solution for the firewall/router, network switch, and access points. This allows me and key church staff to manage the network remotely, as the UniFi controller runs on an Amazon Web Services EC2 instance (t2.micro). The new network also gave the church the ability to offer guest Wi-Fi access without compromising their office systems.

The next step was to join everyone to the Azure domain provided by Office 365. At this point, all e-mail was still on Google Apps, until we made the cutover.

Once we had login authentication in place, I set about building the file sharing infrastructure. OneDrive seemed to be the obvious solution, as they were already using a shared OneDrive For Business account.

One of OneDrive’s biggest challenges is that, like FedEx, it is actually several different products trying to behave as a single, seamless product. At this, OneDrive still misses the mark. The OneDrive brand consists of the following:

  • OneDrive Personal
  • OneDrive for Business
  • OneDrive for Business in Office 365 (a product formerly known as Groove)
  • Sharepoint Online

All the OneDrive for Business stuff is Sharepoint/Groove under the hood. If you’re not on Office 2016, you’ll want to make the upgrade, because getting the right ODB client in previous versions of Office is a nightmare. Once you get it sorted, it generally works. If you’ve got to pay full price for O365, I would recommend DropBox for Business as an alternative. But it’s hard to beat the price of Office 365 when you’re a small business.

It is very important to understand some of the limitations of OneDrive for Business versus other products like DropBox for Business. Your “personal” OneDrive for Business files can be shared with others by sending them a link, and they can download the file, but you can’t give other users permission to modify them and collaborate on a document. For this, you need to go back to the concept of shared folders, and ODB just doesn’t do this. This is where Sharepoint Online comes in to play.

Naturally, this being Sharepoint, it’s not the easiest thing in the world to set up. It’s powerful once you get it going, but I wasn’t able to simply drop all the shared files into a Sharepoint document library — There’s a 5000-file limit imposed by the software. Because the church’s shared files included a photo archive, there were WAY more than 5000 files in it.

Sharepoint is very picky about getting the right information architecture (IA) set up to begin with. Some things you can’t change after the fact, if you decide you got them wrong. Careful planning is a must.

What I ended up doing for this church is creating a single site collection for the whole organization, and several sites within that collection for each ministry/volunteer team. Each site in Sharepoint has 3 main security groups for objects within a site collection:

  • Visitors (Read-Only)
  • Members (Read/Write)
  • Owners (Read/Write/Admin)

In Office 365, much as it is with on-premises, you’re much better off creating your security groups outside of Sharepoint and then adding those groups to the security groups that are created within Sharepoint. So in this case, I created a “Worship Production” team, added the team members to the group, and then added that group to the Worship Site Owners group in Sharepoint. The Staff group was added to all the Owners groups, and the visitors group was left empty in most cases. This makes group membership administration substantially easier for the on-site admin who will be handling user accounts most of the time. It’s tedious to set up, but once it’s going, it’s smooth sailing.

Once the security permissions were set up for the various team sites, I went into the existing flat document repository and began moving files to the Sharepoint document libraries. The easiest way to do this is to go to the library in Sharepoint, and click the “Sync” button, which then syncs them to a local folder on the computer, much like OneDrive (although it’s listed as Sharepoint). There is no limit to how many folders you can sync to the local machine (well, there probably is, but for all practical purposes, there isn’t). From there it’s a matter of drag and drop. For the photos repository, I created a separate document library in the main site, and told Sharepoint it was a photo library. This gives the user some basic Digital Asset Management capabilities such as adding tags and other metadata to each picture in the library.

So far, it’s going well, and the staff enjoys having access to their Sharepoint libraries as well as Microsoft Office on their mobile devices (iOS and Android). Being able to work from anywhere also gives this church some easy business continuity should a disaster befall the facility — all they have to do is relocate to the local café that has net access, and they can continue their ministry work. Their data has now been decoupled from their facility. I have encountered dozens of churches over the years whose idea of data backup is either “what backup?” or a hard drive sitting next to the computer 24×7, which is of no use if the building burns to the ground or is spontaneously relocated to adjacent counties by a tornado. The staff doesn’t have to worry about the intricacies of running Exchange or Sharepoint on Windows Small Business Server/Essentials. Everything is a web-based administrative panel, and support from Microsoft is excellent in case there’s trouble.

If you’re interested in how to take your church or small business serverless, contact me and I’ll come up with a custom solution.

Nonprofit Tech Deals: Microsoft Azure

Last week while I was at the Church IT Network National Conference in Anderson, SC, a colleague pointed me to a fantastic donation from Microsoft via TechSoup: $5000/year in Azure credit. At a hair over $400/month, this means you can run a pretty substantial amount of stuff. Microsoft just announced this program at the end of September, so it’s still very new. And very cool. Credits are good any time within the 12-month period, so you don’t have to split them up month by month. They do not, however, roll over to the following year.

The context of the conversation was for hosting the open-source RockRMS Church/Relationship Management System, but Wowza Streaming Engine is also available ready to go on Azure. And many other things. (and for those of us in the midwest, Microsoft’s biggest Azure datacenter is “US Central” located in Des Moines, as Iowa is currently a very business-friendly place to put a huge datacenter)

If you’re a registered 501c3 non-profit (or your local country’s equivalent if you’re outside the US), head on over to Tech Soup to take advantage of this fantastic deal.

As an added bonus, if you have Windows Server Datacenter licenses from TechSoup or that your organization purchased with Software Assurance, each 2-socket license can be run on up to two Azure compute instances each with up to 8 virtual cores, reducing the cost of your instances even further (as standard Windows instances include the cost of the Windows license at full nonprofit prices.). This also applies to SQL Server.

Here’s the process:

  1. Read the FAQ.
  2. Register your organization with TechSoup if you haven’t already done so.
  3. Head over to Microsoft’s Azure Product Donations page and hit “Get Started”
  4. At some point in the process you’ll also want to create an Azure account to associate the credits with. If you’re already using Office 365 for nonprofits, it’s best to tie an account to your O365 domain.

Multi-tenant Virtual Hosting with Wowza on EC2

That’s a mouthful, isn’t it?

I recently needed to migrate a couple of Wowza Streaming Engine tenants off a bare-metal server that was getting long in the tooth and rather expensive. These tenants were low-volume DVR or HTTP transmuxing customers, plus one transcoding customer that needed more CPU power, and the box sat idle most of the time. So I decided to move it all over to AWS and fire up the instance only when necessary. Doing this used to be a cumbersome process with the old Java-based AWS command-line tools; the current incarnation of the tools is quite intuitive and runs in Python, so there’s not a lot of insane configuration and scripting to do.

You may recall my post from a few years back about multi-tenant virtual hosting. I’m going to expand on this and describe how to do it within the Amazon EC2 environment, which has historically limited you to a single IP address on a system.

The first step to getting multiple network interfaces on EC2 is to create a Virtual Private Cloud (VPC) and start your EC2 instances within your VPC. “Classic” EC2 does not support multiple network interfaces.

Once you’ve started your Wowza instance within your VPC (for purposes of transcoding a single stream, I’m using a c4.2xlarge instance), you then go to the EC2 console, and on the left-hand toolbar, under “network and security” is a link labeled “Network Interfaces”. When you click on that, you have a page listing all your active interfaces.

To add an interface to an instance, simply create a network interface, select the VPC subnet it’s on, and optionally set its IP (the VPC subnet is all yours, in dedicated RFC1918 space, so you can select your IP). Once it’s created, you can then assign that interface to any running instance. It shows up immediately within the instance without needing to reboot.

Since this interface is within the VPC, it doesn’t get an external IP address by default, so you’ll want to assign an Elastic IP to it if you wish to have it reachable externally (in most cases, that’s the whole point of this exercise).
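
If you’d rather script these steps than click through the console, they map onto a few boto3 calls. A rough sketch (the subnet, security group, instance, and address IDs below are placeholders for your own):

# Sketch: create a second ENI in the VPC, attach it to a running instance,
# and bind an Elastic IP to it. All IDs are placeholders.
import boto3

ec2 = boto3.client('ec2')

# Create the interface on your VPC subnet, optionally picking its private IP
eni = ec2.create_network_interface(
    SubnetId='subnet-XXXXXXXX',
    PrivateIpAddress='10.0.0.50',
    Groups=['sg-XXXXXXXX'],
)['NetworkInterface']['NetworkInterfaceId']

# Attach it to the running Wowza instance as the second interface (eth1)
ec2.attach_network_interface(NetworkInterfaceId=eni, InstanceId='i-XXXXXXXX', DeviceIndex=1)

# Allocate an Elastic IP and associate it with the new interface
alloc = ec2.allocate_address(Domain='vpc')
ec2.associate_address(AllocationId=alloc['AllocationId'], NetworkInterfaceId=eni)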

Once you have the new interface assigned, simply configure the VHosts.xml and associated VHost.xml files to listen on those specific internal IP addresses, and you’re in business.

As for scheduling the instance? On another machine that IS running 24/7 (if you want to stick to the AWS universe, you can do this in a free-tier micro instance), set up the AWS command line tools and then make a crontab entry like this:

30 12 * * 1-5 aws ec2 start-instances --instance-ids i-XXXXXXXX
35 12 * * 1-5 aws ec2 associate-address --network-interface-id eni-XXXXXXXX --allocation-id eipalloc-XXXXXXXX
35 12 * * 1-5 aws ec2 associate-address --network-interface-id eni-XXXXXXXX --allocation-id eipalloc-XXXXXXXX
30 15 * * 1-5 aws ec2 stop-instances --instance-ids i-XXXXXXXX 

This fires up the instance at 12:30pm on weekdays, assigns the elastic IPs to the interfaces, and then shuts it all down 3 hours later (because this is an EBS-backed instance in a VPC, stopping the instance doesn’t nuke it like terminating does, so any configuration you make on the system is persistent)

Another way you can use this is to put multiple interfaces on an instance with high networking performance and gain the additional bandwidth of the multiple interfaces (due to Java limitations, there’s no point in going past 4 interfaces in this use case), and then put the IP addresses in either a round-robin DNS or a load balancer, and simply have Wowza bind to all IPs (which it does by default).

HLS distribution with Amazon CloudFront

I’ve blogged extensively about Wowza RTMP distribution with edge/origin and load balancing, but streaming distribution is moving more to HTTP-based systems such as Apple’s HTTP Live Streaming (known inside Wowza as “cupertino”), Adobe’s HTTP Dynamic Streaming (Wowza: “sanjose”), and Microsoft’s Smooth Streaming (Wowza: “smooth”). Future trends suggest a move to MPEG-DASH, which is a standard based on all three proprietary methods (I’ll get into DASH in a future post as the standard coalesces – we’re talking bleeding edge here). The common element in all of them, however, is that they use HTTP as a distribution method, which makes it much easier to leverage CDNs that are geared towards non-live content on HTTP. One of these CDNs is Amazon’s CloudFront service. With edges in 41 locations around the world and 12 cents a gigabyte for transfer (pricing may vary by region), it’s a good way to get into an HTTP CDN without paying a huge amount of money or committing to a big contract with a provider like Akamai.

On the player side, JW Player V6 now supports HLS, and you can do Adobe HDS with the Strobe Media Player.

With the 3.5 release, Wowza Media Server can now act as an HTTP caching origin for any HTTP based CDN, including CloudFront. Doing so is exceedingly simple. First, configure your Wowza server as an HTTP caching origin, and then create a CloudFront distribution (use a “download” type rather than a streaming type – it seems counterintuitive, but trust me on this one!), and then under the origin domain name, put the hostname of your Wowza server. You can leave the rest as defaults, and it will work. It’ll take Amazon a few minutes to provision the distribution, but once it’s ready, you’ll get a URL that looks something like “d1ed7b1ghbj64o.cloudfront.net”. You can verify that the distribution is working by opening a browser to that address, and you should see the Wowza version information. Put that same CloudFront URL in the player URL in place of the Wowza server address, and your players will now start playing from the nearest CloudFront edge cache.
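
As an example of that verification step and the URL swap (“live” and “myStream” below are just common Wowza defaults, not anything specific to your setup):

# Verify the CloudFront distribution is fronting Wowza, then build the player URL.
# The application and stream names are assumptions; substitute your own.
from urllib.request import urlopen

cf_domain = 'd1ed7b1ghbj64o.cloudfront.net'  # from the CloudFront console
app, stream = 'live', 'myStream'

# Hitting the distribution root should return the Wowza version banner
print(urlopen('http://' + cf_domain + '/').read().decode())

# HLS playlist URL to drop into the player in place of the Wowza server address
print('http://' + cf_domain + '/' + app + '/' + stream + '/playlist.m3u8')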

See? Easy.

Wowza EC2 Capacity Update

It’s been a while since Wowza has updated their EC2 performance numbers (they date back to about 2009), and both Amazon and Wowza have made great improvements to their products. Since I have access to a high-capacity system outside of Amazon’s cloud, I am able to use Wowza’s load test tool on a variety of instance sizes to see how they perform.

The test methodology was as follows:

  • Start up a Wowza instance on EC2 with no startup packages (us-east)
  • Install the server-side piece of Willow (from Solid Thinking Interactive)
  • Configure a 1Mbps stream in Wirecast
  • Monitor the stream in JWPlayer 5 with the Quality Monitor Plugin
  • Configure the Wowza Load Test Tool on one of my Wowza Hotrods located at Softlayer’s Washington DC datacenter
    • Server is 14 hops/2ms from us-east-1
  • Increase the load until:
    • the measured bandwidth on JW player drops below stream bandwidth
    • frame drops get frequent
    • Bandwidth levels out on the Willow Graphs while connection count increases
  • Let it run in that condition for a few minutes

In Willow, it basically looked like this (this was from the m1.small test). You can see ramping up to 100, 150, 200, 250, 275, and 300 streams. The last 3 look very similar because the server maxed out at 250 Mbps. (Yes, the graph says MBytes, that was a bug in Willow which Graeme fixed as soon as I told him about it)

Willow Bandwidth

Meanwhile, this is what happens on the server: the CPU has maxed out.

EC2 CPU Usage

So that’s the basic methodology. Here are the results:

[table id=1 /]

There are a couple of things to note here. Naturally, if you’re not expecting a huge audience, stick to the m1.small. But the best bang for the buck is the c1.medium (High-CPU Medium), a relatively new instance type that gives you 4x the performance of an m1.small at less than 2x the price. The big surprise here was the m2.xlarge, which performs only marginally better than an m1.small at 4x the price.

All the instances that show 950 are effectively giving you the full benefit of the gigabit connection on that server and max out the interface long before the CPU does. In the case of the c1.xlarge, there’s lots of CPU to spare for things like transcoding if you’re using a BYOL image. If you want to go faster, you’ll need to roll your own Cluster Quad or do a load-balanced set.

Disclaimers: Your mileage may vary, these are just guidelines, although I think they’re pretty close. I have not tested this anywhere but us-east-1, so if you’re using one of amazon’s other datacenters, you may get different results. I hope to test the other zones soon and see how the results compare.