Another cool use of the WLANpi

Recently, the nice people that employ me to be a wireless network engineer for them were kind enough to add a WLANpi to my toolkit (as well as that of several of my co-workers), and it is indeed a very handy gizmo for network engineering work.

The other day, I found yet another useful trick I could do with it: Software repository. Sounds basic, because it is. But useful nonetheless. Necessity is the mother of invention, after all.

The WLANpi, with a little customization

The situation was that I needed to update AirWave on a customer server, and the WLAN management network at this site is isolated from the rest of the world (and even if it wasn’t, a satellite connection is not a fun thing to download a couple of gigabytes over!). Fortunately, I came prepared: while I was at home on my gigabit fiber connection, I downloaded a whole host of software images I might need and stored them on my laptop.

AirWave’s heavily locked-down CLI does give you the option of uploading a file, but it does it in a strange way: it actually initiates an SCP download from somewhere else. There’s not really any way to push a file to the box. No worries, Macs are Unix-ish, and this should be trivial, right? Nope. In Mojave there appears to be a strange quirk where ssh won’t respond on anything but localhost, so my plan to scp from my Mac was shot to bits. I needed a Linux box, and I didn’t want to download an install ISO over the satellite any more than I wanted to download AirWave (after all, AirWave is itself Linux-based). Then I remembered I had my WLANpi.

Like an increasing number of gadgets these days, the WLANpi’s USB port (used for power) also happens to be an OTG port, and presents itself to the host system as an “RNDIS Ethernet Gadget”, and sets up an Ethernet link over the USB. This allows gadgets like the WLANpi and the Ekahau Sidekick to easily communicate with the host without going through the brain damage of custom device drivers (incidentally, Aruba is taking a similar approach to IoT support on its APs). RNDIS handles the messy layer 1 and layer 2 stuff, sets up layer 3 (the WLANpi defaults to 192.168.42.1) and then the application only has to implement standard upper-layer network communications.

So all I had to do was open an ssh session to my WLANpi (I use Emtec’s ZOC, which I have been using since the days of OS/2!) to see if I had enough storage space on the device to hold the 2.5GB AirWave update (Narrator: it did). Then I fired up Transmit, my go-to file transfer application on MacOS (whatever your platform, anything that supports scp will fit the bill), and sent the Airwave update over to a newly created files directory in the WLANpi user’s home directory.
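
If you’d rather script that transfer than click through a GUI client, something along these lines does the same job over the USB link. This is a minimal sketch, and the login, password, and filename are assumptions on my part – adjust them for your own unit:

import paramiko

HOST = "192.168.42.1"               # WLANpi default address on the USB OTG interface
USER = "wlanpi"                     # assumed login -- change to match your device
PASSWORD = "wlanpi"                 # assumed password
LOCAL_FILE = "AMP-upgrade.tar.gz"   # hypothetical AirWave update filename
REMOTE_DIR = f"/home/{USER}/files"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

# Copy the file over SFTP, creating the target directory if it isn't there yet
sftp = client.open_sftp()
try:
    sftp.mkdir(REMOTE_DIR)
except IOError:
    pass  # directory already exists
sftp.put(LOCAL_FILE, f"{REMOTE_DIR}/{LOCAL_FILE}")
sftp.close()
client.close()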

Once the file was on the WLANpi, I plugged the WLANpi’s Ethernet port into a VLAN that was accessible to the WLAN management devices (I used the AP management VLAN since it already had a DHCP server), and then opened an ssh session to the AirWave server from my existing session on the WLANpi, essentially using it as a jump box. This served to verify port 22 connectivity, and also meant I didn’t have to put my laptop on that VLAN either.

Once the AirWave server was able to pull the file from the WLANpi, getting the thing upgraded was a snap.

I think I’m going to get a bigger SD card for my WLANpi and store a full set of code and images that I may need, and also set up a tftp server on there, and maybe a file manager for the WLANpi’s web interface.

How It Works: HTTP Live Streaming

For those of us who work on wireless systems with a strong guest access component, the fine folks at Wowza Media Systems posted earlier this month about the inner workings of HTTP Live Streaming (HLS), Apple’s proprietary streaming protocol, which accounts for about 45% of all streaming traffic – a number that tracks pretty closely with Apple’s share of mobile devices.

Prior to getting hot and heavy with wireless networks, I did a lot of streaming infrastructure implementation for Wowza’s customers (as many of this blog’s readers are well aware – just go look in the archives!). HLS, which was released with the iPhone 3GS, is designed from the ground up to handle the highly variable bandwidth and delay conditions inherent to mobile connections on Wi-Fi and cellular, while delivering a good streaming experience to the end user. It also allows streaming providers to leverage existing HTTP-based content delivery infrastructure.

Older streaming protocols like RTMP and RTSP are particularly unfriendly to wireless networks, as they require a constant data stream at the stream’s bandwidth. For a video stream, much like a VoIP call, this requires consistent and timely medium access, which is definitely not a sure thing on Wi-Fi the way it is on Ethernet. The tradeoff is that the delay from live on HLS (a minute or two) is much higher than on RTSP (a few frames/milliseconds) or RTMP (a few seconds).

When working down at Layer 2, it’s usually helpful to understand what’s going on up the stack, especially with regard to what kind of unholy things are being done inside HTTP (which we may or may not have visibility into because of encrypted packet and segment payloads). In terms of the ISO model, HLS is probably best described as Layer 5 (the HTTP segmentation) and Layer 6 (the video data).
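
Under the hood, HLS is nothing more exotic than short media segments and plain-text playlists fetched over ordinary HTTP, which is why it rides so comfortably on standard CDNs. Here’s a minimal sketch (the playlist URL is hypothetical) that fetches a playlist and prints the segment URIs it references – the player simply keeps re-fetching the playlist and grabbing new segments as they appear:

import urllib.request

# Hypothetical playlist URL -- point this at a real master or media playlist
PLAYLIST_URL = "https://example.com/live/stream.m3u8"

with urllib.request.urlopen(PLAYLIST_URL) as resp:
    playlist = resp.read().decode("utf-8")

# An HLS playlist is just text: lines starting with "#" are tags,
# everything else is a URI for a media segment or a sub-playlist.
for line in playlist.splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        print("segment or sub-playlist:", line)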

My good friend Jim Palmer (Not the baseball player) spoke at the Wireless LAN Pros Conference last year about the effects of user bandwidth throttling in a guest wireless environment with heavy streaming usage (predominantly Netflix). Understanding how HLS works in this context is key to understanding why the network behaves the way it does when you do that throttling. His talk is well worth ten minutes of your time. He’s also had some informative appearances on the Clear To Send Podcast (Episode #136, on antennas and filters), the Wireless LAN Pros podcast (Episode 116 on Captive Portals), and WiFi Ninjas Podcast (Episodes 19 and 20 on Airport Wireless Design).

So, here’s the link to Wowza’s post on the subject. I hope they post one about MPEG-DASH soon (from an HTTP standpoint, DASH works in a similar fashion).

https://www.wowza.com/blog/hls-streaming-protocol

Solving Home Wi-Fi Woes

“My home wi-fi sucks, how can I fix it?”

“What router should I buy to fix my home wi-fi?”

And so it goes. I get questions like these all the time when it becomes known that I’m a wi-fi expert (really! don’t take my word for it, CWNP and a panel of my peers said I was!) Since I get asked this a lot, I’m creating this post as a handy guide to making your home wi-fi better. 

By day, I’m a mild-mannered field engineer for a wi-fi consulting company, dealing mostly with large-scale enterprise systems (and often fixing their bad wi-fi), but many of the same principles apply at home, because it all boils down to best practices.

First, you’re going to need a couple of basic tools to see what your wifi environment looks like. 

  • Mac: Wifi Explorer Lite
  • Windows: InSSIDer
  • Android: Wifi Analyzer
  • iOS: AirPort Utility

All these tools do is give you a listing (usually with a graphical representation) of the wi-fi channels in use in your environment. 

What causes my wi-fi to suck?

Generally speaking, if you have bad wi-fi, it’s because the device and the access point can’t hear each other very well, or the channel is so busy that neither one can get a word in edgewise. Wi-Fi can only have one device on a channel talking at a time. When a device wants to talk, it listens on the channel to see if it’s clear; if it is, it says its piece and gets off. If it’s not (because someone else is talking), it pauses for a moment and tries again. On a busy channel, that can take a while (and in computer networking terms, “a while” may only be a few milliseconds, but any delay slows you down). Sometimes a device says its piece and the intended recipient can’t acknowledge it because of noise from something that isn’t wi-fi (Bluetooth, microwave ovens, Zigbee, and so on).

Let’s get a little terminology out of the way, first, so we’re all speaking the same language. 

A router, in home wi-fi terms, is an all-in-one device that contains not only a router, but also an Ethernet switch and an access point. The access point is the piece that actually does your wi-fi; they all just happen to be stuffed into the same box together. Sometimes they’ll stuff a cable modem or a DSL modem in there too…

Mesh is a means of connecting additional access points to the network wirelessly. You have probably seen home “mesh” systems that incorporate a couple of access points in some sort of plug-and-play fashion.

Your wi-fi is a wireless local area network. It is not internet access. It is entirely independent of your internet access, even if the router box you got from your ISP does wi-fi (usually badly). 

So the first thing you’ll want to do is check out your channel environment. The tools listed above will highlight the channel and AP you’re connected to, and show all the others, with the signal strength of each. It’s doing this by listening for beacon frames that are sent out approximately 10 times per second by every AP on every SSID.

There are two things you’re looking for: YOUR signal above -65dBm, and everyone else’s BELOW -82dBm. In the 2.4GHz band, you also want to watch out for anyone on channels in between the non-overlapping channels of 1, 6, and 11 (many devices will automatically choose channels that aren’t 1/6/11, which they need to stop doing).
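
If it helps to see those rules of thumb written out mechanically, here’s a toy check. The scan results and SSID below are made up – substitute whatever numbers your scanner shows you:

# Made-up scan results; fill these in from your scanner of choice
MY_SSID = "MyHomeNetwork"
scan = [
    {"ssid": "MyHomeNetwork", "channel": 11, "rssi": -58},
    {"ssid": "Neighbor1",     "channel": 6,  "rssi": -79},
    {"ssid": "Neighbor2",     "channel": 3,  "rssi": -71},
]

for bss in scan:
    if bss["ssid"] == MY_SSID:
        if bss["rssi"] < -65:
            print(f"Your signal is weak ({bss['rssi']}dBm) -- move or re-aim the AP")
    else:
        if bss["rssi"] > -82:
            print(f"{bss['ssid']} is loud ({bss['rssi']}dBm) on channel {bss['channel']}")
        if bss["channel"] <= 14 and bss["channel"] not in (1, 6, 11):
            print(f"{bss['ssid']} is on an overlapping 2.4GHz channel ({bss['channel']})")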

So what if one or both of those tests come back outside of those parameters? There are a few things that affect your signal strength:

  • Proximity to the access point
  • Objects between you and the access point
  • The access point’s output power
  • The access point’s antennas

Your access point should be located somewhere fairly central in your home. It often isn’t because the ISP/Cable company was lazy and put the cable outlet on an outside wall. It should also be out in the open and not behind anything (I’ve seen many stuffed behind a TV, which does nobody any favors). The top of a bookshelf in the middle of your house is a great spot. 

If it has external antennas, they should ALL be pointing vertically. This is not an art piece where they go every which way, and they are not magic wands where wi-fi comes shooting out the ends (in fact, the axis of the antenna has the weakest signal.) If you mount it on a wall, they should still be vertical. 

One caveat to this is that if the antennas are detachable, you can keep your access point near the edge of your home and get a directional antenna.

If your access point lets you set power levels, set it to the lowest level that still covers what you need to cover, and nothing more. This keeps your wi-fi from reaching your neighbors and interfering with theirs (and vice versa). If you go too high, you may be able to reach the far corners of your house, but your AP won’t hear your device’s responses.

Which brings me to extenders. There are many of these on the market, and they’re all junk. Because of the whole “one device may speak at a time” thing, you now have a conversation where another person is repeating it loudly for the people in the back. Don’t do it. If you need more coverage, get one of those residential mesh systems, but connect it up with wires if at all possible (otherwise they act mostly like repeaters and murder your performance). 

I mentioned channels earlier. If you’re on 2.4GHz, you should only ever be on channels 1,6, or 11. But really, you shouldn’t be on 2.4GHz at all. There are lots more channels to work with in 5GHz. If you must be on 2.4GHz, minimize your use of it, and whatever you do, don’t use a 40MHz channel unless you live in the middle of a cornfield with no neighbors. 

If you’re on 5GHz, try to avoid 80MHz channels; 40MHz is OK if you don’t have many neighbors, and 20MHz is best when you have lots of neighbors. Many devices default to channels 36/40/44/48 and 149/153/157/161. I’m gonna let you in on a little secret: there are a whole lot more channels you can use. If your device supports them, you can use 52/56/60/64, 100/104/108/112/116/120/124/128/132/136/140/144, and 165 (most of those are DFS channels, which is why your gear has to explicitly support them – they’ll step aside if radar is detected). Chances are those channels are WIDE OPEN where you are. Use them!

And now, for cutting through the marketing hype:

“Tri-Band” is NOT A THING (at least, not as of late 2018 when this is being written). Those devices are all a 2.4GHz radio and two 5GHz radios – that’s only two bands. However, if you have one of these “tri-band” routers, your best use for it is to set up one of the 5GHz radios with a dedicated SSID and channel for your streaming equipment (Smart TVs, AppleTV, Roku, etc.), and use the other for your general internet access.

Gigabit wifi is A BIG FAT LIE. At most, you’re going to see a couple hundred megabits on a channel. Many vendors’ marketing people like to add up the theoretical maximum of all the radios in the device and claim that as the maximum speed, which is why you see absurd things like “5300Mbps” and “6400Mbps”. Those speeds will NEVER HAPPEN, because wi-fi doesn’t add them up. 

More antennas do not mean a better AP. A 4×4 AP is all well and good (the numbers refer to MIMO spatial streams), but most of your clients are 2×2, with a small handful of high-end Macs that do 3×3. There is exactly ONE 4×4 client device on the market, from Asus, and it is a PCIe expansion card for desktops.

Power levels are limited by FCC rules (in the US – national communications authorities in other countries impose similar limits), mostly at 100mW. Any device claiming high-power wi-fi is lying to you.

The current generation of Wi-Fi is 802.11ac (also called WiFi 5, and 5GHz only). The previous generation is 802.11n (WiFi 4, which is still the current generation for 2.4GHz). Anything older than that should be replaced. The upcoming 802.11ax (WiFi 6) standard is still being developed and won’t be official until at least late 2019.

Questions? Comment below!

Morning Fog along the Flint Hills National Scenic Byway

Mist Deployment, Part Deux

Second in a series about our first deployment of a Mist Systems wireless network. 

In my last post, I gave you an overview of the various components of the Mist Wireless system. This post will go into some of the design considerations pertaining to this particular project.

Because we’re now designing for more than just Wi-Fi, there are a few additional things to factor in when planning the network.

Floor Plans

It’s not uncommon for your floor plans to have a “Plan North” that doesn’t line up with “Geographic North”. Usually this isn’t a factor, but in hindsight I would strongly encourage you to build your floor plans oriented to geographic north from the start: the Mist AI will also use that floor plan for direction and wayfinding, and the compass in mobile devices will be offset if you just go with straight plan north. You can also design on plan north, but then output a second floor plan file that is oriented to true north. Feature request to Mist: let us specify the angle offset of the plan from true north and correct for it in what the SDK displays to users.
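
The correction itself is just a rotation, something along these lines (the offset angle here is made up purely for illustration):

# Assumed example: plan north sits 23 degrees east of true north
PLAN_NORTH_OFFSET_DEG = 23.0

def plan_to_true_heading(plan_heading_deg):
    """Rotate a heading measured against plan north into a true compass heading."""
    return (plan_heading_deg + PLAN_NORTH_OFFSET_DEG) % 360.0

# Walking "up" the floor plan (0 degrees in plan coordinates) is really
# a true heading of 23 degrees; 350 degrees on the plan is really 13.
print(plan_to_true_heading(0.0))    # 23.0
print(plan_to_true_heading(350.0))  # 13.0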

For this project, I had access to layered AutoCAD files for the entire facility, which (sort of) makes things easier in Ekahau Site Survey, but sort of doesn’t – the import can get a little overzealous with things like door frames. I had to go do a fair bit of cleanup afterwards, and might have been better off just drawing the walls in the first place. This was partly due to the general lack of any good CAD tools on MacOS that would have allowed me to look at the data in detail and massage it before attempting the import into Ekahau. The other challenge is that ESS imported the ENTIRE sheet as its view window, which made good reporting impossible as the images had wide swaths of white space. Having the ability to crop the CAD file would have been nice.

Density Considerations

View from the rear of the main sanctuary at College Park Church in Indianapolis.

Since one of the areas being covered is a large auditorium, we had to plan on multiple small cells within the space. We needed to put the APs in the catwalks, as we did not have the option of mounting the units on the floor because the sanctuary is built on a slab (and while the cloud controller allows you to specify AP height and rotation from plan north, there is no provision to tell it the AP is facing *up* and located on/near the floor). This posed a few challenges: the first being that we were well above the recommended 4-5m (the APs were at 10m from the floor), the other being that we needed to create smaller cells. For this, we used the AP41E with an AccelTex 60-degree patch antenna.

Acceltex 8/10 dBi 60° 4-element patch antenna

 We also needed to either run a whole lot of cables up to the theatrical catwalks, or place a couple of small managed PoE switches – we unsurprisingly opted for the latter, using two 8-port Meraki switches, and uplinked them using the existing data cabling that was feeding the two UniFi APs that were up there.

As an added bonus, the sanctuary area was built with tilt-up precast concrete panels, which allowed us to use that heavy attenuation to our benefit and flood the sanctuary space with APs and not worry about spilling out too much.

Capacity-wise, we used 10 APs in the space, which seats 1700. Over the course of several church designs, I’ve found that a ratio of one active user for every three seats usually works out pretty well – in most church sanctuaries, the space feels packed when 2/3 of the seats are occupied, which means that we’re actually planning for one client for every two seats. Now, we’re talking active clients here, not associated clients. An access point can handle a lot more associations than it can active clients. As a general rule, I try to keep it to about 40 or 50 active clients per AP, before airtime starts becoming a significant factor.
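
The back-of-the-envelope arithmetic behind those ratios, with rounded numbers, looks something like this:

seats = 1700
active_clients = seats // 3        # one active user per three seats -> ~566
packed_house = seats * 2 // 3      # "packed" feels like 2/3 of seats -> ~1133
# ...which is the same thing as one active client for every two people present.

aps = 10
print(active_clients / aps)        # ~57 active clients per AP at a full house

At a completely full house that works out to a touch above the 40-50 active clients per AP guideline, which is exactly why the design leans on small cells rather than a handful of big ones.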

In an environment like this, you want as many of the client devices in the room as possible to associate to your APs, even if they’re not actively using them – when they’re not associated, they’re sitting out there banging away with probe requests (especially if you have any hidden SSIDs), chewing up airtime (kind of like that scene from Family Guy where Stewie is hounding Lois just to say “Hi.”). Once they associate, they quiet down a whole bunch.

In addition to the main sanctuary, there are also a couple of other smaller but dense spaces: the chapel (seats 300) and the East Room (large classroom that can seat up to 250). In these areas, design focused on capacity, rather than coverage.

Structural Considerations

As is often the case with church facilities, College Park Church is an amalgamation of several different buildings built over a span of many years to accommodate church growth. What this ends up meaning is that the original building is surrounded on multiple sides by additions, so you end up with a lot of exterior walls in the middle of the building, as well as many different types of construction. Some parts of the building were wood frame, others were steel frame, and others were cast concrete. The initial planning on this building was done without an onsite visit, but the drawings made it pretty obvious where those exterior (brick!) walls were. Naturally, this also makes ancillary tasks like cabling a little interesting.

Fortunately, the church had a display wall showing the growth of the church over the years, including several construction pictures of the building – almost as good as having X-ray vision.

Aesthetic Considerations

Because this is a public space, the visual appearance of the APs is also a key factor – sometimes putting an AP out of sight takes precedence over placing it for optimal Wi-Fi or BLE performance.

Placement Considerations

Coverage Area

Mist specifies that the BLE array can cover about 2500 square feet. The wifi can cover a little more, but it doesn’t hurt to keep your wifi cells that size as well, since you’ll get more capacity out of it. In most public areas of the building, we’re planning for capacity, not coverage. With Mist, if you need to fill some BLE coverage holes where your wifi is sufficient, you can use the BT11 as a Bluetooth-only AP.

AP Height

Mist recommends placing the APs at a height of 4-5m above the floor, in order to provide optimal BLE coverage. The cloud controller has a field in the AP record where you can specify the actual height above the floor.

AP Orientation

Because the BLE array is directional, you can’t just mount the APs facing any direction you please. These APs are really designed to be mounted horizontally, with the “front” of the AP consistently toward plan north; the controller does have the ability to specify rotation from plan north in case mounting it that way isn’t practical. The area, orientation, and height are critical to accurate calculation of location information.

AP Location

Several of the existing APs in older sections of the building were mounted to hard ceiling areas, and we had to not only reuse the data cable that was there, but also the location. Fortunately, the previous system (Ubiquiti UniFi) was reasonably well-placed to begin with, and we were able to keep good coverage and reuse those locations without any trouble.

There were also some coexistence issues in the sanctuary, where we had to make sure we stayed out of the way of theatrical lighting and other fixtures that could pose physical or RF interference problems. We also had to consider safety and keep the APs from falling onto congregants like an Australian drop bear.

Planning for BLE

Since starting this project, I’ve begun working with Ekahau on testing BLE coverage modeling as part of the overall wifi coverage, and it’s looking very promising. I was able to go back to the CPC design and replan it with BLE radios, and it’s awesome. Those guys in Helsinki keep coming up with great ideas. As far as Ekahau is concerned, multi-radio APs are nothing too difficult – They’ve been doing this for Xirrus arrays for some time now, as well as the newer dual-5GHz APs.

Stay tuned for a post about BLE in Ekahau when Jussi says I’m allowed to talk about it.

Up Next: The Installation

 

Cover Image: Explore Kansas: The Flint Hills National Scenic Byway (Kansas Highway 177)

Misty valley landscape with a tree on an island

Mist Deployment (Part The First)

First in a series about our first deployment of a Mist Systems wireless network.

Over the course of the past few months, I’ve been working with the IT staff at College Park Church in Indianapolis to overhaul their aging Ubiquiti UniFi wireless system. They initially were looking at a Ruckus system, owing to its widespread use among other churches involved with the Church IT Network and its national conference (where I gave a presentation on Wi-Fi last fall). We had recently signed on as a partner with industry newcomer Mist Systems, and had prepared a few designs of similar size and scope for other churches in the Indianapolis area using the Mist system. We proposed a design with Ruckus, and another with Mist, with the church selecting Mist for its magic sauce, which is its Bluetooth Low Energy (BLE) capability for location engagement and analytics.

Fundamentally, the AP count, coverage, and capacity were not significantly different between Ruckus and Mist, but Mist offered a few advantages, notably the ability to add external antennas to create smaller cells in the sanctuary from APs mounted on the catwalks, since floor mounting was not an option.

About Mist

Mist is a young company that’s been around for about two or three years, and they have developed a few cool things in their platform – the first is what they call their AI cloud, the second is their BLE subsystem, and the last is their API.

Their AI component is a cloud management dashboard (similar to what you would see with Ruckus Cloud or Meraki – many of the engineers that started with Mist came over from Meraki), where the APs are constantly analyzing AP and client performance through frame capture and analysis, and reporting it back to the cloud controller. The philosophy here is that a large majority of the issues users have with Wi-Fi performance are actually related to performance on the wired side of the network (“It’s always DNS.” Not always, but DNS – and DHCP – are major sources of Wi-Fi pain). The machine learning backend looks at the stream of frames to detect problems, and then uses that to generate Wi-Fi SLA metrics that can help determine where problems lie within the infrastructure, along with some analysis of root causes. An example of this is monitoring the entire station/AP conversation during and shortly after the association process: it looks at how long association took, how long DHCP took (and whether it succeeded), whether the 4-way handshake completed, and so on. It will also keep a frame capture of that conversation for further manual troubleshooting, and it keeps a log of AP-level events such as reboots and code changes so that client errors can be correlated on a timeline with those events. There’s a lot more it can do, and I’m just giving a brief summary here. Mist has lots of informational material on their website (and admittedly, there’s a goodly amount of marketing fluff in it, but that’s what you’d expect on a vendor website).

Graphs of connection metrics from the Mist system

Next, we have their BLE array. This is what really sets Mist apart from the others, and it’s one of the more interesting pieces of tech to show up in wifi hardware since Ruckus came on the scene with their adaptive antenna technology. Each AP has not one, but *eight* BLE radios in it, coupled with a 16-element antenna array (8 TX, 8 RX). Each antenna provides an approximately 45° beam, and together they cover a full circle. Mist is able to use this in two key ways: one is the ability to get ridiculously precise BLE location information from their mobile SDK (and, by extension, locate a BLE transponder for asset visibility/tracking), and the other is the ability to use multiple APs to place a virtual BLE beacon anywhere you want without having to go physically install a battery-powered beacon. There are myriad uses for this in retail environments, and the possibilities for engagement and asset tracking are very interesting in the church world as well.

Lastly, we have their API. According to Mist, their cloud controller’s web UI only exposes about 40% of what their system can do. The remainder is available via a REST API that will allow you to do all kinds of neat tricks. I haven’t had a chance to dig into this much yet, but there’s a tremendous amount of potential there. Jake Snyder has taught a 3-day boot camp on using Python in network administration to leverage the power of APIs like the one from Mist (Ruckus also has an API on their Cloud and SmartZone controllers).
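
Just to give a flavor of what driving it from the API looks like, here’s a hedged sketch in Python. The base URL, endpoint path, and token handling are assumptions on my part, so check Mist’s current API documentation before treating any of it as gospel:

import requests

BASE = "https://api.mist.com/api/v1"     # assumed base URL
TOKEN = "REPLACE_WITH_YOUR_API_TOKEN"    # generated in the Mist dashboard
ORG_ID = "REPLACE_WITH_YOUR_ORG_ID"

headers = {"Authorization": f"Token {TOKEN}"}

# List the sites in an organization (endpoint path is an assumption)
resp = requests.get(f"{BASE}/orgs/{ORG_ID}/sites", headers=headers)
resp.raise_for_status()

for site in resp.json():
    print(site.get("name"), site.get("id"))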

Mist is also updating their feature set on a weekly basis – rather than one big update every 6 months that may or may not break stuff, small weekly releases allow them to deploy features in a more controlled manner, making it easy to track down any potential show-stopper bugs, preferably before they get released into the wild. You can select whether your APs get the early-release updates, or use a more extensively tested stable channel.

Much like Meraki, having all your AP data in the cloud is tremendously useful when contacting support, as they have access to your controller data without you having to ship it to them. They can also take database snapshots and develop/test new features based on real data from the field rather than simulated data. No actual upper-layer traffic is captured.

The Hardware

Note: all prices are US list – specific pricing will depend on your partner and geography.

There are four APs in the Mist line: the flagship 4×4 AP41 ($1385), the lower-end AP21 ($845), the outdoor AP61 ($?), and the BLE-only BT11 ($?). The AP41 also comes in a connectorized version called the AP41E, at the same price as the AP41 with the internal antenna.

The AP41/41E is built on a cast aluminum heat sink, making the AP noticeably heavy. It offers an Ethernet output port, a USB port, a console port, and what they call an “IoT port” that provides for some analog sensor inputs, Arduino-style. It requires 802.3at (PoE+) power, or can use an external 12V supply with a standard 5.5×2.5mm coaxial connector. In addition to the 4-chain Wifi radio and the BLE array, the AP41 also has a scanning radio for reading the RF environment. On the AP41E, the antenna connectors are located on the downward face of the AP.

The AP21 is an all-plastic unit that uses the same mounting spacing as the AP41, and has an Ethernet pass-through port with PoE (presumably to power downstream BT11 units or cameras). Like the AP41, it also has the external 12V supply option.

This install didn’t make use of BT11 or AP61 units, so I don’t have much hands-on info about them.

It’s also important to note that none of these APs ship with a mounting bracket, nor does the AP have any kind of integrated mounting like you would find on a Ruckus AP. Mist currently offers 3 mounting brackets: a T-Rail bracket ($25), a drywall bracket ($25) and a threaded rod bracket ($40). The AP attaches to these brackets via four T10 metric shoulder screws (Drywall, Rod), or four metric Phillips screws (T-Rail). More on these later.

The Software

Each AP must be licensed, and there are three possibilities: Wifi-only, BLE Engagement, and BLE Asset tracking. Each subscription is nominally $150/year per AP, although there are bundles available with either two services or all three. Again, your pricing will depend on your location and your specific partner. Mist recently did away with multi-year pricing, so there’s no longer a cost advantage in pre-buying multiple years of subscriptions.

When the subscription expires, Mist won’t shut off the AP the way Meraki does; however, the AP will no longer have warranty coverage, and once a subscription has been expired for two months, Mist will not reactivate it. The APs will continue to operate with their last configuration, but there will no longer be any access to the cloud dashboard for them.

Links:

Mist Systems

Jake Snyder on Clear To Send podcast #114: Automate or Die

Mist Product Information

Up Next: The Design

On Network Models

One of the most fundamental concepts underlying modern data networking is that of the network “stack”, which consists of individual “layers” that allow one to describe a network without getting lost in the weeds of specific underlying technologies. There are two models in common usage (there are several others as well, but they are less common):

  • the seven-layer OSI Model (which is largely theoretical), published in 1984 as ISO standard 7498 and officially known as the “Open Systems Interconnection Reference Model” (Kansas connection: the OSI model’s designer, Charles Bachman III, was born in Manhattan, the son of the head football coach at K-State at the time)
  • the four-layer TCP/IP Model (which is a more practical model owing to the widespread use of the internet). The TCP/IP model predates the OSI model and can trace some of its roots to BBN’s early work on internetworking in the late 1960s.

One of the key principles of the model is that each layer is carried by the layer below it. The layers each have their own methods and protocols, which are (for the most part) independent of the layers below that are carrying them from A to B. In the TCP/IP column, I’ve also indicated what type of system operates at that layer.

Network Model

OSI layer | TCP/IP layer (typical device) | OSI protocol data unit (PDU) | Function
Host layers:
7. Application | Application (Computer) | Data | High-level APIs, including resource sharing and remote file access
6. Presentation | Application (Computer) | Data | Translation of data between a networking service and an application, including character encoding, data compression, and encryption/decryption
5. Session | Application (Computer) | Data | Managing communication sessions, i.e. the continuous exchange of information in the form of multiple back-and-forth transmissions between two nodes
4. Transport | Transport (ISP) | Segment | Reliable transmission of data segments between points on a network, including segmentation, acknowledgement, and multiplexing
Media layers:
3. Network | Internetwork (Router) | Packet | Structuring and managing a multi-node network, including addressing, routing, and traffic control
2. Data link | Link (Switch) | Frame | Reliable transmission of data frames between two nodes connected by a physical layer
1. Physical | Link (Switch) | Bit | Transmission and reception of raw bit streams over a physical medium

The importance of understanding these network models comes into play when you are designing or troubleshooting a network. Understanding at what level your problem is happening is a major step towards solving it. I’ve seen and answered countless questions on Quora about “why doesn’t X work”, or “can someone on the internet trace me by my MAC address?”, and various other questions that can be enlightened by an understanding of the network models. As a general rule, the lower you are in the model, the more physically localized the scope you’re dealing with.

It’s probably difficult to wrap your head around if you’re not used to this kind of stuff. So let me offer up an example of how this same network model manifests itself in the real world, completely unrelated to computer networking. You’ve almost certainly seen it in action. You’ve benefited from it in your life. I give you: Container Shipping.

Container shipping relies on a standardized set of steel containers (also defined by the ISO) that can be used to haul goods efficiently around the world.

Here’s what the Transport Layer looks like:

Notice that the container is itself full of smaller containers (plain cardboard boxes: The session layer) – which themselves may contain additional boxes for retail (presentation layer), which contain an actual product (application layer). When the container is closed and sealed, its contents go wherever it goes.

But how does it get there? It makes use of the Network Layer. This is where it goes through one or more shipping companies (like ISPs) that get the contents from the factory in China (server) to the buyer (client). As in computer networking, the transportation company can use multiple modes to get it there: trucks, trains, ships, and airplanes. These are Layer 2, the Link Layer, and all of them are capable of carrying these containers.

Each of these Layer 2 conveyances rides on a different physical medium: roads (land), rails (land), shipping routes (sea), and flight paths (air).

It’s also worth noting that two of these physical media are bounded media (roads and rails) which constrain the path the vehicle takes. This is akin to a wire or fiber optic cable.  The other two (sea and air) are unbounded, which means the vehicles can take any path they choose. This is akin to free-space optical transmission and wireless RF transmission – it also means those are more susceptible to interception (hijacking/piracy).

I mentioned earlier that a Layer 3 device is a router. Its job is to get the data from one network to another. What does this look like in container shipping? One of those giant cranes that remove the containers from one conveyance to be loaded onto another. Sometimes containers are buffered in a container yard before being sent on to their destination.

 

Once the container gets to its destination, it is signed for (an ACKnowledgement in the network world), and the signature is sent back to the shipper to confirm arrival – this is a transport layer function, as it is the shipper’s responsibility to make sure stuff gets there on time. The buyer and shipper (Layer 7) don’t really care *how* it got there, just that it did.

 

EC2 Monitoring with Raspberry Pi

I’ve been doing a little Raspberry Pi hacking lately, and put together a neat way to have physical status LEDs on your desk for things like EC2 instances.

The Hardware

In its most basic form, you can simply hook up an LED and a bias resistor between a ground line and a GPIO line on the Pi, but that doesn’t scale especially well – you can run out of GPIO lines pretty quickly, especially if you’re doing different colors for each status. Plus, it’s not overly elegant.

The solution? Unicorns!

No, really. The fine folks at Pimoroni in Sheffield, UK have made a lovely little HAT device for the Pi called a Unicorn. Its primary purpose is lots of blinky lights to make pretty rainbows and stuff, hence the name. However, this HAT is a 4×8 (or an 8×8) array of RGB LEDs, addressable via the I2C bus, which doesn’t eat up a line per LED (good thing, otherwise it would require 96 analog lines). The unicornhat library (python3-unicornhat) is available for Python 2 and Python 3 in the Raspbian repo. When installed onto the Pi, the Unicorn will fit within a standard Raspberry Pi case.

The Code

This is my first foray into Python, so there was a bit of a learning curve. If you’re familiar with object-oriented code concepts, this should be easy for you. Python is much more parsimonious with punctuation than PHP or perl are.

For accessing the EC2 data, we’ll need Amazon’s boto3 library, also available in the Raspbian repo (python3-boto). One area where boto3 is really nice is that the data is returned directly as a dict object (what users of other languages would call an array), so you don’t have to mess with converting JSON or XML into an object structure, and it can be manipulated as you would any other associative array (or a hash for you old-timers that use perl). AWS returns a fairly complex object, so you kind of have to dig into it via a few iterative loops to extract the data you’re after.

From there, it’s a matter of assigning different RGB values to the states. I chose these ones:

  • stopped: red
  • pending: green
  • running: blue
  • stopping: yellow(ish)

I also discovered that I needed to assign a specific pixel to each instance ID, otherwise they tended to move around a bit depending on what order AWS returned them on a particular request.

Here’s what the second iteration looks like in action:

import boto3 as aws
import unicornhat as unicorn
import time

# Initialize the Unicorn
unicorn.clear()
unicorn.show()
unicorn.brightness(0.5)

# Create an EC2 object 
ec2 = aws.client('ec2')

# Define colors and positions
color = {}
color['stopped']={'red':255,'green':0,'blue':0}
color['pending']={'red':64,'green':255,'blue':0}
color['running']={'red':32,'green':32,'blue':255}
color['stopping']={'red':192,'green':128,'blue':32}
	
pixel = {}
pixel['i-0fa4ea2560aa17ffd']={'x':0,'y':0}
pixel['i-06b95cd864acb1a8c']={'x':0,'y':1}
pixel['i-0661da0f50ffb604c']={'x':0,'y':2}
pixel['i-063ec151e0f44ef9b']={'x':0,'y':3}
pixel['i-02c514ca567d8a033']={'x':0,'y':4}

# Loop until forever
while True:

	response = ec2.describe_instances()
		
	
	statetable = {}
	resarray = response['Reservations']
	for res in resarray:
		instarray = res['Instances']
		for inst in instarray:
			iid = inst['InstanceId']
			state = inst['State']['Name']
			# print(iid)
			# print(state)
			statetable[iid] = state
	
	
	for ec2inst in statetable:
		x = pixel[ec2inst]['x']
		y = pixel[ec2inst]['y']
		r = color[statetable[ec2inst]]['red']
		g = color[statetable[ec2inst]]['green']
		b = color[statetable[ec2inst]]['blue']
		# print(x,y,r,g,b)
	
		unicorn.set_pixel(x,y,r,g,b)
		unicorn.show()


	time.sleep(1)

For the moment, this is just monitoring EC2 status, but I’m going to be adding checks in the near future to do things like ping tests, HTTP checks, etc. Stay tuned.
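
For what it’s worth, an HTTP check could slot into the same loop along these lines – the URL, pixel position, and colors below are placeholders rather than anything from the script above:

import urllib.request

CHECK_URL = "https://example.com/healthz"   # hypothetical endpoint to watch
CHECK_PIXEL = {'x': 1, 'y': 0}              # an unused spot on the Unicorn

def http_ok(url, timeout=5):
    """Return True if the URL answers with a 2xx/3xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        return False

# Inside the main while loop, after the EC2 pixels are painted:
# ok = http_ok(CHECK_URL)
# r, g, b = (32, 255, 32) if ok else (255, 0, 0)
# unicorn.set_pixel(CHECK_PIXEL['x'], CHECK_PIXEL['y'], r, g, b)
# unicorn.show()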

A Story of Cats

This is the internet, so at some point we’ve got to talk about cats. It’s in the rule book.  The Internet runs on cats. Cat pictures, cat videos, and… cat cables.

Those of you not familiar with the intricacies of the first layer of the OSI “7-layer Burrito” (Internet old-timers will remember this) are probably blissfully unaware of the gory details of the wiring that makes everything (including wireLESS) work.

Dilbert (April 24, 2010)

So who are all these cats, anyway?

Simply put, it’s an abbreviation for “Category”. The Telecommunications Industry Association (TIA) has adopted a series of specifications over the years defining cable performance to transport various types of networks.

Here’s a quick rundown. We’re gonna get a tech lesson AND a history lesson all rolled into one.

Category 1 (pre-1980)

An IBM “Type 1” Token-Ring connector. Known colloquially as a “Boy George Connector” due to its ambiguous gender. Photo: Computer History Museum

This never officially existed, and was a retroactive term used to define “Level 1” cable offered by a major distributor. It is considered “voice grade copper”, sufficient to run signals up to 1MHz, and not suitable for data of any sort (except telephone modems). You could probably meet category 1 requirements with a barbed wire fence. You laugh, but it’s been done. Extensively.

Category 2 (mid-1980s)

Like Category 1, Category 2 never officially existed, and was a name retroactively given to Level 2 cable from that same distributor. Cat2 brought voice into the digital age. It could support 4MHz of bandwidth, and was used extensively for early Token-Ring networks that operated at 4Mbps, as well as ARCNet, which operated at 2.5Mbps on twisted pair (it had previously used coaxial cable).

Category 3 (1991)

This is the first of the cable categories officially recognized by TIA. It is capable of carrying 10Mbps Ethernet over twisted pair (like ARCNet, Ethernet also ran on coaxial cable in the very early days). Category 3 wire was deployed extensively in the early 1990s, as it was a much better alternative to running Ethernet over coax. This is where the now nearly ubiquitous 8P8C connector (often incorrectly referred to as “RJ45”) came into usage for Ethernet, and it’s still in use nearly three decades later. Both the connector pinout and the cable performance are defined in TIA standard 568. Since Token-Ring networks still operated at 4Mbps, they ran quite happily over this new spec. In 2017, one can still occasionally find Cat3 in use for analog and digital phone lines. The 802.3af Power over Ethernet specification is compatible with this type of wire.

Category 4 (early 1990s)

This stuff existed only for a very brief period of time. In the late 1980s, IBM standardized a newer version of Token Ring that ran at 16Mbps, which required more cable bandwidth than what Category 3 could offer. Category 4 offered 20MHz to work with (which may sound familiar to the wifi folks, who use 20MHz channels a lot). But Category 5 came along pretty quickly, and Category 4 was relegated to history and is no longer recognized in the current TIA-568 standard.

Category 5 (1995)

TIA revised their 568 standard in 1995 to include a new category of cable, supporting 100MHz of bandwidth. This enabled the use of new 100Mbps ethernet (a 100Mbps version of Token Ring soon followed, which also used the same 8P8C connector as Ethernet).

An 8P8C connector, commonly (but incorrectly) referred to as “RJ45”. This has been the standard twisted-pair Ethernet connector for the last quarter century.

Category 5e (2001)

TIA refined the Category 5 spec to improve its performance and support the new Gigabit Ethernet standard. It is still a 100MHz cable, but new coding schemes and the use of all four pairs allowed the gigabit rate. IBM and the 802.5 working group even approved a gigabit standard for Token Ring in 2001, but no products ever made it to market, as Ethernet had taken over completely by that point.

Category 6 (2002)

Not long after Category 5e came to be, along came Category 6, with 250MHz of bandwidth. This was accomplished partly with better cable geometry and partly by going from 24AWG conductors to 23AWG. The increased bandwidth allows 10Gbps Ethernet to operate on cables up to 55 meters in length.

Category 6a (2009)

This refinement to Category 6 increased cable bandwidth to 500MHz in order to allow 10Gbps Ethernet to operate at the full 100m length limit for Ethernet. Categories 6 and 6a will support the new 802.3bt Power over Ethernet Type 3 (60W) and Type 4 (90W) standards (expected 2018), provided that cable bundles do not exceed 24 cables for thermal reasons.

Category 7/7a

Category 7 cable. Who would want to terminate that? What a pain!

Category 7 cable. Who would want to terminate that? What a pain!

This one never existed in the eyes of the TIA. It still lives on as an ISO standard defining several different types of shielded cable whose performance is comparable to Category 6a (bandwidth up to 600MHz for Cat7, 1GHz for Cat7a). Both of these specs were rendered moot by 10Gbps Ethernet operating on Category 6a with standard 8P8C connectors. This cat was so ugly, TIA left it at the shelter.

Category 8 (2017)

The latest and greatest, this cable exists to run 40Gbps Ethernet. It comes in two flavors: 8.1, which keeps the familiar 8P8C connector, and 8.2, which supplants the Category 7 specs and their connectors. Both are shielded cables, with a bandwidth of 2000MHz.

 

So there you have it. The cats that put the WORK in “Network”. And because this is the internet, I leave you with gratuitous kittens.

Gratuitous Kittens

Enhancing the public Wi-Fi experience

Recently, there was an excellent blog post from WLAN Pros about “Rules for successful hotel wi-fi“. While it is aimed primarily at Wi-Fi in the hotel business (where there is an overabundance of Bad-Fi), many of the tips presented also apply to a wide variety of large-scale public venue wifi installations. Lots of great information in the post, and well worth a read.

At the 2016 WLPC there was an interesting TENTalk from Mike Liebovitz at Extreme Networks about the pop-up wifi at Super Bowl City in San Francisco, where analytics pointed to a significant portion of the traffic being headed to Apple.

Meanwhile, a few months later at the 2016 National Church IT Network conference, I caught a TENTalk about Apple’s MacOS Server, which is where I first heard about this incredibly useful feature (sadly, as far as I know it wasn’t recorded, so I can’t give credit…)

On most of the LPV installations I’ve worked on, I’ve found the typical client mix includes about 60% Apple devices (mostly iOS). For example, here is the mix at a large church whose wireless network I installed (note that Windows machines make up less than 10% of the client mix on wifi!):

Client mix from Ruckus ZoneDirector

OK, So what?

This provides an opportunity to make the wifi experience even better for your (Apple-toting) guests. Whenever possible, as part of the “WiFi System” I will install an Apple Mac Mini loaded with MacOS Server. This allows me to turn on caching. This is not just plain old web caching like you would get with a proxy server such as Squid, but rather a cache for all things Apple. What does this do for your fruited guests? It speeds up the download of software distributed by Apple through the Internet. It caches all software and app updates, App Store purchases, iBook downloads, iTunes U downloads (apps and books purchases only), and Internet Recovery software that local Mac and iOS devices download.

Why is this of interest and importance? Let me give you an example: a few years ago, we were hosting a national Church IT Round Table conference at Resurrection on a day when Apple released major updates to MacOS, iOS, and the iWork suite. In addition to the 50 or so staff Mac machines on the network, there were another hundred or two Mac laptops and iThings among the conference attendees. The 200Mbps internet pipe melted almost instantly under the load of 250 devices each requesting 3-5GB of updates (not to mention bogging down some of the uplinks on the internal network). That load would have melted even a gigabit pipe, and probably given a 10Gbps pipe a solid run for its money. It didn’t do great things to the access points in the conference venue either, all of which were struggling not only for airtime but also for backhaul. Having a caching server would have mitigated all of this.
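
To put rough numbers on that meltdown, here’s the back-of-the-envelope math, assuming about 4GB per device (the middle of that range):

devices = 250
gb_per_device = 4
total_bits = devices * gb_per_device * 8 * 1e9   # total data to pull, in bits

for name, bps in [("200Mbps", 200e6), ("1Gbps", 1e9), ("10Gbps", 10e9)]:
    hours = total_bits / bps / 3600
    print(f"{name} pipe: ~{hours:.1f} hours of saturation")
# 200Mbps: ~11 hours, 1Gbps: ~2.2 hours, 10Gbps: ~13 minutes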

Just by way of an example, Facebook updates their app every two weeks, and its current incarnation (86.0, March 30, 2017) weighs in at 320MB (the previous one was about half that!), while its close pal Messenger clocks in at 261MB. Almost everyone has those two apps, so they’re going to find themselves in your cache almost instantly, along with numerous other popular apps. Apple’s iWork apps and Microsoft Office apps all weigh in around 300-500MB apiece as well. This has the potential to murder your network when you least expect it.

In any case, check out the network usage analytics from either your wireless controller or your firewall. If Apple.com is anywhere near the top of the list (or on it at all), you owe it to yourself and your guests to implement this type of solution.

Network Statistics from Ubiquiti UniFi

The Technical Mumbo-Jumbo

Hardware

As mentioned previously, a Mac Mini will do the job nicely. If you’re looking to do this on the cheap, it will happily run on a 2011-vintage Mini (you can find used Mac Minis on Craigslist or eBay all day long for cheap), just make sure you add some extra RAM and a storage drive that doesn’t suck (the stock 5400rpm spinning disks on the pre-2012 era Mac Mini and iMacs were terrible.) Fortunately, 2.5″ SSDs are pretty cheap these days. Newer Minis will have SSD baked in already.

If you’re wanting to put the Mac Mini in the datacenter, you might want to consider using a Sonnet RackMac Mini (available on Amazon for about $139), which can hold one or two machines.

Sonnet RackMac Mini

You can also happily run this off of one of the 2008-era “cheese grater” Mac Pros that has beefier processing and storage (and also fits in a rack, albeit not in the svelte 1U space the Sonnet box uses). If you have money to burn, then by all means use the “trash can” Mac Pro (Sonnet also makes a rack chassis for that model!).

This is a great opportunity to re-purpose some of those Macs sitting on the shelf after your users have upgraded to something faster and shinier.

Naturally, if you’re running a REALLY big guest network, you’ll want to look at something beefy, or a small farm of Minis with SSD storage (the MacOS Server caching system makes it quite easy to deploy multiple machines to support the caching).

The Software

MacOS Server (Mac App Store, $19.99)

Since most of your iOS guests will have updates turned on, one of the first things an iOS device does when it sees a big fat internet pipe that isn’t from a cell tower is check for app updates. If you host large gatherings, you will need to fortify your network against the onslaught of app update requests that will inevitably hit whenever the building fills up with guests.

The way it works is this: when an Apple device makes a request to the CDN, Apple looks at the IP you’re coming from and says, “You have a local server on your LAN, get your content from there, here’s its IP.” The result is that your Apple users get their updates and whatnot at LAN speeds, without thrashing your WAN pipe every time someone pushes out a fat update to an app or the OS that is then consumed by several hundred people on your guest wifi over the course of a week. You’ve effectively just added an edge node to Apple’s CDN within your network.

Content gets cached the first time a client requests it, and a download does not need to complete into the cache before the server starts sending it on to the client – for that first request, it performs just as if the client were downloading directly from Apple’s servers. If your server starts running low on disk space, the caching server will purge older content that hasn’t been used recently in order to maintain at least 25GB of free disk space.

MacOS Caching Server Configuration

The configuration

If you have multiple subnets and multiple external IPs that you want to do this for, you can either do multiple caching servers (they can share cache between them), or you can configure the Mini to listen on multiple VLANs:

Mac OS network preferences panel

Once you have the machine listening on multiple VLANs, you can tell the caching server which ones to pay attention to, and which public IPs. The Mac itself only needs Internet access from one of those subnets.

MacOS Server Caching Preferences

The first dropdown will give you the option of “All Networks”, “Only Local Subnets”, and “Only Some Networks”. Choosing the last one opens an additional properties box that allows you to define those networks:

Mac OS Server Cache Network Settings

The second one gives you the options of “Matching this server’s network” or “On other networks”. As with the first options, an additional properties box is displayed.

In both cases, hit the plus sign to create a network object:

Mac OS Server Create a New Network

It should be noted here that this only tells the server about existing networks; it won’t actually create them on the network interface. You’ll still need to do that through the system network preferences mentioned previously. If you don’t want to have the server listen on multiple VLANs, you can just make sure its address is routable from the subnets you wish to make the cache server available to, define the external and internal networks it provides service to, and you should be off to the races. This will provide caching for subnet A that NATs to the internet via public IP A, B to B, and so on. Defining a range of external IPs also has you covered if you use NAT pooling.

There’s also some DNS SRV trickery that may need to happen depending on your environment. There are some additional caveats if your DNS servers are Active Directory read-only domain controllers. This post elaborates on it.

 

Is it working?

Click the stats link near the top left of the server management window. At the bottom is a dropdown where you can see your cache stats. The red bar shows bytes served from the origin, and green shows bytes served from the cache. If you only have one server doing this, you won’t see any blue bars, which represent cache from peer servers. The downside is that you can only go back 7 days.

On this graph, 3/28 was when a major MacOS update and an iOS update were both released, hence the huge spike from the origin servers on Apple’s CDN. Nobody has pulled those updates from the cache yet, but guest traffic at this site is pretty light during the week. I’ll update the image early next week.

MacOS Server Cache Stats

Other useful features

A side benefit is that you can also use this machine to provide a network recovery boot image, in case someone’s OS install eats itself – on the newer Macs with no optical drive, this boots a recovery image from the internet by default. This requires some additional configuration, and the instructions to set up NetInstall are readily available with a quick Google search.

If you want, you can also make this machine the DHCP and local DNS server for your guest network. With some third-party applications, you can also serve up AirPrint to your wireless guests if they need it.

Conclusion

From a guest experience perspective, your guests see their updates downloading really fast and think your WiFi is awesome, and it’s shockingly easy to set up (the longest and most difficult part is probably the actual acquisition of the Mac Mini). It will even cache iCloud data (and encrypts it in the cache storage so nobody’s data is exposed). Even if you have a fat internet pipe, you should really consider doing this, as the transfers at LAN speed will reduce the amount of airtime consumed on the wireless and the overall load on your wireless network. (Side note: if you’re a wireless ISP, this is just the sort of setup you ought to put between your customer edge network and your IP transit.)

Of course, you could also firewall off Apple iCloud and Updates instead, but why would you do that to your guests? Are you punishing them for something?

Android/Windows users: so sad, Google and Microsoft don’t give you this option (although Microsoft sort of does in a corporate environment with WSUS, but it’s not nearly as easy to pull off, nor is it set up for casual and transient users). I would love it if Google would set up something like this for the Play Store, Chromebooks, etc., as about half of the non-Apple client mix is running Android. You can sort of do it by installing a transparent proxy like Squid.

Now, if only we could do the same for Netflix’s CDN. The bandwidth savings would be immense.

Update

(Added November 16, 2017)

As of the release of MacOS High Sierra and MacOS Server 5.4 (release notes), the caching service is now integrated into the core of MacOS, so any Mac on the network can do it, without even needing to install Server. The new settings are under System Preferences > Sharing: