HLS distribution with Amazon CloudFront

I’ve blogged extensively about Wowza RTMP distribution with edge/origin and load balancing, but streaming distribution is moving more to HTTP-based systems such as Apple’s HTTP Live Streaming (known inside Wowza as “cupertino”), Adobe’s HTTP Dynamic Streaming (Wowza: “sanjose”), and Microsoft’s Smooth Streaming (Wowza: “smooth”). Future trends suggest a move to MPEG-DASH, which is a standard based on all three proprietary methods (I’ll get into DASH in a future post as the standard coalesces – we’re talking bleeding edge here). The common element in all of them, however, is that they use HTTP as a distribution method, which makes it much easier to leverage CDNs that are geared towards non-live content on HTTP. One of these CDNs is Amazon’s CloudFront service. With edges in 41 locations around the world and 12 cents a gigabyte for transfer (pricing may vary by region), it’s a good way to get into an HTTP CDN without paying a huge amount of money or committing to a big contract with a provider like Akamai.

On the player side, JW Player V6 now supports HLS, and you can do Adobe HDS with the Strobe Media Player.

With the 3.5 release, Wowza Media Server can now act as an HTTP caching origin for any HTTP-based CDN, including CloudFront. Doing so is exceedingly simple. First, configure your Wowza server as an HTTP caching origin. Then create a CloudFront distribution (use a “download” type rather than a streaming type – it seems counterintuitive, but trust me on this one!) and, under the origin domain name, enter the hostname of your Wowza server. You can leave the rest as defaults, and it will work. It’ll take Amazon a few minutes to provision the distribution, but once it’s ready, you’ll get a URL that looks something like “d1ed7b1ghbj64o.cloudfront.net”. You can verify that the distribution is working by opening a browser to that address; you should see the Wowza version information. Put that same CloudFront URL in the player URL in place of the Wowza server address, and your players will start playing from the nearest CloudFront edge cache.
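
To sanity-check things from the command line, here’s a minimal sketch using the example distribution above; “live” and “myStream” are placeholders for your own application and stream names:

# The root URL should return the Wowza version banner, proving CloudFront can reach the origin
curl -s http://d1ed7b1ghbj64o.cloudfront.net/

# A playlist request for a published stream should now come back through a CloudFront edge
curl -sI http://d1ed7b1ghbj64o.cloudfront.net/live/myStream/playlist.m3u8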

See? Easy.

Browser-Aware Player Code: Episode V, IE Strikes Back

Not so long ago, I updated my browser-aware player code to check for the presence of a stream. Recently, it’s come to light that Internet Explorer 9 doesn’t play nice with this particular snippet, because IE9’s Javascript engine is rather brain-damaged when it comes to cross-site requests: it doesn’t support standard CORS requests through XMLHttpRequest and instead uses its own XDomainRequest object. In order to deal with this properly, we must alter the way we query the server for the presence of a stream:


console.log("Starting stream Check");
if (is_ie9) {
console.log ("Using XDR because IE9 is stupid");
streamcheckXDR();
setInterval(function(){streamcheckXDR()},10000);
}
else {
streamcheck();
setInterval(function(){streamcheck()},10000);
}
function streamcheckXDR() {
    console.log("Starting XDR");
    var xdr = new XDomainRequest();
    if (xdr) {
        xdr.onerror = function(){ console.log("XDR Error"); };
        // XDR hands back plain text, so tell startPlayer this is XDR data
        xdr.onload = function(){ startPlayer(xdr.responseText,"XDR"); };
        var url = "http://"+streamer+":8086/streamcheck?stream="+stream;
        xdr.open("get",url);
        xdr.send();
    } else {
        console.log("Failed to create XDR object");
    }
}

function startPlayer(result,mode){
    // For some inexplicable reason, running "Boolean" on the XDR output doesn't work,
    // so the caller tells us whether we're dealing with XDR data or AJAX data.
    if (mode == "XDR") {
        if (result === "true") { curstatus = true; }
        if (result === "false") { curstatus = false; }
    } else {
        curstatus = Boolean(result);
    }
    //console.log("Result: "+result);
    //console.log("Previous: "+prevstatus);
    //console.log("Current: "+curstatus);
    if (curstatus == prevstatus) {
        //console.log("No Change");
    } else {
        if (curstatus) {
            // Stream just came online: load the appropriate player for this device
            if (is_iphone || is_ipad || is_ipod) { iOSPlayer("videoframe",plwd,plht,server,stream); }
            else if (is_blackberry) { rtspPlayer("videoframe",plwd,plht,server,stream+'_240p'); }
            else { flashPlayer("videoframe",plwd,plht,server,stream); }
            console.log("Changed from false to true");
        } else {
            // Stream just went offline: tear down the player and show the offline image
            var vframe = document.getElementById("videoframe");
            if (is_iphone || is_ipad || is_ipod || is_blackberry) {
                // nothing to unload for the mobile players
            } else {
                jwplayer("videoframe").remove();
            }
            vframe.innerHTML = '<IMG SRC="image.png" WIDTH="'+plwd+'" HEIGHT="'+plht+'">';
            console.log("Changed from true to false");
        }
    }
    prevstatus = curstatus;
}

function streamcheck() {
    console.log("Starting AJAX");
    $.ajax({
        dataType: "json",
        contentType: "text/plain",
        type: "GET",
        url: "http://"+streamer+":8086/streamcheck?stream="+stream,
        error: function(XMLHttpRequest, textStatus, errorThrown) {
            console.log('AJAX Failure:'+ textStatus+':'+errorThrown);
            //some stuff on failure
        },
        // jQuery parses the JSON response to a boolean, so tell startPlayer this is AJAX data
        success: function(result){ startPlayer(result,"AJAX"); }
    });
}

Wowza EC2 Capacity Update

It’s been a while since Wowza has updated their EC2 performance numbers (they date back to about 2009), and both Amazon and Wowza have made great improvements to their products. Since I have access to a high-capacity system outside of Amazon’s cloud, I am able to use Wowza’s load test tool on a variety of instance sizes to see how they perform.

The test methodology was as follows:

  • Start up a Wowza instance on EC2 with no startup packages (us-east)
  • Install the server-side piece of Willow (from Solid Thinking Interactive)
  • Configure a 1Mbps stream in Wirecast
  • Monitor the stream in JWPlayer 5 with the Quality Monitor Plugin
  • Configure the Wowza Load Test Tool on one of my Wowza Hotrods located at Softlayer’s Washington DC datacenter
    • Server is 14 hops/2ms from us-east-1
  • Increase the load until:
    • the measured bandwidth in JW Player drops below the stream bandwidth
    • frame drops become frequent
    • bandwidth levels out on the Willow graphs while the connection count increases
  • Let it run in that condition for a few minutes

In Willow, it basically looked like this (this was from the m1.small test). You can see the ramp-up to 100, 150, 200, 250, 275, and 300 streams. The last three look very similar because the server maxed out at 250 Mbps. (Yes, the graph says MBytes – that was a bug in Willow, which Graeme fixed as soon as I told him about it.)

Willow Bandwidth

Meanwhile, this is what happens on the server: the CPU has maxed out.

EC2 CPU Usage

So that’s the basic methodology. Here are the results:

[table id=1 /]

There are a couple of things to note here. Naturally, if you’re not expecting a huge audience, stick to the m1.small. But the best bang for the buck is the c1.medium (High-CPU Medium), a relatively new instance type that gives you 4x the performance of an m1.small at less than 2x the price. The big surprise here was the m2.xlarge, which performs only marginally better than an m1.small at 4x the price.
All the instances that show 950 are effectively giving you the full benefit of the gigabit connection on that server and max out the interface long before the CPU does. In the case of the c1.xlarge, there’s lots of CPU to spare for things like transcoding if you’re using a BYOL image. If you want to go faster, you’ll need to roll your own Cluster Quad or do a load-balanced set.

Disclaimers: Your mileage may vary; these are just guidelines, although I think they’re pretty close. I have not tested this anywhere but us-east-1, so if you’re using one of Amazon’s other datacenters, you may get different results. I hope to test the other zones soon and see how the results compare.

Wowza/EC2 Q&A Session

I’ll admit, the nerd meter on here has been running past redline for a while, so I’m going to step back and answer some questions that have come in via e-mail or twitter from those of you still trying to get started with Wowza streaming.

Q: How do I find a good Wowza EC2 server to use, what should I be using?

A: I understand your confusion. There are a lot of Wowza AMIs out there on the EC2 stack, and that’s just the official ones! When starting an instance from the AWS console, simply searching for “wowza” under the community AMIs is a bit of a daunting process. Just in us-east-1, you get 68 results:

Starting Wowza on EC2

As a general rule, the most recent version available is the one you want to use. Wowza keeps a list of the most recent AMIs on their support site, and unless you have bought a license (Plug: I’m a reseller – support your streamnerd!), you’ll want to use the DevPay instances, which require a $5 monthly subscription from Amazon.

If you need to run an older version, Wowza’s support team can give you the AMI ID you need, as they continue to make older versions available (which is why you get such a huge list), although they’re not maintained once a newer AMI has been released. With a few exceptions, startup packages designed for one version will generally work on another version.

Where it gets complicated is if you’re dealing with BYO Licenses and Marketplace instances. That’s probably a topic for another post.

Q: How do I know if I need to set up load balancing and how do I do that, in plain English?

A: Simply put, you’ll need to do load balancing any time the audience on your server exceeds the network throughput that your instance size will support. Wowza has published some numbers in the past for a few of the instance sizes, but those numbers are several years old, and changes to both Amazon’s infrastructure and the Wowza code base have rendered them obsolete. Since I now manage some very large Wowza servers outside the Amazon stack, I will be running some new benchmarking tests in the next few weeks. When I get the results, I’ll be posting those here as well as sharing them with the Wowza team.

The best way to make your Wowza stack scalable is to configure your single server as both an origin and an edge, and make it a load-balancing army of one. That way, if you get an unexpected surge in traffic, adding more repeaters is a fairly simple task: keep a set of startup packages with the configuration for each type on hand, and you can spin up additional edges as needed.

The key thing to remember about Wowza load balancing is that the load balancer and the origin/edge architecture are independent of each other. In an origin/edge configuration, you publish to one application, and another application (on either the same server or a different server) pulls the stream from the origin application and then distributes it out to clients.

There are many ways to determine which edge application a client goes to, but the most common one is the Wowza load balancer module. This module consists of a “sender” that lives on edge servers, and a “listener” that lives on the load balancing server (which is usually the origin server, but doesn’t have to be). The sender keeps track of how many connections are on its server, and reports those back to the listener along with its address and a unique identifier. When queried, the load balancer will then provide the address of whichever server has the fewest connections (or an XML dump listing all the servers in the pool and their status).

Once you have the information of which server has the fewest connections, it’s simply a matter of directing an incoming connection to that edge server. The load balancer package provides an application that will redirect RTMP clients, but that doesn’t work with HTTP clients such as iOS and Android devices, so you’ll need some additional steps for them.
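
For HTTP clients, one hedged approach is to query the listener yourself and build the playback URL from its answer; the listener normally responds with a line like redirect=1.2.3.4 (the hostnames below are placeholders, and “live” is an assumed edge application name):

# Ask the load balancer for the least-loaded edge, then hand an HLS client a URL on that edge
EDGE=$(curl -s "http://lb.example.com:1935/loadbalancer" | sed 's/^redirect=//')
echo "http://${EDGE}:1935/live/myStream/playlist.m3u8"

In practice you’d run something like this server-side (a small script or PHP page) so the player is simply handed the right URL.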

Q: I can check connection counts for stats, but everything else I read about analytics is confusing. Is there a better, easier way?

A: One of the most challenging things about Wowza for new users is that it lacks all the pretty graphs – but the flip side is that you get access to the raw data. Out of the box, there’s not really anything other than the connectioncounts.xml and serverinfo.xml XML dumps. There are a number of third-party components that can massage this data into something a little more useful and “boss-friendly”. On the commercial side there’s CasterStats (Plug: I’m a reseller. Support your streamnerd!), which can do both live stats and post-analysis of log files, as well as Sawmill and AWStats, which analyze logs. There are probably a few others as well. And, naturally, there are also a number of PHP scripts I’ve posted on this blog before that can help with analyzing and processing the XML dumps.
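
As a quick taste of that raw data, here’s a hedged sketch of pulling the current connection totals straight from the connectioncounts provider (it lives on port 8086 and uses admin digest authentication by default; the hostname and credentials are placeholders):

# Grab the XML dump and pull out the totals; feed the full XML to a real parser for per-stream detail
curl -s --digest -u admin:password "http://wowza.example.com:8086/connectioncounts" | grep -E "ConnectionsCurrent|ConnectionsTotal"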

One final plug: If you’re interested in a web-based control system that manages your startup packages and live stats, ask me about cdnBuilder.

That’s all for today, if you’ve got a Wowza question (EC2 or not!), drop me a line on twitter or leave me a comment below!

 

Supercharging a Wowza Hotrod

I’ve posted in the past about running Wowza on large server instances. When running Wowza on systems with a 10Gbps network interface, you run into a 5Gbps limitation inherent to the Java Virtual Machine. With Wowza tuning parameters maxing out at 12 virtual cores, most machines with a 10Gbps interface will have processing power to spare, but why let all that horsepower go to waste?

For relatively little money (under $1,000/month), you can get a server from 100TB.com with 12 or 16 physical cores (24 or 32 virtual) and park it in the Softlayer datacenter in Washington, DC, with a 10Gbps interface, pretty much in the middle of the Internet. With 100TB/month of data transfer included in the price (and for an extra charge, they’ll take the meter off entirely), this is a great option for heavy or full-time streaming. But there’s that pesky JVM throughput limitation. It’s time to take the governor off this hotrod.

What you’ll need:

  • A big honkin’ server
  • A big honkin’ pipe
  • Some extra IP addresses (the 100TB servers at Softlayer come with 8 usable addresses)
  • Wowza with either a subscription license key or two perpetual license keys

Note that I’ve done this on Linux; I’m not quite sure how it would work on Windows, where you’d have to register the second instance as a service.

Step 1

  1. Install Wowza and tune it
  2. Configure it with a license
  3. Choose an IP address on the system
  4. In VHost.xml and Server.xml, replace all instances of <IpAddress>*</IpAddress> with that IP
  5. Set up load balancing (both sender and listener) with redirect
  6. Add an origin application (type live)
  7. Add an edge application (type liverepeater-edge) pulling from the origin app

Step 2

  • Make a duplicate of the following (I appended a -2):
    • /usr/local/WowzaMediaServer-3.5.0
    • /etc/init.d/WowzaMediaServer *
    • /usr/bin/WowzaMediaServerd *
  • Edit all references to WowzaMediaServer in the files indicated by a *
  • In the cloned installation:
    • Change bin/setenv.sh and bin/startup.sh to point to the cloned install
    • Delete conf/Server.guid
    • Edit conf/VHost.xml to point to a second IP address
    • Edit conf/Server.xml to point to a second IP address
    • Edit conf/Server.xml to bind jmx to ports 8081/8082
    • Set up load balancing as a Sender (edge) pointing to load balancer IP in primary install
    • Create edge application
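
Here’s a rough sketch of that cloning step on Linux, assuming the 3.5.0 paths above; it’s illustrative rather than exact, so check each copied file by hand afterwards:

# Clone the install directory, init script, and daemon wrapper with a -2 suffix
cp -a /usr/local/WowzaMediaServer-3.5.0 /usr/local/WowzaMediaServer-3.5.0-2
cp /etc/init.d/WowzaMediaServer /etc/init.d/WowzaMediaServer-2
cp /usr/bin/WowzaMediaServerd /usr/bin/WowzaMediaServerd-2

# Edit the copied scripts so every WowzaMediaServer reference points at the -2 copies
vi /etc/init.d/WowzaMediaServer-2 /usr/bin/WowzaMediaServerd-2

# Make the clone register as a separate server on first start
rm /usr/local/WowzaMediaServer-3.5.0-2/conf/Server.guid
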

Step 3

  • Start the primary instance
  • Check http://primary.ip:1935/loadbalancer?serverInfoXML to verify the primary edge is connected
  • Start the secondary instance
  • Check the load balancer again and make sure the secondary instance is connected
  • Publish a stream
  • Pull a stream directly from each edge to make sure the repeaters are working
  • Use http://primary.ip:1935/loadbalancer to get the least-loaded server for load-balancing non-RTMP clients
  • Use rtmp://primary.ip/redirect/stream to get load-balanced RTMP

Step 4

  • Sit back, relax, and enjoy the fact that you now have a server that can handle 10,000 connections.
  • If you need more, build up a second server just like this one, and make both of its instances edges

Browser-aware player code, revisited again

It’s the code snippet that just won’t go away. I’ve updated the code for some additional functionality. This version takes streamer, port, app, and stream parameters via the URL, parses them in JavaScript, and then queries a streamcheck HTTPProvider on the server to see if a stream by that name is currently published. If it is, it loads the player; otherwise it displays a message and keeps checking periodically, loading the player if the state changes to true and unloading it (returning to the message) if it changes back to false. The player is designed to scale to fit whatever window it’s in, so make an IFRAME of whatever size you want the player, and you’re off and running:

<IFRAME SRC="player.html?streamer=wowza.nhicdn.net&port=1935&app=live&stream=livestream" WIDTH="640" HEIGHT="360" SCROLLING="NO" />


Without further ado, here’s the code:



<body style="margin: 0px; background: black; color: white; font-family: sans-serif;">
<div id="videoframe" style="text-align: center; font-size: 14px;">The video stream is currently offline. Playback will resume as soon as a stream is available.</div>
<script type="text/javascript" src="/assets/jw5/jwplayer.js"></script> 
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" type="text/javascript"></script>
<script type='text/javascript' src='https://www.google.com/jsapi'></script>


<SCRIPT type="text/javascript">
// Browser-aware video player for JW Player and Wowza
 

plwd=self.innerWidth;
plht=self.innerHeight;
// var debugwindow=document.getElementById("debug")
// debugwindow.innerHTML='<P>Dimensions: '+plwd+'x'+plht+'</P>';
var streamer=getUrlVars()["streamer"];
var app=getUrlVars()["app"];
var port=getUrlVars()["port"];
var stream=getUrlVars()["stream"];
var server=streamer+':'+port+'/'+app;


var agent=navigator.userAgent.toLowerCase();
var is_iphone = (agent.indexOf('iphone')!=-1);
var is_ipad = (agent.indexOf('ipad')!=-1);
var is_ipod = (agent.indexOf('ipod')!=-1);
var is_playstation = (agent.indexOf('playstation')!=-1);
var is_safari = (agent.indexOf('safari')!=-1);
var is_blackberry = (agent.indexOf('blackberry')!=-1);
var is_android = (agent.indexOf('android')!=-1);
var streamstatus = false;
var prevstatus = false;
var curstatus = false;

streamcheck();
setInterval(function(){streamcheck()},10000);

function streamcheck() {
            $.ajax({
              type: "GET",
              url: "http://"+streamer+":8086/streamcheck?stream="+stream,
             dataType: "json",
             success: function(result){
		curstatus = Boolean(result);
		//if (result === "true") { curstatus = true;}
		//if (result === "false") { curstatus = false;}
		if (curstatus == prevstatus) {
		} else {
		if (curstatus) {
			if (is_iphone || is_ipad || is_ipod) { iOSPlayer("videoframe",plwd,plht,server,stream);}
			else if (is_blackberry) { rtspPlayer("videoframe",plwd,plht,server,stream);}
			else { flashPlayer("videoframe",plwd,plht,server,stream); }
			console.log("Changed from false to true");
		} else {
			var vframe=document.getElementById("videoframe")
			if (is_iphone || is_ipad || is_ipod || is_blackberry) { 
			} else {
				jwplayer("videoframe").remove();
			}
			vframe.innerHTML = 'The video stream is currently offline. Playback will resume as soon as a stream is available.';
			
			
			console.log("Changed from true to false");
		}

		}		
			prevstatus = curstatus;
		}
           });
}

 
function getUrlVars() {
    var vars = {};
    var parts = window.location.href.replace(/[?&]+([^=&]+)=([^&]*)/gi, function(m,key,value) {
        vars[key] = value;
    });
    return vars;
}

function iOSPlayer(container,width,height,server,stream)
{
var player=document.getElementById(container)
player.innerHTML='<VIDEO controls '+
'HEIGHT="'+height+'" '+
'WIDTH="'+width+'" '+
'title="Live Stream">'+
'<SOURCE SRC="http://'+server+'/'+stream+'/playlist.m3u8"> '+
'</video>';
}
 
function rtspPlayer(container,width,height,server,stream)
{
var player=document.getElementById(container)
player.innerHTML='<A HREF="rtsp://'+server+'/'+stream+'">'+
'<IMG SRC="poster-play.png" '+
'ALT="Start Mobile Video" '+
'BORDER="0" '+
'HEIGHT="'+height+'" '+
'WIDTH="'+width+'">'+
'</A>';
}
 
function flashPlayer(container,wide,high,server,stream)
{
jwplayer(container).setup({
		height: high,
		width: wide,
		streamer: 'rtmp://'+server,
		file: stream,
		autostart: true,
		stretching: 'uniform'
		});
	
}
 
</SCRIPT>
</BODY>

The code for the streamcheck module is as follows:

package net.nerdherd.wms.http;

import java.io.*;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import com.wowza.wms.application.IApplicationInstance;
import com.wowza.wms.http.*;
import com.wowza.wms.logging.*;
import com.wowza.wms.vhost.*;

public class StreamCheck extends HTTProvider2Base {

    	public void onHTTPRequest(IVHost vhost, IHTTPRequest req, IHTTPResponse resp) {
		StringBuffer report = new StringBuffer();
		StringBuffer streamlist = new StringBuffer();
		if (!doHTTPAuthentication(vhost, req, resp))
			return;
		
		Map<String, List<String>> params = req.getParameterMap();
		
		String stream = "";
		boolean status = false;
		boolean listing = false;
				
		if (req.getMethod().equalsIgnoreCase("post")) {
			req.parseBodyForParams(true);
		}
		
		if (params.containsKey("stream"))
			stream = params.get("stream").get(0);
				
		try
		{
				if (vhost != null)
				{
					
					List<String> appNames = vhost.getApplicationNames();
					Iterator<String> appNameIterator = appNames.iterator();
					while (appNameIterator.hasNext())
					{
						try {
	                        String Name = appNameIterator.next();

	                        IApplicationInstance NowApp = vhost.getApplication(Name).getAppInstance("_definst_");
	                        List<String> PublishedNames = NowApp.getPublishStreamNames();
	                        Iterator<String> ThisPublished = PublishedNames.iterator();
	                        if ( PublishedNames.size()>0 )
	                                {
	                                while ( ThisPublished.hasNext() )
	                                        {
	                                        try {
	                                                String NowPublished = ThisPublished.next();
	                                                
	                                                	if (NowPublished.equals(stream)){
	                                                		status = true;
	                                                	}
	                                                	
	                                                 } catch (Exception e) {}
	                                        }
	                                
	                                }
	                        
	                        } catch (Exception e) {report.append(e.toString()); } 
	                        }					
					}
						
								
		}
		catch (Exception e)
		{
			WMSLoggerFactory.getLogger(HTTPServerVersion.class).error("StreamCheck: " + e.toString());
			e.printStackTrace();
		}

		if (!listing) { 
			if (status){
				report.append("true");
		    } else {
		    	report.append("false");
		    }
		}
		
		try {
			resp.setHeader("Content-Type", "text/plain");
			resp.setHeader("Access-Control-Allow-Origin","*");
			OutputStream out = resp.getOutputStream();
			byte[] outBytes = report.toString().getBytes();
			out.write(outBytes);
		} catch (Exception e) {
			WMSLoggerFactory.getLogger(null).error(
					"StreamCheck: " + e.toString());
		}
		
	
	}
}
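
To build and deploy it, a hedged sketch looks something like this (the jar name is made up, and the exact lib path depends on your install). After copying the jar in, register the class as an HTTPProvider on the 8086 port in conf/VHost.xml and restart Wowza so the /streamcheck URLs used above start answering:

# Compile against the Wowza server libraries and package the class
mkdir -p build
javac -cp "/usr/local/WowzaMediaServer/lib/*" -d build StreamCheck.java
jar cf /usr/local/WowzaMediaServer/lib/nerdherd-streamcheck.jar -C build .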

		

Creating a global streaming CDN with Wowza

I’ve posted in the past about the various components involved in doing edge/origin setups with Wowza, as well as startup packages and Route 53 DNS magic. In this post, I’ll tie the various pieces together to put together a geographically-distributed CDN using Wowza.

For the purposes of this hypothetical CDN, we’ll do it all within EC2 (although there’s no reason it has to be), with three edge servers in each region we want to cover.

There are two ways we can do load balancing within each zone. One is to use the Wowza load balancer, and the other is to use round-robin DNS. The latter is not as evenly balanced as the former, but should work reasonably well. If you’re using set-top devices like Roku, you’ll likely want to use DNS as redirects aren’t well supported in the Roku code.

For each region, create the following DNS entries in Route 53. If your domain lives on your registrar’s DNS, you’ll probably want to set up a separate domain in Route 53 for this, as many registrar DNS servers don’t deal well with third-level domains (sub.domain.tld), let alone the weighted and latency-based records used below. For the purposes of this example, I’ll use nhicdn.net, the domain I use for Nerd Herd streaming services. None of these hostnames actually exist.

We’ll start with Wowza’s load balancer module. Since the load balancer is not in any way tied to the origin server (although they frequently run on the same system), you can create a load balancer on one of the edge servers in that region. Point the loadbalancertargets.txt on each edge server in that region to that server, and point to the origin stream in the Application.xml configuration file. Once the load balancer is created, create a simple CNAME entry in Route 53 called lb-region.nhicdn.net, pointing to the hostname of the load balancer server:

lb-us-east.nhicdn.net CNAME ec2-10-20-30-40.compute-1.amazonaws.com

If you opt to use DNS for load balancing, again point the Application.xml to the origin stream, and then create a weighted CNAME entry for each server in that region, all with the same host record, each with equal weight (1 works):

lb-us-east.nhicdn.net CNAME ec2-10-20-30-40.compute-1.amazonaws.com
lb-us-east.nhicdn.net CNAME ec2-10-20-30-41.compute-1.amazonaws.com
lb-us-east.nhicdn.net CNAME ec2-10-20-30-42.compute-1.amazonaws.com

If you wish, you can also create a simple CNAME alias for each server for easier tracking – this is strictly optional:

edge-us-east1.nhicdn.net CNAME ec2-10-20-30-40.compute-1.amazonaws.com
edge-us-east2.nhicdn.net CNAME ec2-10-20-30-41.compute-1.amazonaws.com
edge-us-east3.nhicdn.net CNAME ec2-10-20-30-42.compute-1.amazonaws.com

Once you’ve done this in each region where you want servers, create a set of latency-based CNAME entries for each lb entry:

lb.nhicdn.net CNAME lb-us-east.nhicdn.net
lb.nhicdn.net CNAME lb-us-west.nhicdn.net
lb.nhicdn.net CNAME lb-europe.nhicdn.net
lb.nhicdn.net CNAME lb-brazil.nhicdn.net
lb.nhicdn.net CNAME lb-singapore.nhicdn.net
lb.nhicdn.net CNAME lb-japan.nhicdn.net

Once your DNS entries are in place, point your players to lb.nhicdn.net and use either the name of the edge application (if using DNS) or the redirect application (if using Wowza load balancer), and clients will go to either the edge server or load balancer for the closest region.

One other thing to consider is setting up a load balancer listener on your origin and using it to gather stats from the various servers. You can put multiple load balancers in the loadbalancertargets.txt file, so each server can report in to both its regional load balancer and the global one; the global one isn’t used to redirect clients, only for statistics gathering. You can also put multiple load balancers in each region for redundancy and use a weighted CNAME entry.
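
As a sketch, the targets file on an edge in us-east might then look like this – one listener per line, where lb-global.nhicdn.net is a hypothetical stats-only listener on the origin (the path assumes a default install location):

# List every load balancer listener this edge should report to
cat > /usr/local/WowzaMediaServer/conf/loadbalancertargets.txt <<'EOF'
lb-us-east.nhicdn.net
lb-global.nhicdn.net
EOF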

If you would like assistance setting up a system like this for your organization, I am available for hire for consulting and/or training.

Converting EC2 S3/instance-store image to EBS

Amazon’s instance-store images are convenient, but ephemeral in nature. Once you shut them down, they’re history. If you want persistence of data, you want an EBS-backed instance that can be stopped and started at will without losing your info. Here’s the process I went through to convert a Wowza image to EBS for a client to use with a reserved instance. I’m going to assume no configuration changes for Wowza Media Server, as the default startup package is fairly full-featured. This process works for any other instance-store AMI; just ignore the Wowza bits if that’s your situation.

Boot up a 64-bit Wowza lickey instance. I was working in us-east-1, so I used ami-e6e4418f, which was the latest as of this blog post.

Once it’s booted up, log in.

Elevate yourself to root. You deserve it:

sudo su -

Stop the Wowza service:

service WowzaMediaServer stop

Delete the Server.guid file. This will cause new instances to regenerate their GUID:

rm /usr/local/WowzaMediaServer/conf/Server.guid

Go into the AWS management console and create a blank EBS volume in the same zone as your instance.

Attach that volume to your instance (I’m going to assume /dev/sdf here)

Create a filesystem on it (note: while the console refers to it as /dev/sdf, Amazon Linux uses the Xen virtual disk notation /dev/xvdf):

mkfs.ext4 /dev/xvdf

Create a mount point for it, and mount the volume:

mkdir /mnt/ebs
mount /dev/xvdf /mnt/ebs

Sync the root and dev filesystems to the EBS disk:

rsync -avHx / /mnt/ebs
rsync -avHx /dev /mnt/ebs

Label the disk:

tune2fs -L '/' /dev/xvdf

Flush all writes and unmount the disk:

sync;sync;sync;sync && umount /mnt/ebs

Using the web console, create a snapshot of the EBS volume. Make a note of the snapshot ID.

Still in the web console, go to the instances and make a note of the kernel ID your instance is using. This will be aki-something. In this case, it was aki-88aa75e1.

For the next step, you’ll need an EC2 X.509 certificate and private key. You get these through the web console’s “Security Credentials” area. This is NOT the private key you use to SSH into an instance. You can have as many as you want, just keep track of the private key because Amazon doesn’t keep it for you. If you lose it, it’s gone for good. Download both the private key and certificate. You can either upload them to the instance or open them in a text editor and copy the text, and paste it into a file. Best place to do this is in /root. Once you have the files, set some environment variables to make it easy:

export EC2_CERT=`pwd`/cert-*.pem
export EC2_PRIVATE_KEY=`pwd`/pk-*.pem

Once this is done, you’ll need to register the snapshot as an AMI. It’s important here to specify the root device name as well as map out the ephemeral storage, as Wowza uses it for content and logs. Ephemeral storage will persist through a reboot, but not a termination. If you have data that needs to persist through termination, use an additional EBS volume.

ec2-register --snapshot [snapshot ID] --description "Descriptive Text" --name "Unique-Name" --kernel [kernel ID] --block-device-mapping /dev/sdb=ephemeral0 --block-device-mapping /dev/sdc=ephemeral1 --block-device-mapping /dev/sdd=ephemeral2 --block-device-mapping /dev/sde=ephemeral3 --architecture x86_64 --root-device-name /dev/sda1

Once it’s registered, you should be able to boot it up and customize to your heart’s content. Once you have a configuration you like, right-click on the instance in the AWS web console and select “Create Image (EBS AMI)” to save to a new AMI.

Note: As of right now, I don’t think startup packages are working with the EBS AMI. I don’t know if they’re supposed to or not. 

Streaming on Amazon’s “SuperQuad”

I posted recently about using Amazon EC2’s cluster compute instances for big streaming projects. That post got me a call from a client in Texas who was planning to stream a big tennis tournament in Dallas and needed a server backend that could handle it, without going through the hassle and expense of setting up a CDN account for a single event. Of course, since everything is bigger in Texas, they wanted to stream to a large audience. They also wanted to be able to send a single high-definition stream for each of the two tournament courts, and then transcode down to a few different bandwidth-friendly bitrates. This called for not only big network horsepower, but big CPU horsepower as well.

I fired up the superquad (cc1.4xlarge), installed Wowza and the subscription license on it (Wowza pre-built AMIs do not exist for this instance type), and tuned it. I then created transcoder profiles for 480p, 360p, 240p, and 160p renditions, and we tested. Note that when installing Wowza yourself on an EC2 image, you don’t have access to the EC2-specific variables and classes out of the box; you’ll need to add the EC2 jar file that can be found on one of the prebuilt Wowza AMIs. In this case, that wasn’t a factor, as I simply hardcoded the server’s public DNS name anywhere it was needed.

Once the tournament started, we were seeing big audience numbers, with bitrates on the box well in excess of 1Gbps. On day two, audiences started complaining about spotty stream performance, and some were running 15 minutes behind live.

After jumping into the logs, it became apparent that this 8-core/16-thread monster was starved for CPU resources! Wowza recommends that a transcoder system not exceed 50-55% CPU. We then reduced the number of transcoded streams to two (480p and 360p). In the process, I discovered that a malformed search/replace had altered the configuration to transcode all the streams to 1280×720 at extremely low bitrates. No wonder the poor thing was dying. Once we got everything fixed, a full audience with both courts going was clocking in around 40% CPU. At no time in the process did Java heap memory exceed 3GB (in the tuning, I allowed it up to 8GB, the max recommended by Wowza). Wowza seems to be exceedingly efficient with its memory usage. If you need to run heavier transcoding loads, you may want to look at what I call the “super-duper-octopus” (cc2.8xlarge), which has roughly double the horsepower of this one.

CPU Usage - Note 2/7 when we were having trouble

Early Thursday, I checked the AWS usage stats for the month, and my jaw dropped. In three days of streaming, we’d clocked over five TERABYTES of data transfer. I expect I’ll bump into the next bandwidth tier (or come very close) by the end of the week. That’s what happens when you average around 1Gbps for the better part of 12 hours a day!

Network Usage (Bytes/Minute.. What the heck, Amazon?)

As for server usage, this instance type runs about the price of two extra-large instances (each capable of about 450Mbps), and doesn’t even break a sweat at those transfer rates. Had I parked this service on a VPS at another hosting provider, I would have blown through the monthly data cap by mid-Tuesday, and likely not had access to a 10Gbps pipe on the server. Meanwhile, when you start cranking out terabytes of data, the cost per gigabyte is a major factor: at 10TB of transfer, every penny per gigabyte adds $100 to the bandwidth tab.

Although a large portion of the audience for this event was in Europe (at one point, 60% of the audience was coming from Lithuania!), the cluster instances are currently only available in the us-east (Virginia) region. If performance for European users had gotten problematic, I could have set up a repeater in Amazon’s datacenter in Ireland. As it was, there were no complaints.

So that’s how a superquad works for large streaming events. If you want some help setting one up, or just want to rent mine for your event, drop me a line.

Go big, or go home!

I’m currently working on setting up Wowza on an EC2 “Cluster Compute Quadruple Extra Large” instance (or as I’ve heard it called, the “super-duper-quadruple”, which sounds like something I’d get at Five Guys). There’s no pre-built AMI for this one, so you have to use a stock Linux image (I use the standard Amazon one), install Wowza with a subscription license, and do the tuning yourself. But the payoff is this: for $1.30 an hour, you get a streaming server capable of delivering 10Gbps of data. On a 750Kbps stream, that’s over 13,000 concurrent clients. That’s about the same cost as nine or ten m1.small instances, which can deliver an aggregate of about 1.5Gbps. On a reserved instance, you can get this down to just under 75 cents an hour.
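
As a very rough sketch of the tuning step (assuming a Wowza 3-era layout where the Java heap is set via JAVA_OPTS in bin/setenv.sh – check Wowza’s tuning guide for your version):

# Give the JVM a bigger heap, e.g. -Xmx8000M on a RAM-heavy instance
vi /usr/local/WowzaMediaServer/bin/setenv.sh
# Thread pool sizes live in conf/VHost.xml; size them to the core count per the tuning guide
vi /usr/local/WowzaMediaServer/conf/VHost.xml
service WowzaMediaServer restart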

In addition to Ludicrous Speed on the network I/O, this instance comes with 8 multithreaded Xeon 5570 cores (at 2.97GHz), 23 GB of RAM, and 1.7TB of local storage. (a quick speed test downloaded a half-gigabyte file in about four seconds, limited by the gigabit interface at the remote server). This is roughly equivalent to a moderately configured Dell R710. There’s also a GPU-enabled version of this that adds a pair of nVidia Tesla GPU cores.

If that’s not enough, you can go bigger, with 16 cores, 60GB of memory, and 3.5TB of storage. Recently, someone clustered just over a thousand of these instances into the 42nd largest supercomputer in the world.

As of right now, these monster instances are only available in the us-east-1 zone.