Why Amazon Web Services
In just a few weeks, NASA/JPL designed, built, tested, and deployed its web hosting and live video streaming solutions using a variety of services on AWS. NASA/JPL's live video streaming architecture combined Adobe Flash Media Server, Amazon Elastic Compute Cloud (Amazon EC2) instances running nginx as a caching tier, Elastic Load Balancing, Amazon Route 53 for DNS management, and Amazon CloudFront for content delivery. AWS CloudFormation automated the deployment of live video streaming infrastructure stacks across multiple AWS Availability Zones (AZs) and regions.
Additionally, Amazon EC2 instances running the Amazon Linux AMI were configured at boot using scripts and Amazon EC2 instance metadata. Shortly before the landing, NASA/JPL provisioned stacks of AWS infrastructure, each capable of handling 25 Gbps of traffic. NASA/JPL used Amazon CloudWatch to monitor spikes in traffic volume and provision additional capacity based on regional demand. As traffic volumes returned to normal in the hours after the landing, NASA/JPL used AWS CloudFormation to de-provision resources with a single command. The figure below provides a diagram of the live video streaming architecture.
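The provision-then-teardown pattern described above can be sketched with the AWS SDK for Python (boto3). This is a minimal illustration only: the stack name, template URL, and parameter names are hypothetical assumptions, not NASA/JPL's actual configuration.

```python
def stack_request(stack_name, template_url, environment):
    """Build the keyword arguments for CloudFormation's create_stack call.

    The 'Environment' parameter is a hypothetical example; real templates
    define their own parameter keys.
    """
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        "Parameters": [
            {"ParameterKey": "Environment", "ParameterValue": environment},
        ],
    }


def deploy_streaming_stack(stack_name, template_url, region="us-east-1"):
    """Provision one streaming infrastructure stack (requires AWS credentials)."""
    import boto3  # imported here so the pure helper above needs no SDK

    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(**stack_request(stack_name, template_url, "production"))


def teardown_streaming_stack(stack_name, region="us-east-1"):
    """De-provision the entire stack with a single API call."""
    import boto3

    cfn = boto3.client("cloudformation", region_name=region)
    cfn.delete_stack(StackName=stack_name)
```

Because every resource in the stack is described in one template, tearing everything down after the traffic spike reduces to the single `delete_stack` call shown above.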
Figure 1: NASA/JPL Live Video Streaming Architecture
The mars.jpl.nasa.gov website is built on Railo, an open-source CFML application server used as the site's Content Management System (CMS), running on Amazon EC2. Shared storage for Railo is provided by Amazon EC2 instances running GlusterFS on a pool of Amazon Elastic Block Store (Amazon EBS) volumes for consistently high-performance disk I/O. The CMS also interacts with a highly available, Multi-AZ MySQL database managed by Amazon Relational Database Service (Amazon RDS). Traffic is dispersed across the CMS servers by several Elastic Load Balancers, with Amazon Route 53 providing a weighted traffic distribution across the ELBs. Amazon CloudFront is also used to spread traffic to points of presence around the world, thereby reducing latency for international visitors and improving the overall scalability of the solution.
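Route 53's weighted routing works by publishing one record set per target, each with a unique set identifier and a relative weight. A minimal sketch of building such a change batch with boto3 follows; the zone name, ELB DNS names, and equal weights are illustrative assumptions, not NASA/JPL's actual records.

```python
def weighted_elb_records(record_name, elb_dns_names, ttl=60):
    """Build a Route 53 ChangeBatch spreading traffic evenly across ELBs.

    Each ELB gets its own weighted CNAME record set; Route 53 answers DNS
    queries in proportion to the weights (equal weights here).
    """
    changes = []
    for i, dns_name in enumerate(elb_dns_names):
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "SetIdentifier": f"elb-{i}",  # must be unique per weighted record
                "Weight": 10,                 # equal weights -> even distribution
                "TTL": ttl,
                "ResourceRecords": [{"Value": dns_name}],
            },
        })
    return {"Comment": "Weighted distribution across ELBs", "Changes": changes}


def apply_weighted_records(hosted_zone_id, change_batch):
    """Submit the change batch to Route 53 (requires AWS credentials)."""
    import boto3

    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id, ChangeBatch=change_batch)
```

Adjusting the `Weight` values lets operators shift traffic gradually toward or away from a given load balancer without changing the application.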
Furthermore, NASA leverages Amazon Simple Workflow Service (Amazon SWF) to copy the latest images from Mars to Amazon S3. Metadata is stored in Amazon SimpleDB and Amazon SWF triggers provisioning of Amazon EC2 instances to process images as each transmission from Curiosity is relayed to Earth. The diagram below illustrates NASA/JPL’s web architecture.
Figure 2: NASA/JPL Web Architecture
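The ingest flow described above (copy each new image to Amazon S3, record its metadata in Amazon SimpleDB) might be sketched as follows. This is a hypothetical illustration: the bucket, domain, and attribute names (sol, instrument, s3_key) are assumptions, and the actual Amazon SWF workflow definitions are not shown.

```python
def image_attributes(image_key, sol, instrument):
    """Build the SimpleDB attribute list for one downlinked image.

    The metadata schema here is illustrative; NASA/JPL's real schema
    is not described in the source article.
    """
    return [
        {"Name": "s3_key", "Value": image_key, "Replace": True},
        {"Name": "sol", "Value": str(sol), "Replace": True},
        {"Name": "instrument", "Value": instrument, "Replace": True},
    ]


def register_image(local_path, bucket, image_key, domain, sol, instrument):
    """Copy an image to S3 and index its metadata in SimpleDB.

    Requires AWS credentials; in the real pipeline this step would be
    driven by an Amazon SWF activity worker.
    """
    import boto3

    boto3.client("s3").upload_file(local_path, bucket, image_key)
    boto3.client("sdb").put_attributes(
        DomainName=domain,
        ItemName=image_key,
        Attributes=image_attributes(image_key, sol, instrument))
```

Keeping metadata in SimpleDB lets the processing fleet query for unprocessed images without scanning the S3 bucket itself.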
Operating the mars.jpl.nasa.gov website on Amazon Web Services allowed NASA/JPL to broadcast their message to the world without building this infrastructure themselves. The broad set of capabilities and ease-of-use afforded by AWS allowed NASA/JPL to construct a robust, scalable web infrastructure in only two to three weeks instead of months.
Now that Curiosity has landed safely on Mars, the mission will continue to use Amazon Web Services to automate the analysis of images from Mars, maximizing the time that scientists have to identify potential hazards or areas of particular scientific interest. As a result, scientists are able to send a longer sequence of commands to Curiosity that increases the amount of exploration that the Mars Science Laboratory can perform on any given sol (Martian day).
To find out more about NASA/JPL's mission and explore the planet Mars, visit http://mars.jpl.nasa.gov, or to read more about how NASA uses the AWS cloud for an interoperable, standards-based, secure, and cost-effective environment, visit NASA's blog.
To learn more about how AWS supports mission-critical cloud computing applications across the public sector, visit http://aws.amazon.com/publicsector.
Here at DNN Direct, we specialize in AWS cloud hosting, and our engineers can assist you with migration to Amazon Web Services. You will get a fully scalable server environment at very competitive prices.