Bristle Software Cloud Computing Tips

This page is offered as a service of Bristle Software, Inc.  New tips are sent to an associated mailing list when they are posted here.  Please send comments, corrections, any tips you'd like to contribute, or requests to be added to the mailing list, to tips@bristle.com.

Table of Contents:

  1. Cloud Computing Concepts
    1. Intro to Cloud Computing
    2. Public, Private and Hybrid Clouds
    3. SaaS -- Software as a Service
    4. PaaS -- Platform as a Service
    5. IaaS -- Infrastructure as a Service
    6. See Also
  2. Amazon Web Services
    1. AWS Intro
    2. Getting Started
      1. Create AWS Accounts
      2. Set Up Security
      3. Launch an EC2 Server Instance
      4. Login to the Instance
      5. Set the hostname
      6. Set an Elastic IP Address
      7. Point the DNS at the IP Address
      8. Relay Mail Through Another SMTP Server
      9. Mount a Persistent EBS Volume
      10. Configure the Server Instance
      11. Move Files to the EBS Volume
      12. Reserve an Instance to Save Money
      13. Manage Your AWS Billing
      14. Set Up More Security
    3. Clone a server instance
      1. Create an AMI From Your Instance
      2. Create an Instance From Your AMI
      3. Create an Instance From Your AMI With Some Files on EBS
    4. AWS Spot and Micro Instances
    5. Static Web Site at AWS S3
    6. AWS Support Pricing
    7. Move server to new hardware

Details of Tips:

  1. Cloud Computing Concepts

    1. Intro to Cloud Computing

      Original Version: 9/29/2009
      Last Updated: 10/28/2009

      "Cloud Computing" is a hot topic these days.  It's the "next big thing".  However, it is not really a new thing, so much as a new name for a useful collection of old things, with some new twists thrown in.

      Previous names for the various parts include (from the 1970's) "mainframe", "time-sharing", "virtual machine", and (more recently) "network", "LAN", "WAN", "Internet", "World-Wide Web", "hosting service", "groupware", "virtualization", "hypervisor", etc.  These have all been ways for a single large computer to act like multiple smaller computers, or for multiple computers to interact with each other.  Cloud Computing pulls these various concepts together in a useful way, and with a jazzy new name. 

      So, why now?  What's driving the convergence?  In a word: "broadband".  That is, the recent dramatic increase in the speed and availability, and the dramatic decrease in cost, of high-speed networks.  Suddenly in recent years, Joe Average can cheaply get a fast connection (DSL, cable modem, FIOS, etc.) from his home or small business to computers throughout the world.  Speeds of 10, 20, or even 50 Mbps, are now common (and getting faster every day).  Large corporations have had such connections for years, but they've been expensive.  Those costs are now dropping fast.  For the first time in history, it is typically as fast to access a computer thousands of miles away as to access the hard drive on your local computer.  Or if not quite as fast, certainly "fast enough" to open a whole new range of possibilities.

      Cloud computing comes in many different forms, with more emerging every day.  Details in subsequent tips.

      --Fred

    2. Public, Private and Hybrid Clouds

      Original Version: 10/28/2009
      Last Updated: 10/28/2009

      The computing "cloud" can have different access modes, and different physical locations.

      Public
      The first popular option was the "Public" cloud.  It typically resides outside of your building, somewhere on the Internet.  It achieves economies of scale by using a few powerful servers to support the needs of many customers.  It is accessible to the general public, subject only to login restrictions.  Anyone with a valid username and password, or other security device, can access the cloud.  This is very convenient, but not necessarily secure.  You have to worry about things like password-cracking, line sniffing, and other security exploits.

      Private
      For better security, the "Private" cloud became an option.  Some large companies wanted all the advantages of a cloud (flexibility, scalability, reduced electric bills, reduced physical space, etc.), but couldn't afford the security risks.  They created their own private clouds, using the exact same technology as a public cloud, but residing entirely in their own building, behind their own firewall, on their own physical servers.

      Smaller companies wanted to do the same thing, but couldn't afford it.  Therefore a market emerged for cloud vendors to offer a private cloud option.   The cloud would physically reside outside of your building, somewhere on the Internet, but would be accessible only via a dedicated line, or only via encrypted communications (SSL, VPN, etc.), and perhaps only from a restricted set of client computers, IP addresses, etc.

      Hybrid
      Recently, the "Hybrid" cloud has emerged as a third option.  With separate public and private clouds, communication can be inefficient, with all traffic being routed from one cloud into your building, through your firewall, and back out to the other cloud.  With public and private parts of a single cloud residing on separate servers at the same cloud vendor, communications can take a secure shortcut.

      --Fred

    3. SaaS -- Software as a Service

      Original Version: 9/29/2009
      Last Updated: 9/30/2009

      SaaS (Software as a Service) includes familiar things like Web e-mail where you don't have to install the software on your own computer, or even keep the data there.  You just connect to the Yahoo, Gmail or other Web site to read your e-mail, send messages, etc.  Also, things like Google Docs and Google Apps where you don't have to install a word processor, spreadsheet program, or other app, on your own computer.  You just connect to the Google Web site to view and edit your documents and spreadsheets.  Additional such services are springing up right and left.

      It is becoming possible to take a step back towards the truly "thin client" -- a computer with a display, keyboard, mouse, fast CPU, lots of RAM, but very little hard drive space.  Almost like the "dumb terminals" of the 70's, but with more processing done locally.  Having no applications stored locally means you don't have to install, upgrade, configure, patch, re-patch, re-install, re-configure, re-re-patch, all the time.  Instead, you just pay a company to provide you with the latest working software at all times, and you go back to doing the things you wanted to be doing, without having to also be your own computer administrator.

      Furthermore, if even your own documents, spreadsheets, e-mail messages, address book, and other data, are stored remotely, not on your own local computer, you don't have to worry about losing them.  No more need to do backups.  If your hard drive crashes, or your house burns down, or your computer is stolen, or you decide to buy a new or additional computer, there's no setup time.  Just power on a new computer, fire up a Web browser to connect to your remote data and applications, and get started.  If you move everything to a remote server "in the cloud", you may be able to get rid of your hard drive entirely, and just boot the computer from a CD or other read-only media.  In that case, there's no need for a virus scanner, since there's nothing to infect with a virus.  Simply re-boot the computer and it's guaranteed to be virus free.

      Since your files are not on the local computer in your home or office, you can access them from any computer in the world. They are just as available to you at home as at work, or at a friend's house, or an Internet cafe, or even your cell phone (which is also a computer, after all).  Others can also access your files, in controlled ways that you choose to allow.  You can allow your friends and colleagues to see your pictures, your calendar, etc.  You can allow them to directly edit documents on which you are collaborating, without having to send files back and forth and keep track of who is currently editing each file.

      This form of Cloud Computing offers the highest level of convenience, service, support, and automation, but the lowest degree of flexibility, portability and control.  What it allows you to do (send e-mail, manage docs, run apps, etc.) is very easy, but you have to do things the way the site was designed for you to do them.  Furthermore, you may get locked into that one site, and not easily be able to switch to another vendor.  So far, most vendors seem to do a pretty good job of supporting import and export, but that's worth checking before you commit too fully to one vendor.

      SaaS is ideal for computer users who don't have the skills and/or interest in computer programming and computer administration.

      --Fred

    4. PaaS -- Platform as a Service

      Original Version: 9/29/2009
      Last Updated: 5/30/2016

      PaaS (Platform as a Service) includes things like Heroku and Google App Engine where you, as a computer programmer but not necessarily a computer administrator, can develop entire applications remotely.

      It offers a lower level of convenience, service, support, and automation, than SaaS, but additional flexibility and control (though perhaps not additional portability).  It doesn't provide you with a suite of useful, general-purpose apps, but rather with a set of tools that are only useful if you are a computer programmer, planning to write your own apps.  In that case, it offers many of the same advantages as SaaS, relieving you of all the administrative chores.  There's no need to install/upgrade/configure the programs that make up the development environment -- compilers, debuggers, editors, etc., and no need to backup your files in case of disk crash or other disaster.  As with SaaS, just power on a new computer, fire up a Web browser to connect to your remote development environment and begin programming.

      As with SaaS, your program source files are not on the local computer in your home or office, so you can access them from any computer in the world.  You can work from anywhere.   Also, you can easily collaborate with other programmers, sharing access to the same set of source files.

      As with SaaS, however, you have to do things the way the site was designed for you to do them.  You may get locked into that one development environment, and not easily be able to switch to another technology or even another vendor supporting the same technology.  Hopefully, the vendors offering these services will be responsive to the needs of their users.  For example, at first Google App Engine supported only Python, but it recently added support for the Java JVM, which supports Java, Groovy, JRuby, Scala, Clojure, AspectJ, and others.  It does not yet support the use of a standard SQL relational database, like MySQL.  Instead, you have to use Google's BigTable, GQL, etc.

      PaaS is best for computer programmers who don't have the skills and/or interest in computer administration, and who are willing to limit themselves to the technologies offered by the PaaS vendor.

      5/30/2016 Update:
      PaaS is getting to be more portable.  For example, Heroku allows you to deploy an entire Python/Django app with no changes.  You can use standard not-necessarily-cloud technologies like Django, MySQL, Git, etc., as well as IaaS technologies like AWS S3, AWS RDS, AWS SES, etc.  No need to do any sys admin or IaaS tasks, like OS and package install/configure/patch, security, virus scans, backups, disk space management, etc.  Under the covers, it makes heavy use of IaaS at AWS.

      --Fred

    5. IaaS -- Infrastructure as a Service

      Original Version: 9/29/2009
      Last Updated: 9/30/2009

      IaaS (Infrastructure as a Service) includes things like Amazon Web Services (AWS) where you, as a computer administrator, can set up a "virtual" standard Linux or Windows server.

      You log in as an admin or "root" user and install/configure any applications and tools you choose.  It is exactly like setting up a physical server by buying an actual computer, plugging it into your own electrical outlet and your own Internet connection or LAN, logging in, and installing/configuring the apps and tools.  The big difference is that your "virtual" server is actually a simulation running, along with potentially many other such simulations, in a large server somewhere else, with the IaaS vendor buying the hardware, paying for the electricity and the high-speed Internet connection, etc.  Also, it is much more efficient than running your own physical server, because the single larger server sits idle much less than your own small server might have, and consumes less electricity and less air-conditioning than all of the small servers would have, etc.

      This is accomplished by minor tweaks to mature technology that has been around since the 1970's.  A "hypervisor" program runs on the big server, managing multiple concurrently running instances of the same or different operating systems.  This is much like the way an operating system manages multiple concurrently running programs, and the way IBM mainframes have always run multiple "virtual machines", one for each logged in user, within the mainframe.  Each virtual server appears to have its own disks, its own RAM, its own CPU, etc., but is really sharing larger disks, more RAM, and a faster CPU with other virtual servers within the hypervisor.

      IaaS offers a lower level of convenience, service, support, and automation, than SaaS or PaaS, but much more flexibility and control, as well as absolute portability.  It doesn't necessarily provide you with a complete suite of useful, general-purpose apps like word processors, spreadsheets, calendars, etc.  Nor does it necessarily provide you with a complete programming environment.  Instead, it provides you with a server that may be preconfigured with a standard Linux, Windows, or other operating system, and the standard collection of applications and tools that come with that operating system.  You may also choose a server that comes preconfigured with additional standard portable tools and applications.  And you can install or write as many other non-standard tools and apps as you like.

      IaaS does offer some of the administrative advantages of SaaS and PaaS.  For example, there may be no need to backup your files in case of disk crash or other disaster.  The vendors typically offer backup or data-duplication services that protect you from any single point of failure.  So, you should never lose anything at the server, and you don't have to store anything on the local client computer.  As with SaaS and PaaS, just power on any client computer, connect to the remote server and begin working.

      However, it typically does not offer the SaaS and PaaS advantage of updating and patching the operating system, applications, and tools.  You choose an initial server configuration, and are responsible from then on for any updates and patches that may be required.  More control (you never get a patch that adds a new bug, just at the time you can least afford it), but more work for you to do yourself.  On the other hand, just like with a physical server, you can choose to use an "automatic updates" type of service to apply updates, patches etc. for you.  And you can hire a service to perform any administrative tasks you don't want to bother with.

      As with SaaS and PaaS, your server does not physically reside in your home or office, so you can access it from any computer in the world.  You can work from anywhere.   Also, you can easily collaborate with other users and programmers, sharing access to the same set of apps, tools, and files.

      The real advantage of IaaS is that you don't have to do things the way the vendor intended.  You have complete control over your own server, and can do whatever you want (subject to rules against spamming, illegal activities, etc.).  You can't get locked into one development environment, one technology or one vendor.  If you don't like the first IaaS vendor you choose, simply set up a server at another vendor (or go back to your own hardware), and copy all of your portable operating system configurations, apps, tools, and files there. 

      This is especially convenient if you are using open source apps and tools written in languages like Java and running on operating systems like Linux, where no "install" is typically needed.  In that case, you simply copy all the files from one server to another and resume your work.  No need to deal with issues of transferring software licenses from one computer to another, or running install programs to re-install the software on the new computer, or manually repeating hours or days of pointing and clicking to re-configure the operating system and applications, and then wondering what you missed that is causing it to behave differently.  Just one single rsync command to copy all the files, and you are on your way.
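
      For example, the rsync copy mentioned above might look something like the following (a minimal sketch; the paths, key file, and server name are placeholders, not a prescription):

            % rsync -avz -e "ssh -i aws_ec2_key_pair.pem" \
                  /usr/local/apps/ \
                  root@ec2-100-101-102-103.compute-1.amazonaws.com:/usr/local/apps/

      The -a flag preserves permissions, ownership, and timestamps, -z compresses the data in transit, and -e tells rsync to tunnel over ssh using the same key pair file you use to log in.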

      IaaS is very simple -- there is nothing new to learn, other than how to launch and terminate server instances.  If you already know how to configure and administer a Linux or Windows server, your skills are completely portable.  You still do everything the way you always did.  Instead of just having a Web mail interface or doc/spreadsheet interface (like SaaS), and instead of having to develop software to fit a specific proprietary framework (like PaaS), you have complete control of the server in a portable way.  Once the server instance is launched, even if you don't have a fully configured server somewhere else to copy from via rsync as described above, you log in as usual, create/delete user accounts as usual, install Tomcat, MySQL, etc. as usual, deploy your apps and other files as usual, etc.

      IaaS is ideal for computer users or programmers who have the skills and interest to do their own computer administration, and who value such control, flexibility and portability.

      --Fred

    6. See Also

      Original Version: 9/29/2009
      Last Updated: 9/30/2009

      For more info, see:

      http://www.webguild.org/2008/07/cloud-computing-basics.php
      - Good definitions of SaaS, PaaS and IaaS

      http://www.webguild.org/category/cloudnomics
      - Aggregated newsfeed of cloud computing articles from various sources

      http://cloudfeed.net/2008/06/03/defining-saas-paas-iaas-etc/
      - Very well-written daily blog on cloud computing

      --Fred

  2. Amazon Web Services

    1. AWS Intro

      Original Version: 8/12/2009
      Last Updated: 1/11/2012

      Amazon Web Services (AWS) is a paid service that allows you to create virtual Linux and Windows servers to replace your physical servers.  It is the type of "Cloud Computing" known as IaaS (Infrastructure as a Service).

      There are several different services, so you can pick and choose among them:

      1. Amazon EC2 (Elastic Compute Cloud) -- Virtual CPUs
      2. Amazon S3 (Simple Storage Service) -- Virtual hard drives
      3. Amazon EBS (Elastic Block Store) -- Accessing S3 drives from EC2 CPUs
      4. etc.

      Inexpensive:
      It can be very inexpensive to replace your physical servers with virtual servers.  For about $62/month (8.5 cents/hour) I got a virtual server that has a faster CPU, more RAM, more disk space, and a much faster Internet connection, than the physical server I replaced.  Once I was sure I liked it, I looked into lowering my price by committing to a one-year ($41/month) or three-year ($32/month) subscription, and chose the three-year.  I was previously paying $120/month for just the DSL line to the physical server, since I needed a fixed IP address and a fast upload speed.  I also had the costs of originally buying the server, occasionally buying new hardware (like a new UPS every couple years), paying for electricity, etc.  When it came time to do a major upgrade to a newer, bigger/better/faster server, I moved to Amazon instead.  It is now blazingly fast, especially the upload/download speeds.

      [1/11/2012 Update]
      You can now get a server for only $15/month (2 cents/hour), and the first year is free.  See AWS Spot and Micro Instances.

      Convenient:
      It can also be very convenient to have a virtual server instead of a physical server.  I no longer have to worry about electric outages, DSL phone line outages, being there in person to reset modems and routers, etc.  That's all handled by Amazon.  My virtual server is really running on one of their powerful physical servers, and they already have redundant power supplies, redundant comm lines, etc.  My uptime is greatly improved, and I don't have to do any of the work.

      New possibilities:
      With virtual servers, new possibilities arise.  Unlike physical servers, it is very easy to create, delete, and clone them.  No more waiting for a physical server to be purchased and shipped to you.  It takes about 60 seconds to create an additional server that is a clone of an existing server.  You can easily create 100 more servers to handle the busy shopping season, if you are an e-commerce site.  You can easily stop your servers over the weekend to save money when they are not needed, since you only pay for the hours that they are running.  Once, when I was installing software on a server, I noticed a particular file and wondered whether it had always been there, or had been created by the install.  To find out, I quickly created another server, looked for the file there, and deleted the server.  Since the additional server was up and running for less than an hour, this experiment cost me less than 10 cents.

      --Fred

    2. Getting Started

      Here is a series of tips on how to set up an Amazon EC2 server with attached Amazon EBS drives.  If you prefer video, instead of these tips, see:
             http://dicjtockkg63v.cloudfront.net/hpc-video-1.mov

      1. Create AWS Accounts

        Original Version: 8/12/2009
        Last Updated: 2/11/2010

        To use Amazon AWS, you must create an AWS account and sign up for each of the services you want to use.

        Go to:
            http://aws.amazon.com
        and sign up for a free AWS account, giving your e-mail address, a password, contact info, etc.  Read the customer agreement.  All of your data and apps belong to you, not to Amazon.  You're not allowed to use it to send spam, attack other computers, or do anything illegal.

        Then, go to:
            http://aws.amazon.com/s3
        and enter a credit card number to sign up for the S3 service that allows you to store data at Amazon.  You can do this, even if you don't use the EC2 service and thus don't have a virtual server.  You can use the storage space to copy files to/from your desktop or any physical servers you may have.  Or you can skip this step, if you want to run a server, but have no need to save persistent files because, for example, all of the Web pages and Java code that runs at the server can easily be re-published there from your desktop.

        Then, go to:
            http://aws.amazon.com/ec2
        and authorize the use of the credit card to sign up for the EC2 service that allows you to create virtual servers at Amazon.

        So far, this is all free.  You have given Amazon a credit card number, but there is no charge except for hours that your virtual servers are running, and the gigabytes of storage you are using -- none yet, in both cases.

        For more info, see:
            http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?using-credentials.html

        --Fred

      2. Set Up Security

        Original Version: 8/14/2009
        Last Updated: 2/11/2010

        Before you create any Amazon EC2 instances, you'll want to set up security for them.
        Note:  The AWS console has recently been enhanced to allow you to set up security on the fly, while creating your first instance, but that wasn't an option when I did it, so I did these explicit steps.

        X.509 Certificate and AWS Access Key Identifier (Optional):
        While creating your AWS accounts, you can optionally take the further steps of creating an "X.509 Certificate", and an "AWS Access Key Identifier".  However, these are needed only if you intend to access the AWS services through the SOAP or REST web service interface, or perhaps by the CLI (command line interface) that uses the SOAP interface behind the scenes.  If you plan to always use the AWS Console web page to manage your virtual servers and your virtual hard drives, you don't need to bother.  I created them, but probably didn't really need to, since Amazon keeps adding more capabilities to the Console, and I can now do everything there.  For more info on creating them, see:
            http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?using-credentials.html

        Firewall:
        In all cases, you must configure the EC2 server's firewall.  It defaults to having no ports open, so you can't login to your server instances, people can't see your Web sites, etc.  Here's what I did:

        1. Login to the Amazon AWS Console (via your e-mail address and AWS password) at:
                  http://aws.amazon.com/console
        2. Click the "Amazon EC2" tab, if necessary
        3. Click "Security Groups"
        4. Click the "default" checkbox
        5. In the bottom pane, specify the ranges of ports ("From Port" through "To Port" inclusive) that you want opened.  For example, to allow ping, http, and ssh:
          1. Ping
            1. Connection Method = Custom
            2. Protocol = ICMP
            3. From Port = -1
            4. To Port = -1
            5. Source = 0.0.0.0/0
            6. Save
          2. HTTP
            1. Connection Method = HTTP
            2. Save
          3. SSH
            1. Connection Method = SSH
            2. Save
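
        If you prefer the command line, the same ports can be opened in the "default" security group with the AWS CLI -- a newer tool that didn't exist when this tip was written.  A rough sketch, assuming you've installed and configured the aws command:

              % aws ec2 authorize-security-group-ingress --group-name default \
                    --protocol icmp --port -1 --cidr 0.0.0.0/0
              % aws ec2 authorize-security-group-ingress --group-name default \
                    --protocol tcp --port 80 --cidr 0.0.0.0/0
              % aws ec2 authorize-security-group-ingress --group-name default \
                    --protocol tcp --port 22 --cidr 0.0.0.0/0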

        SSH Key Pair:

        By default, the ssh server on your server is configured to allow ssh access only via a key pair, not via a root username/password, so you have to create such a key pair.  Otherwise, you won't be able to login to the server to configure it.  Here's how:
        1. Login to the Amazon AWS Console (via your e-mail address and AWS password) at:
                  http://aws.amazon.com/console
        2. Click the "Amazon EC2" tab, if necessary
        3. Click "Key Pairs"
        4. Click "Create key Pair"
        5. Give the key pair a name, like "aws_ec2_key_pair" and save it as a local file on the computer from which you will ssh to the EC2 server instance that you plan to create.
        6. Protect your local key pair file from being publicly viewable (otherwise ssh may object when you try to use it).  For example:
              % chmod 400 aws_ec2_key_pair.pem

        So far, this is still all free.  You have no EC2 servers running and no S3 files stored, so there's no charge.

        For more info, see:
        http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/StartConsole.html
        http://clouddb.info/2009/05/17/using-and-managing-aws-part-3-aws-security/

        --Fred

      3. Launch an EC2 Server Instance

        Original Version: 8/14/2009
        Last Updated: 1/15/2010

        Now, you can launch an instance of a virtual server at Amazon EC2.

        You'll have to choose the size of the server (small, medium, large, etc.), which determines the cost.  You'll also have to choose the "Amazon Machine Image" (AMI) to use as a template for the initial server configuration.  There are AMIs with various operating systems installed (Fedora, Red Hat, CentOS, Ubuntu, Debian, Gentoo, SUSE, OpenSolaris, Windows, etc.).  Some are 32-bit, some are 64-bit, so you have to choose a server size that matches the AMI.  Also, you can't put Windows on a small ($0.085/hour) server -- probably because Amazon has to pay Microsoft for the Windows license, which raises the price by over 40% to $0.12/hour.  There are also AMIs that are preconfigured with useful software packages.  For example, the "Java Web Starter" AMI already has Java, Tomcat, MySQL, and Apache HTTP Server installed and configured.

        To browse the available AMIs (unless you're happy with one of the basic ones, like the one I chose):

        1. Go to the Amazon AWS Web site:
                  http://aws.amazon.com
        2. Click "Resources"
        3. Click "Amazon Machine Images (AMIs)"
        4. Note the name and "AMI ID" of the AMI you want to use.

        To launch an instance of a virtual server, here's what I did:

        1. Login to the Amazon AWS Console (via your e-mail address and AWS password) at:
                  http://aws.amazon.com/console
        2. Click the "Amazon EC2" tab, if necessary
        3. Click "Instances"
        4. Click "Launch Instance" to start the "Launch Instance Wizard"
        5. Click the "Quick Start" tab, if necessary
        6. Click the Select button for "Basic Fedora Core 8" (or whatever AMI you want, using the drop down lists of the "Community AMIs" tab if necessary for AMIs not listed in the "Quick Start" tab).
        7. Specify 1 instance of type "small"
        8. Use default Availability Zone (us-east-1a) since that's where I live.
        9. Use default of "Launch Instances", rather than "Request Spot Instances" because I want a server that runs all the time, not only when the price per hour has dropped to my target price.
        10. Use default "Advanced Instance Options" and don't request CloudWatch Monitoring
        11. Choose the Key Pair I created earlier.
        12. Choose the "default" security group I created earier.
        13. Click "Launch"
        14. Click the checkbox next to the server instance that appears in the instance list, to see its details.  Specifically, note the "Public DNS" name of the server.
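
        For reference, the same launch can also be scripted.  The following is a rough sketch using the newer AWS CLI (which didn't exist when this tip was written); the AMI ID is a placeholder for whichever AMI you chose:

              % aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 \
                    --instance-type m1.small --key-name aws_ec2_key_pair \
                    --security-groups default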

        Once the status of the server shows as "running", you should be able to ping it via its DNS name.

        Now, it is no longer free.  Since you now have a server instance running, your credit card is now being charged $0.085/hour.

        For more info, see:
                http://docs.amazonwebservices.com/AWSEC2/latest/GettingStartedGuide/StartConsole.html

        --Fred

      4. Login to the Instance

        Original Version: 8/14/2009
        Last Updated: 11/7/2009

        Login to your virtual server as you would a regular physical server, via its public DNS name.

        Linux Server:
        To login as the root user to a Linux server from a Linux, Unix, or Mac client, use ssh and specify the name of your local key pair file.  For example:
        % ssh -i aws_ec2_key_pair.pem root@ec2-100-101-102-103.compute-1.amazonaws.com
        To login to a Linux server from a Windows client, use the free PuTTY ssh tool.  Note that PuTTY uses its own .ppk key format, so you'll first need to convert the .pem key pair file using the PuTTYgen utility that comes with PuTTY.

        Windows Server:
        To login to a Windows server, use Windows "Remote Desktop Connection".
        The Windows client can be found at:
        Start Menu | Programs | Accessories | Communications | Remote Desktop Connection
        The Mac client can be downloaded for free from Microsoft, and there are lots of free RDP clients for Linux.

        You should now be able to interact with the server, as you would with any Linux or Windows server, logged in as root.  You can change configurations, add users, start and stop services, etc.  We'll do some of that next.

        For more info on logging in to Amazon EC2 servers, see:

        Feedback from Darin Strait [with Fred's comments]
        1. There is a Linux RDP [Remote Desktop Connection (Protocol)] client.  It comes with Ubuntu 9.04, if I recall correctly.  I have used it to connect to Windows machines with no issues.  You get a full desktop, just like the Windows RDP client.
        2. I would want to read up on the encryption that the RDP protocol does [with the Windows, Linux, Mac, or any client] if I was going out over open Internet, though. [Good point!  Unix's ssh was created in 1995 explicitly to be a secure encrypted alternative to telnet, ftp, etc., but Microsoft never picked it up.  Windows still comes with telnet and ftp clients, but no ssh client, so it forces you to take a risk when connecting to a remote server, unless you download and install PuTTY or some other 3rd party package.  Is RDP any better?]
          ...
          According to Wikipedia (http://en.wikipedia.org/wiki/Remote_Desktop_Protocol): as of version 5.2 [2003], the RDP protocol has some sort of native implementation of RC4 encryption.  Versions prior to 6 [2006] are vulnerable to certain kinds of attack.  With a current version, I can't say if it beats ssh (unlikely), but I do see some references to people running rdp over ssh [for better encryption than RDP alone provides].
        3. There are "RDP" console managers that allow you to organize servers, customize client settings and have added convenience over the standard Windows RDP client.  Look for the "Terminals" program on sourceforge.
        4. If you want to host a full desktop on Amazon, I would suggest FreeNX, which rides on top of ssh.  This isn't the usual X-client and X-server trick that allows you to run the graphics for one app locally but the heavy lifting happens somewhere else.  FreeNX gives you a standard GUI desktop, more like what you get with Windows and RDP.  It's as if you had booted your image locally.  There are builds available for various Linuxes and for Windows.  [I'm not sure what distinction Darin is drawing here.  As I understand it, RDP and X both just remote the mouse, keyboard and display to the local client (what X calls the server, since it serves up graphics), and running the app on the remote server (what X calls the client).  I'm checking with Darin for more info.]
          ...
          Perhaps "trick" wasn't the best word.  It's really just a feature.  It's not my intention to elevate RDP above X (or vice versa).

          When it comes to unix-like machines these days, most users seem to log into their own local machine and then use ssh and X remoting (ssh -X dstrait@mycooldebianbox.com iceweasel) to just use one app at a time from the other machine.  (Assuming that they know that this is possible.)

          This is in opposition to an XDMCP type of login, where you login and then the local machine sort of isn't in the picture, except for keyboard, display and mouse.  My understanding is that FreeNX basically does an XDMCP type of login, but adds ssh, compression and some other stuff that makes it more workable over a laggy link.  Of course, the remoting stuff in X was originally designed when a 10 Mbps network card was a massive amount of bandwidth.

          Even with the popularity of Ubuntu and other distros, it seems that many people take the "this is my computer, I must install programs here" approach.  Maybe they are just conditioned by the way that Windows works, or maybe they just don't realize how handy using a remoted application is.  I used to use iceweasel/firefox to securely browse from my computer at home, rather than set up a real VPN.

        --Fred

      5. Set the hostname

        Original Version: 8/14/2009
        Last Updated: 11/12/2009

        While logged in to the server as root, set the hostname of the server to whatever you want, instead of having to use the default value assigned by Amazon, which is based on the internal IP address used within the Amazon LAN.  For example, to set a Linux Fedora server to be known as "myhost" with a fully qualified domain name of "myhost.mydomain.com":

        % vi /etc/sysconfig/network
          - Edit or add a HOSTNAME line to say:
            HOSTNAME=myhost
        % ifconfig
          - Note the IP address of the server, of the form 10.xx.xx.xx.
        % vi /etc/hosts
          - Add line with the appropriate IP address (not "xx"):
            10.xx.xx.xx myhost.mydomain.com myhost
        % shutdown -r now

        The last line above re-boots the instance, terminating your ssh session.  After a minute or two, you should be able to login as root again.  To confirm that it worked, use the commands:

        % hostname
          - Should show the new hostname
        % hostname -f
          - Should show the new fully qualified domain name

        For more info, see:

        --Fred

      6. Set an Elastic IP Address

        Original Version: 8/15/2009
        Last Updated: 9/7/2009

        Problem: Dynamic IP Address
        When you launch a server instance, it is dynamically assigned an IP address by Amazon.  That IP address will stay assigned to your server as long as your server is running, even if you re-boot the server.  However, if you "terminate" the instance (via the AWS Console, for example), so that you stop paying for it per hour, the IP address is released to the pool of IP addresses available for use by other Amazon customers, and may soon be assigned to a different virtual server.  Later, when you launch a new server instance, the new server is assigned a different IP address, even if you launch the new instance from an exact AMI image of the old server -- one that you created by "bundling" the server into an AMI and uploading it to the Amazon S3 service.

        Solution:  Elastic IP Address
        To preserve the same IP address, allocate an "elastic" IP address from Amazon.  You can allocate one or more such addresses, and can assign one to each of your servers.  When they are not assigned to any server, they cost you one cent ($0.01) per hour.  When they are assigned to a server, they are free, included automatically in the hourly cost of the server.  Elastic IP addresses are addresses that you have reserved for as long as you like.  They will not change if you terminate and create server instances.  You can move them back and forth between servers quickly (seconds), so they take effect much more quickly than DNS server changes, which can take hours to propagate to all of the worldwide DNS servers.

        To allocate an elastic IP address:

        1. Login to the Amazon AWS Console (via your e-mail address and AWS password) at:
                  http://aws.amazon.com/console
        2. Click the "Amazon EC2" tab, if necessary
        3. Click "Elastic IPs"
        4. Click "Allocate New Address"

        The randomly selected elastic IP address is displayed.  Since the elastic IP address is now allocated, but not yet assigned to a server, you will be billed one cent per hour.

        To assign an elastic IP address to a server:

        1. Login to the Amazon AWS Console (via your e-mail address and AWS password) at:
                  http://aws.amazon.com/console
        2. Click the "Amazon EC2" tab, if necessary
        3. Click "Elastic IPs"
        4. Click the checkbox next to the IP address you want to assign.
        5. Click "Associate"
        6. Choose the server


        The elastic IP address is immediately (within seconds) assigned, and can be used by your users world-wide to access your server.  Since it is now assigned to a server, you are no longer billed one cent per hour for it.  The IP address previously assigned to the server is released to the pool of IP addresses available for use by other Amazon customers, unless it was another elastic IP address of yours, in which case it becomes an unassigned elastic IP address for which you are billed one cent per hour, and which you can immediately assign to a different server if you choose.
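
        If you ever want to script this, the newer AWS CLI (which post-dates this tip) offers equivalents.  A sketch only; the instance ID and address below are placeholders:

              % aws ec2 allocate-address
              % aws ec2 associate-address --instance-id i-xxxxxxxx --public-ip 100.101.102.103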

        Each Amazon IP address (elastic or dynamic) has an associated public DNS name.  For example, the IP address 100.101.102.103 might have the public DNS name "ec2-100-101-102-103.compute-1.amazonaws.com".  Therefore, when you change the IP address of your server, the public DNS name changes also.  To avoid this, you may want to assign your own DNS name, as described in the next tip.

        For more info, see:

        --Fred

      7. Point the DNS at the IP Address

        Original Version: 8/16/2009
        Last Updated: 9/26/2010

        Now that you have a stable IP address (an Amazon AWS "elastic" IP address), you can map any DNS name you own to that IP address.  For example, I configured the bristle.com DNS server to map the name bristle.com to my IP address.  Thus, whenever anyone in the world refers to the name bristle.com, they are directed to my server.

        Unfortunately, there is no way to get Amazon's DNS server to stop mapping its generated DNS name to the same IP address.  Therefore, whenever anyone in the world refers to that generated name (for example, ec2-100-101-102-103.compute-1.amazonaws.com), they are also directed to my server.  That's okay.  There's no real problem with multiple names mapping to the same IP address.

        A bigger problem is the fact that there is no way to get Amazon's DNS server to update its reverse DNS mapping.  That is, it still maps my elastic IP address to the generated DNS name ec2-100-101-102-103.compute-1.amazonaws.com, not to my preferred name bristle.com.  Furthermore, since the Amazon DNS server, not the bristle.com DNS server, is responsible for the reverse DNS mapping of all IP addresses owned by Amazon, I can't do the desired reverse DNS mapping in my bristle.com DNS server.  As the current worldwide DNS system works, there can only be one reverse DNS mapping for a given IP address.  Therefore, whenever anyone in the world looks up the IP address of bristle.com, they correctly get my elastic IP address, but if they then check the validity of that mapping by doing the reverse DNS lookup of that IP address, they get the Amazon generated DNS name ec2-100-101-102-103.compute-1.amazonaws.com instead of the expected bristle.com.
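
        You can see the mismatch for yourself with a couple of DNS lookups (a sketch using the placeholder addresses from this tip):

              % host bristle.com
              bristle.com has address 100.101.102.103
              % host 100.101.102.103
              103.102.101.100.in-addr.arpa domain name pointer ec2-100-101-102-103.compute-1.amazonaws.com.

        The forward lookup returns the elastic IP address, but the reverse lookup still returns Amazon's generated name.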

        This is a problem especially if I run an SMTP (e-mail) server on my server.  When I hand off e-mail to other SMTP servers that want to prevent spam, they may check my reverse DNS lookup, and falsely label me a spammer.  I haven't yet found a solution to this, other than configuring my SMTP server to relay through another cooperating non-Amazon SMTP server.  Any better ideas?

        March 2010 Update:  Amazon now allows you to configure your reverse DNS mapping.  See:
             http://aws.typepad.com/aws/2010/03/reverse-dns-for-ec2s-elastic-ip-addresses.html

        --Fred

      8. Relay Mail Through Another SMTP Server

        Original Version: 9/4/2009
        Last Updated: 9/26/2010

        Note:  This step may no longer be necessary.  See the March 2010 Update at Point the DNS at the IP Address.

        You'll want to have an SMTP server running.  This allows your users to send e-mail.  Perhaps more importantly, it allows automatic processes like cron jobs, tripwire, logwatch, etc., to send e-mail.  You'll sometimes want that e-mail to go to other computers, not to local mailboxes.  For example, you may want messages about security to go to your primary e-mail account.

        The biggest problem you'll encounter is that other computers may reject your e-mail as spam, for reasons given in Point the DNS at the IP Address.  My best solution, so far, is to relay all outgoing mail through another SMTP server -- one that is not suspected of being a spam source.  You should be able to use any SMTP server that you have legitimate access to: your ISP, Google Gmail, Yahoo Mail, or whatever. 

        However, be warned that such servers may impose limits on how much mail they will allow you to send.  You'll be configuring your SMTP server to act like an MUA (mail user agent), not an MTA (mail transfer agent), so it will look to the remote SMTP server as though you are personally sending all of the mail.  It may cap you at some number of messages or bytes, per day or month.  Beyond that, it may reject, delay, or even discard messages.  The company may assume your computer is infected with a virus that is sending all the mail, or that you are a spammer, or may simply say you are violating their rules.  Furthermore, they may not allow you to send mail from the multiple usernames on your server, or from usernames at a domain name different from theirs.  Be sure you know their policies before relying on them to relay your mail.  If necessary, pay an "SMTP Mail Relay Service" for the right.  See: http://google.com/search?q=smtp+mail+relay+services.  I use a service that I was already paying to host the incoming e-mail for bristle.com.

        Here's what I did with my SMTP server (sendmail):

        1. Login to your server instance as described in Login to the Instance
        2. Configure sendmail to start automatically at reboot:
              % mv /etc/rc.d/rc4.d/K30sendmail /etc/rc.d/rc4.d/S80sendmail
        3. Edit (via vi or other editor) the file /etc/aliases, adding or editing a line like:
          1. root: your_username@your_domain.com
            This tells sendmail to forward all mail for root to you at the specified e-mail address.  If you prefer, you can simply put your local username here and create a .forward file in your home directory containing "your_username@your_domain.com" as the forwarding address for all of your local mail.  That's what I did.
        4. Edit (via vi or other editor) the file /etc/mail/sendmail.mc, adding or un-commenting the following lines:
          1. define(`SMART_HOST',`smtp_server_name.smtp_domain_name.com')dnl
            This tells sendmail to act as an MUA, relaying through the specified SMTP server.
          2. FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl
            This tells sendmail to follow the access rules in /etc/mail/access.db.
          3. MASQUERADE_AS(`your_domain.com')dnl
            This tells sendmail to pretend that all mail it sends is from users at your_domain.com.
          4. FEATURE(masquerade_envelope)dnl
            This tells sendmail to masquerade not only the contents of the e-mail, but also the "envelope" info used to send it.
          5. FEATURE(`genericstable')dnl
            GENERICS_DOMAIN(`your_domain.com')dnl
            These lines may be necessary if the target SMTP server refuses to relay mail from username "root", as some do.
        5. If you used the genericstable feature above, create the file /etc/mail/genericstable, containing the line:
          1. root some_user@your_domain.com
            This tells sendmail to pretend that mail from root actually came from the specified address.  Note: unlike the syntax of /etc/aliases, there is no colon (:) here.  If the target SMTP server allows you to send from multiple addresses, this "some_user" need not be the same as "your_username" above.
        6. Edit (via vi or other editor) the file /etc/mail/access (not access.db), adding a line like:
          1. AuthInfo:smtp_server_name.smtp_domain_name.com "U:valid_username" "I:valid_username" "P:valid_password" "M:LOGIN PLAIN"
            filling in the full name of the target SMTP server and your username and password for using that server.  Again, depending on the policies of the target SMTP server, "valid_username" may or may not be required to exactly match "some_user", "your_username", or both, and may be required to include "@your_domain.com".  To determine whether to use "LOGIN PLAIN" or something encrypted like "STARTTLS", you can telnet to the mail port (usually 25 or 587) of the target SMTP server, type "EHLO", and then "QUIT".  In response to EHLO, it should list the supported authentication methods.
        7. Protect the file /etc/mail/access, so no one can read it, since it contains a plaintext password:
              % chmod 600 /etc/mail/access
        8. Re-compile the sendmail config files:
              % make -C /etc/mail
              This generates access.db from access, sendmail.cf from sendmail.mc, etc.
        9. Start sendmail:
              % /etc/init.d/sendmail start
        10. If you want to allow incoming e-mail, not only outgoing, open port 25 in the firewall, as described in Set Up Security
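
        To confirm the relay is working, send yourself a test message and watch the mail log (a quick check, assuming the mailx package is installed):

              % echo "Test of relayed mail" | mail -s "Relay test" your_username@your_domain.com
              % tail /var/log/maillog

        The log should show the message being handed off to the smart host rather than delivered directly.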


        For more info, see:

        http://pauldowman.com/2008/02/17/smtp-mail-from-ec2-web-server-setup/
        http://clouddevelopertips.blogspot.com/2009/06/sending-email-from-ec2.html
        http://blog.twinklesprings.com/2008/03/27/remote-mail-delivery-for-google-apps-and-postfix-mail-server/
        http://www.birds-eye.net/article_archive/smtp_mail_relay_services.htm
        http://dbaron.org/linux/sendmail
        http://www.madboa.com/geek/sendmail-genericstable/
        http://does-not-exist.org/roessler/genericstable.html
        http://www.sendmail.org/m4/features.html
        http://www.brandonhutchinson.com/Sendmail_masquerading.html
        http://www.howtoforge.com/configuring-sendmail-to-act-as-a-smarthost-and-to-rewrite-from-address


        --Fred

      9. Mount a Persistent EBS Volume

        Original Version: 8/20/2009
        Last Updated: 2/11/2010

        Now you have a usable server that can be found by anyone in the world based on its name.  That may be as far as you need to go.  If the server doesn't need to have any persistent data, you may be done. 

        For example, I have one Web site that shows static Web pages, as well as pages that are generated programmatically by Java code running in a Tomcat Web server.  However, users of that Web site cannot enter any data to be saved at the site.  They can browse the site, but not fill out forms, make purchases, enter data into a database, etc.  That site has no need to maintain persistent data.  If I were ever to terminate and re-launch the instance, or if the real Amazon server hosting my virtual server were to crash, I would lose all of the local data, but that's not a problem because I can recreate it all by re-publishing all of my static Web pages, Java code, etc.

        However, most servers need persistent data.  You may want configuration changes that you make after launching the instance to be preserved, even if you terminate and re-launch the instance, and even if the real Amazon server crashes while hosting your virtual server.  You may also want to preserve log files from your Apache Web server, Tomcat server, database server, etc.  And to preserve local databases where your Web applications may store data entered by your users.  The solution is to use not only the Amazon EC2 service to run a virtual server, but also the Amazon S3 service to store files, and the Amazon EBS service to make those files accessible to your virtual server.  Since EBS volumes are automatically replicated across multiple Amazon disk drives on multiple Amazon servers at different physical locations, a single disk crash or other local disaster won't lose your data.

        Here's what I did:

        Create an EBS volume and attach it to your server instance:

        1. Login to the Amazon AWS Console (via your e-mail address and AWS password) at:
                  http://aws.amazon.com/console
        2. Click the "Amazon EC2" tab, if necessary
        3. Click "Volumes" in the "Elastic Block Store" section
        4. Click "Create Volume"
        5. Choose the server
        6. Choose a size (I chose 30 GB which costs me $3.00/month)
        7. Choose the same availability zone as your server (in my case, us-east-1a)
        8. Don't choose a snapshot, since you want an empty volume, not a volume containing the files from an existing Amazon snapshot
        9. Click "Create"
        10. Click the checkbox next to the newly created volume
        11. Click "Attach Volume"
        12. Choose the id of your server instance
        13. Accept the recommended device name (in my case, /dev/sdf)
        14. Click "Attach"

        Mount a file system on the EBS volume:

        1. Login to your server instance as described in Login to the Instance
        2. Create a file system on the EBS volume.
          Note:  Do NOT do this step if you are mounting an EBS volume that already contains a file system and files that you want to preserve.  It deletes all files from the EBS volume.
          On my Linux server, I did:
              % mkfs -t ext3 /dev/sdf
          On Windows, run the Disk Management utility via the diskmgmt.msc command (or via Control Panel | Administrative Tools | Computer Management | Storage | Disk Management).
        3. Mount the file system at some location in your existing file system.  On Linux, I mounted mine as a top level directory called "ebs", as:
              % mkdir /ebs
              % mount /dev/sdf /ebs
        4. Arrange for the file system to be automatically mounted at each re-boot.  On Linux, I did:
              % vi /etc/fstab
                  - Append the new line:
                    /dev/sdf /ebs ext3 defaults 0 0
        5. Re-boot to confirm the file system is automatically mounted.  On Linux, I did:
              % shutdown -r now
              Re-login after a minute or so.
              Confirm that the /ebs directory exists.

        Move, copy, or link files and directories to the new file system:

        You should now have a top level directory called /ebs that you can use like any other directory, reading and writing files.  It is stored persistently on an Amazon S3 server, not transiently on your Amazon EC2 virtual server.  You can move or copy files there.  On Linux, you can also create "symbolic links" to files and directories there from other file systems on your virtual server, so that files that would otherwise exist transiently will persist.  For example, you might want to do the following to cause the home directories of users to persist:

                % mkdir /ebs/home
                % mv /home/* /ebs/home
                % rmdir /home
                % ln -s /ebs/home /home


        I did some tests, and found that my virtual server can read/write files on the EBS volume as fast or faster than on its own file system, so performance doesn't seem to be an issue.
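
        If you want to run a similar test yourself, a crude way to measure raw write and read throughput is with dd (a rough sketch; the test file name is arbitrary, and the direct flags bypass the cache so the numbers aren't inflated):

              % dd if=/dev/zero of=/ebs/ddtest bs=1M count=1024 oflag=direct
              % dd if=/ebs/ddtest of=/dev/null bs=1M iflag=direct
              % rm /ebs/ddtest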

        For more info, see:
            http://aws.amazon.com/ebs/
            http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-ebs.html

        --Fred

      10. Configure the Server Instance

        Original Version: 8/24/2009
        Last Updated: 11/25/2009

        Your server is now ready to use.  The next step is to configure it to do what you want it to do.  In my case, I created additional usernames, copied a bunch of my files there, and installed and configured various servers:  Apache HTTP server, Tomcat server, MySQL, etc.

        Much of the configuration may already be done. As I mentioned in Launch an EC2 Server Instance, when choosing an AMI to use as the template for your server's initial configuration, you can choose one that is pre-configured with various combinations of useful software packages. For example, the "Java Web Starter" AMI already has Java, Tomcat, MySQL, and Apache HTTP Server installed and configured.  In fact, I started with that AMI, but later went back and started over so I'd have full control over the configuration and full knowledge of how to re-create it from scratch, if necessary some day.  I've also noticed that the "Getting Started on Fedora Core 8" AMI already has Apache HTTP server installed, unlike the "Basic Fedora Core 8" AMI that I used.

        Here's what I did:

        Update software, add users, configure sudo and sshd:

        1. Update all installed software to latest versions:
                  % yum update
        2. Install the tcsh shell that I prefer:
                  % yum install tcsh
        3. Create a non-root user (for example, "user1"), with tcsh as default shell.
          Note: The last step is optional.  I prefer tcsh, but you may prefer bash (the default) or some other shell.
                  % useradd user1
                  % passwd user1
                  % chsh -s /bin/tcsh user1
        4. Configure sudo to allow user1 to run any command:
                  % visudo
                      - Add the following line to the end of the file:
                        user1 ALL=(ALL) ALL
        5. Configure sshd to disallow logins by any user except user1, and to allow user1 to use password authentication so he can login from a machine where he doesn't have his key pair handy.
          Note: Since the AllowUsers line for root is commented out, this explicitly disallows direct login by root.  Instead, you have to login as user1 (or any other users you add here), and then su or sudo to root.  This is much better for security and accountability, since you can check log files to see which user became root at what times to perform what actions.
                  % vi /etc/ssh/sshd_config
                      - Add the following lines to the end of the file:
                        # AllowUsers root
                        AllowUsers user1
                      - Change no to yes in the following line:
                        PasswordAuthentication yes
                  % /etc/rc.d/init.d/sshd reload
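          Note: It's wise to check the edited file for syntax errors before the reload, and to keep your current ssh session open until you've confirmed that user1 can still log in.  For example (sshd -t prints nothing if the configuration is valid):
                  % /usr/sbin/sshd -t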

        Install Java:

        1. Install runtime and development tools:
                  % yum install java-1.7.0-icedtea
                  % yum install java-1.7.0-icedtea-devel

        Install and configure Apache HTTP server:

        1. Install Apache HTTP server:
                  % yum install httpd
        2. Configure it to start automatically at reboot:
                  % mv /etc/rc.d/rc4.d/K15httpd /etc/rc.d/rc4.d/S85httpd
        3. Enable each user to have his own Web site via a "~user" URL with root directory ~user/public_html, and add support for index pages named index.htm instead of only index.html:
                  % vi /etc/httpd/conf/httpd.conf
                      Comment out the following line:
                      #UserDir disable
                      and uncomment the line:
                      UserDir public_html
                      Add index.htm to the following line:
                      DirectoryIndex index.html index.htm index.html.var
        4. Create a password-protected directory that will be available only to specific people:
                  % mkdir /var/www/html/directory1
                  % vi /var/www/html/directory1/.htaccess
                      Add the following lines to the empty file:
                      AuthType         Basic
                      AuthName        "The name shown on Web password prompts"
                      AuthUserFile     /var/www/html/directory1/.htpasswd
                      Require             valid-user
                  % htpasswd -c /var/www/html/directory1/.htpasswd oneuser
                  % htpasswd /var/www/html/directory1/.htpasswd anotheruser
                  % htpasswd /var/www/html/directory1/.htpasswd athirduser
                  % vi /etc/httpd/conf/httpd.conf
                      Add the following lines:
                      <Directory "/var/www/html/directory1">
                          AllowOverride All
                      </Directory>
        5. Start the server:
                  % /etc/rc.d/init.d/httpd start
        6. Test the Apache HTTP server from any client computer via:
                  http://your_domain_name.com
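
        If you'd rather test from the command line, a few curl requests can confirm both the public pages and the password protection (assuming curl is installed on the client; directory1 is the protected directory created above):

                  % curl -I http://your_domain_name.com/
                  % curl -I http://your_domain_name.com/directory1/
                  % curl -I -u oneuser http://your_domain_name.com/directory1/

        The second request should come back with a 401 (authorization required) status, and the third should succeed after you type oneuser's password.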
                
        Install and configure Tomcat server:

        1. Install Tomcat server, sample Web apps, and admin Web apps:
                  % yum install tomcat5
                  % yum install tomcat5-webapps
                  % yum install tomcat5-admin-webapps
        2. Configure it to start automatically at reboot:
                  % mv /etc/rc.d/rc4.d/K20tomcat5 /etc/rc.d/rc4.d/S89tomcat5
        3. Set the password for the admin Web apps:
                  % vi /usr/share/tomcat5/conf/tomcat-users.xml
                      Add the following line among the other such lines:
                      <user username="admin" password="some_good_password" roles="admin,manager"/>
        4. Start the server:
                  % /etc/rc.d/init.d/tomcat5 start
        5. Open the Tomcat port (default is 8080) in the firewall, as described in Set Up Security.
        6. Test the Tomcat server from any client computer via:
                  http://your_domain_name.com:8080
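
        As with Apache, you can also check it with curl (the second URL assumes the standard Tomcat manager app installed above):

                  % curl -I http://your_domain_name.com:8080/
                  % curl -I -u admin http://your_domain_name.com:8080/manager/html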

        Install and configure MySQL client and server:

        1. Install MySQL client and server:
                  % yum install mysql
                  % yum install mysql-server
        2. Start the server
                  % /etc/rc.d/init.d/mysqld start
        3. Configure it to start automatically at reboot:
                  % mv /etc/rc.d/rc4.d/K36mysqld /etc/rc.d/rc4.d/S88mysqld
        4. Set the password for the root user:
                  % mysql -u root
                  mysql> use mysql;
                  mysql> update user set password = password('some_good_password') where user = 'root';
                  mysql> flush privileges;
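
        To confirm that the password is now required, try connecting again; the first command should be refused, and the second should prompt for the new password:

                  % mysql -u root
                  % mysql -u root -p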


        --Fred

      11. Move Files to the EBS Volume

        Original Version: 8/25/2009
        Last Updated: 8/25/2009

        Now that you've installed a bunch of apps, you may want to move their data to persistent storage, so you won't lose it if you terminate and re-launch the instance, or if an Amazon server crashes while hosting your virtual server.  You don't have to re-install or re-configure the individual apps -- you can simply use symbolic links to redirect them to the EBS file system.  Here's what I did:

        Move tree of Web pages served by Apache HTTP server:

                % mkdir -p -v /ebs/var/www/html
                % mv /var/www/html/* /ebs/var/www/html
                % rmdir /var/www/html
                % ln -s /ebs/var/www/html /var/www/html

        Move Tomcat log files:

                % /etc/rc.d/init.d/tomcat5 stop
               
                % mkdir -p -v /ebs/var/log
                % mv /var/log/tomcat5 /ebs/var/log/tomcat5
                % ln -s /ebs/var/log/tomcat5 /var/log/tomcat5
               
                % /etc/rc.d/init.d/tomcat5 start
               
        Move MySQL database files and logs:

                % /etc/rc.d/init.d/mysqld stop
               
                % mkdir -p -v /ebs/var/lib
                % mv /var/lib/mysql /ebs/var/lib/mysql
                % ln -s /ebs/var/lib/mysql /var/lib/mysql
               
                % mkdir -p -v /ebs/var/log
                % mv /var/log/mysqld.log /ebs/var/log/mysqld.log
                % ln -s /ebs/var/log/mysqld.log /var/log/mysqld.log
               
                % /etc/rc.d/init.d/mysqld start
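
        A quick sanity check after the services are restarted is to confirm that the old paths are now symbolic links, and that the data really lives on the EBS volume:

                % ls -ld /var/www/html /var/log/tomcat5 /var/lib/mysql /var/log/mysqld.log
                % df -h /ebs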


        For more info about moving MySQL databases and other data to an EBS volume, see:

        --Fred
      12. Reserve an Instance to Save Money

        Original Version: 8/26/2009
        Last Updated: 3/12/2012

        Once you've gained enough confidence in the whole virtual server idea, you may want to save some money by "reserving an instance".  Basically, that means that you pay up-front for a 1- or 3-year subscription, and your cost goes way down.  Here are the relative prices:

        Subscription Length      Prepaid Cost    Hourly Cost    Total Cost (to run 24x7)
        0 years ("On-Demand")    $0              $0.08/hour     $59/month
        1 year                   $160/year       $0.024/hour    $31/month
        3 years                  $250/3 years    $0.019/hour    $21/month
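
        The Total Cost column is just the prepaid cost amortized per month, plus the hourly charge for a full month (roughly 730 hours).  For example, for the 1-year reservation: $160/12 + $0.024 x 730 = about $13 + $18 = about $31/month, and for On-Demand: $0.08 x 730 = about $59/month.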


        Whenever you have an instance running, if that instance matches the parameters (size: small, medium, large, etc., operating system type: Linux, etc.) of an instance you have reserved, you get the lower reserved instance price of $0.019/hour (for a 3-year reservation) instead of the higher "on-demand" price of $0.08/hour.  For example, if you reserve an instance, then launch 2 instances that match it, the 1st one gets the lower reserved price, and the 2nd one gets the higher on-demand price.  If you then terminate the 1st instance, the 2nd one switches automatically to the lower reserved price.

        12/2/2011 Update:

        Amazon recently added multiple "utilization levels" of reserved instances, so you can save money by reserving an instance, even if you don't plan to run the instance 24x7.  Here are some details:

        Subscription Length      Utilization    Prepaid Cost       Hourly Cost    Total Cost       Break-even
                                 Level                                            (to run 24x7)    Utilization
        0 years ("On-Demand")    N/A            $0                 $0.08/hour     $59/month        0%
        1 year                   Light          $69/year           $0.039/hour    $36/month        19%
        1 year                   Medium         $160/year          $0.024/hour    $31/month        33%
        1 year                   Heavy          $195/year          $0.016/hour    $28/month        48%
        3 years                  Light          $106.30/3 years    $0.031/hour    $26/month        8%
        3 years                  Medium         $250/3 years       $0.019/hour    $21/month        16%
        3 years                  Heavy          $300/3 years       $0.013/hour    $18/month        31%

        Break-even Utilization is the percent of a year that you have to run the instance to break even with the On-Demand price. I computed it as:

        Number of hours = Fixed cost per year / Difference in hourly rate
        Break-even utilization (percent) = Number of hours / Hours in a year
        For Light and Medium utilization, you only pay the hourly rate for the hours that the instance is running, so the break-even utilizations for 1 and 3 years are:
        19% = 69/1 / (0.08-0.039) / (365.25*24)
        33% = 160/1 / (0.08-0.024) / (365.25*24)
        8% = 106.30/3 / (0.08-0.031) / (365.25*24)
        16% = 250/3 / (0.08-0.019) / (365.25*24)
        For Heavy Utilization, the rules are different.  You have to pay the hourly rate for the entire year, even if you sometimes stop the instance.  The break-even utilizations for 1 and 3 years are:
        48% = (195/1 + 0.016*365.25*24) / (0.08-0) / (365.25*24)
        31% = (300/3 + 0.013*365.25*24) / (0.08-0) / (365.25*24)
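
        If you want to re-check these numbers yourself, bc makes the arithmetic easy.  For example, the 1-year Light and 1-year Heavy cases (the other rows follow the same pattern) print values of roughly 0.19 and 0.48, matching the 19% and 48% in the table:

                % echo "scale=4; (69/1) / (0.08 - 0.039) / (365.25*24)" | bc
                % echo "scale=4; (195/1 + 0.016*365.25*24) / 0.08 / (365.25*24)" | bc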
        These numbers are for a "small" instance.  For "spot" and "micro" instances, which are even cheaper, see AWS Spot and Micro Instances.  For the prices of other sizes, and for more info in general, see:
                http://aws.amazon.com/ec2/reserved-instances/

        --Fred

      13. Manage Your AWS Billing

        Original Version: 10/15/2009
        Last Updated: 11/19/2012

        Amazon helps you predict the amount you will be billed for their services, lets you see the current balance of your next monthly bill, and lets you set up e-mail alerts for when you exceed billing thresholds.

        See the price list for various services at:
               http://aws.amazon.com/ec2/pricing/

        To predict your future bills, even if you have never signed up for an AWS account, use the calculator at:
                http://calculator.s3.amazonaws.com/calc5.html

        To see the current balance (to within a few hours) of your AWS account:

        1. Go to the Amazon AWS main page (not the AWS console) at:
                  http://aws.amazon.com
        2. Click "Your Account"
        3. Click "Account Activity"
        4. Login via your e-mail address and AWS password

        You'll see a page that details your EC2 instance charges ($0.085 or $0.03 per hour, or whatever, with the number of hours so far and a total cost -- always about $22/month for me), plus your EBS storage charges (always $3.00/month for me since I've reserved 30GB of EBS storage). It also shows your EC2 and EBS bandwidth charges and your S3 storage charges, but these have always been negligible for me -- 20 cents per month, or less.

        While at the Account Activity page, click "Billing Alerts" to set it to send you e-mail whenever various types of charges exceed the thresholds you specify.  This allows you to catch an unexpected expense quickly.

        There's also a 3rd party calculator to compare TCO (total cost of ownership) of AWS virtual servers with your own hardware servers in various configurations. See:

        --Fred

      14. Set Up More Security

        Original Version: 2/21/2010
        Last Updated: 11/11/2011

        Once you have your server up and running, hackers will immediately start trying to attack it.  Therefore, you should set up some additional security measures, like:

        1. logwatch
          Sends you mail summarizing the break-in attempts.  You'll be amazed at how many hackers are knocking on your door at all times.  Probably dozens to hundreds of IP addresses per day, making tens of thousands of attempts.
        2. fail2ban
          Detects and blocks break-in attempts within seconds, and sends you mail about each hacker IP address that it blocked.  Once I installed fail2ban, the number of break-in attempts reported by logwatch dropped from tens of thousands to mere dozens, because each attacking IP address was blocked immediately after the 3rd attempt.
        3. tripwire
          Detects successful break-ins by noticing changes to system files, and mails you a report each day.  So far, my daily report has always shown no break-ins.

        For details, see Unix Security.
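
        On a Fedora-style server like the one described above, the installs themselves are quick.  Here's a sketch, assuming all three packages are available in your configured yum repositories (each also needs the configuration described in Unix Security before it does anything useful):

                % yum install logwatch
                % yum install fail2ban
                % yum install tripwire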

        Anyone have any other favorites? Also, does Windows have any active security monitoring features like these?

        --Fred

    3. Clone a server instance

      1. Create an AMI From Your Instance

        Original Version: 2/11/2010
        Last Updated: 9/14/2012

        Update:  For instances booted from an EBS device, not from an "instance store", this is much easier.  Simply select the instance in the AWS Console and choose the "Create Image (EBS AMI)" option from the "Instance Actions" dropdown.  That does it all, including optionally rebooting the instance to make sure a clean copy of all files is created, in which case your instance will be offline for 2-3 minutes.  The rest of this tip describes the technique you have to use for the older "instance store" type of instance.

        ---------------------

        Once you have a server instance configured exactly as you like it, you may want to bundle it into an Amazon Machine Image (AMI), so that you can create more instances exactly like it.

        To do so, you "bundle" the instance, upload it to an S3 bucket, and register it as an AMI.  After that you can launch a new instance of the AMI at any time, and you get an exact clone of the original instance.  This is useful if you ever intend to terminate a configured instance and want to be able to re-launch it quickly.  It is also useful as a way to clone an instance, creating multiple identical copies.

        The AWS console does not yet offer a way to bundle and upload the instance, so I did that from the Linux command line of the instance itself.  (Actually, the "Instance Actions" menu in the "Instances" screen of the AWS Console does offer "Bundle Instance (S3 AMI)", but for me, it is always disabled, so either it's not quite ready yet, or I'm doing something wrong.  Any ideas?)

        Gathering security info:

        The Amazon CLI commands ec2-bundle-vol and ec2-upload-bundle are pre-installed on your Linux server instance, but require local copies of your X.509 certificate file and private key file.  You may already have generated these and stored them on your client computer for use with the CLI or SOAP interface.  If not, here's how:
             http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-certificate
        Once created, they have names like:
             cert-32LettersAndNumbers.pem
             pk-32LettersAndNumbers.pem

        When you create them, you are also told your Amazon "Access Key ID", and your Amazon "Secret Access Key", which you need below.  If you don't remember them, you can look them up:
             http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-access-key

        Finally, you need to know your numeric Amazon "user id", which is the same as your Amazon "account number" or "account id", but without the hyphens.  You can see it at the top right of the "Account Activity" page, as described in:
             Manage Your AWS Billing
        or can look it up directly:
             http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-account-id

        Creating the AMI:


        Here's what I did:

            On my client computer, copying to my server instance:

                - Use scp to copy my X.509 certificate and private key to my home directory (~)
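
                For example (a sketch; the key pair file, login name, and host name below are placeholders for your own values):

                    % scp -i ~/.ssh/my-ec2-keypair.pem cert-32LettersAndNumbers.pem pk-32LettersAndNumbers.pem user1@your_domain_name.com:~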

            On my Linux server instance:

                % sudo mv -i -v ~/cert-32LettersAndNumbers.pem /mnt
                % sudo mv -i -v ~/pk-32LettersAndNumbers.pem /mnt
                % sudo ec2-bundle-vol \
                         -d /mnt \
                         -k /mnt/pk-32LettersAndNumbers.pem \
                         -c /mnt/cert-32LettersAndNumbers.pem \
                         -u MyNumericAmazonUserIdWithoutHyphens \
                         -r i386 \
                         -p MyUniqueBundleName
                % sudo ec2-upload-bundle \
                         -b MyS3BucketName \
                         -m /mnt/MyUniqueBundleName.manifest.xml \
                         -a MyAmazonAccessKeyId \
                         -s MyAmazonSecretAccessKey
                % sudo rm -i -v /mnt/pk-32LettersAndNumbers.pem
                % sudo rm -i -v /mnt/cert-32LettersAndNumbers.pem
                % sudo rm -i -v /mnt/MyUniqueBundleName*
                % sudo rmdir -v /mnt/img-mnt

            At the AWS Console:

                - AMIs | Register new AMI
                   - AMI Manifest Path = MyS3BucketName/MyUniqueBundleName.manifest.xml

        Notes:

        1. If files are changing on your instance while you are creating the bundle, you may get a partial file in the bundle.  It is best to temporarily stop any processes that may be writing to files you are bundling.
        2. The /mnt directory is not included in the bundle, so that's a good place to:
          1. Put your X.509 certificate and private key, which should be kept private and not included in the bundle.
          2. Create the bundle itself (the -d option of ec2-bundle-vol).
        3. Be sure to leave out the 2 hyphens when entering your 12-digit numeric Amazon user id (the -u option of ec2-bundle-vol).
        4. To avoid errors, use dots, not underscores, if you want separators between words of long descriptive bundle and bucket names (the -p option of ec2-bundle-vol, and the -b option of ec2-upload-bundle). 
        5. I used a bundle name that included my hostname and the date (trident.AMI.2010.02.06).
        6. If you don't already have an S3 bucket, ec2-upload-bundle should create one.
        7. Use a globally unique bucket name to avoid conflicting with those of other users.  I followed the Java package naming convention of reversing my domain name as a prefix:  com.bristle....
        8. Creating the bundle can take a long time if you have a lot of large files on your server instance.  My 4 GB (including 2 GB of JPEGs, which don't compress much further) took over 30 minutes.
        9. Uploading the bundle can also take a while.  My upload took 9 minutes.
        10. For security reasons, delete your X.509 certificate and private key from /mnt after bundling.  Even though they are protected with Linux file protections, there is no need to leave them on the server at all.
        11. To save storage space, delete the bundle and its many "part" files from /mnt.

        For more info, see:
                http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?creating-an-ami-s3-linux.html

        --Fred

      2. Create an Instance From Your AMI

        Original Version: 2/11/2010
        Last Updated: 2/11/2010

        Creating a new server instance from your own AMI is just like creating one from any other AMI.  See:
                Launch an EC2 Server Instance

        The one difference I've found is that, for some reason, my own AMIs are shown in the AWS Console without any Launch buttons.  So instead of:
                AWS Console | Amazon EC2 | Launch an Instance
        or:
                AWS Console | Amazon EC2 | Instances | Launch Instance
        I had to do:
                AWS Console | Amazon EC2 | AMIs | select one | Launch

        Once the new instance is created, you can use it just like the old instance.  All usernames, passwords, installed software, configurations, etc., are identical (except that any EBS volumes are replaced with local copies -- see below).

        If you don't plan to terminate the old instance, the new instance may be a little too identical. You should change the host name:
                Set the hostname

        So far, the world doesn't know about the new instance, so it will continue to use the old instance.

        If the old instance kept files on an EBS volume, and you want the new instance to do the same, rather than use its local copies of the files from the EBS volume, skip the rest of this tip and go to:
                Create an Instance From Your AMI With Some Files on EBS
        This is especially important if you plan to terminate the old instance, replacing it with the new instance, and have dynamically changing files (log files, DB files, etc.) on an EBS volume, and don't want any of those changes to be lost during the transition.

        If there was no EBS volume, you can notify the world about the new instance by assigning it an Elastic IP address (use the same one if you are replacing an old instance):
                Set an Elastic IP Address
        and pointing your DNS server to it (already done if you used the same IP address):
                Point the DNS at the IP Address

        --Fred

      3. Create an Instance From Your AMI With Some Files on EBS

        Original Version: 2/11/2010
        Last Updated: 2/11/2010

        If you have some of your files on an EBS volume, as described in:
                 Mount a Persistent EBS Volume
        and
                 Move Files to the EBS Volume
        the steps above:
                 Create an AMI From Your Instance
        and:
                 Create an Instance From Your AMI
        create an instance with a non-EBS copy of the EBS files.  They do not create a 2nd instance that accesses the same EBS volume as the original instance.

        This makes sense since Amazon has a restriction that an EBS volume can be associated with only one instance at a time, presumably to avoid having to deal with concurrent updates to a single file system by multiple servers.

        If you don't want to leave the files of the new instance in non-EBS storage, you can repeat the instructions in:
                 Mount a Persistent EBS Volume
        and
                 Move Files to the EBS Volume
        to create a new EBS volume and move the files there.

        Since I planned to terminate the first instance, and wanted the same EBS volume to be accessed by the new instance, I did this instead:

            On the new instance:

                % sudo mv /ebs /ebs.old

            On the old instance:

                - Check for current activity, warn users of a temporary outage, etc.
                - Stop server processes that use the EBS files so the drive can be unmounted:
                   % sudo /etc/init.d/mysqld stop
                   % sudo /etc/init.d/tomcat5 stop
                   % sudo /etc/init.d/httpd stop
                % sudo umount /ebs

            At the AWS console:

                - EBS | Volumes:
                   - Select the desired volume
                   - Detach Volume
                   - Attach Volume
                     - Instance = the new instance in the same zone
                     - Device = /dev/sdf
                     - Attach

            On the new instance:

                % sudo mkdir /ebs
                % sudo mount /dev/sdf /ebs
                % sudo shutdown -r now

            At the AWS console:

                - Elastic IPs
                   - Select the elastic IP address
                   - Disassociate
                   - Associate
                     - Instance = the new instance

        The new instance now replaces the old instance, accessing the same EBS volume, using the same Elastic IP address (but a different Amazon internal IP address), the same host name, etc.  All dynamic files on the EBS volume (log files, DB files, etc.) are absolutely current.  They were updated on the EBS volume by the old instance until its Tomcat and MySQL servers were stopped.  They were in place on the EBS volume for further updates by Tomcat and MySQL before the Elastic IP address made it possible for the DNS server to find the new instance.  Nothing slipped through the cracks, even if users around the world tried continuously to access the server.  The server at that IP address was cleanly down from when you stopped Tomcat and MySQL on the old instance till you moved the IP address to the new instance.  They'll have seen a brief outage, but no unexpected behavior and no lost data.

        As soon as you are comfortable that the old instance is no longer needed, you can terminate it at the AWS Console. 

        As a sanity check, you may want to compare the copy of /ebs made when the AMI was created with the latest copy on the EBS volume.  If they compare OK, you can delete the old copy.  Here's what I did:

                % sudo diff -r /ebs /ebs.old | less
                % sudo rm -i -v -R /ebs.old

        --Fred

    4. AWS Spot and Micro Instances

      Original Version: 10/24/2010
      Last Updated: 1/11/2012

      Looking for a server that is even cheaper?

      I have my bristle.com corporate server running at Amazon for a total cost of about $32/month, with a dedicated virtual server where I have root access, running Tomcat, MySQL, etc, and hosting my Web site as well as Web apps I've written for clients.  Very cheap!

      If I didn't need to have the site up full-time, I could run it for as little as 8.5 cents/hour for the hours when I run it.

      Before I signed up full-time to get my cost down to about 4.5 cents an hour, I used to run it that way.  Near the end of one month, I got started, created a server, ran it for 2 hours, paused it for a few days, and got a Visa charge of 17 cents for the 2 hours.  Very cheap!

      Recently, Amazon has also announced Spot Instances and Micro Instances.

      A Spot Instance is like an eBay auction.  You bid the price/hour that you are willing to pay for a server instance, and when Amazon has spare capacity that no one is buying for a higher price, they fire up your server, let it run until someone offers more, and then shut it down.  It's a great way to run a background computation that can be started and stopped and has no interactive users, such as month-end financials and other batch processing.

      A Micro instance is a low-power virtual server, with less RAM and disk than any of their regular instances.  It has a virtual CPU that is pretty slow over the long haul, but can handle short bursts at higher speed.  The hourly charge is 2 cents/hour, and like all of the other instances, there is no up-front startup cost, no long-term commitment, etc.

      So, if you just want to play around with a server, and learn about Linux, databases, Web servers, or Cloud Computing, you can fire up an Amazon Micro instance, run it for a while, try a bunch of stuff, and shut it down, for 2 cents/hour.  If you run it for less than an hour in any given month, you really will get a charge on your monthly bill for only 2 cents.  If you leave it running full-time, the total cost is 2 cents/hour -- 48 cents/day -- $14/month -- $175.20/year.  Very cheap!

      Finally, if even 2 cents/hour is more than you want to spend, Amazon has announced a special offer starting in November 2010.  You can get a Micro instance free for a year.

      Free is hard to beat!  Give it a try!

      For more info:
               http://aws.amazon.com/ec2/pricing/

      --Fred

    5. Static Web Site at AWS S3

      Original Version: 11/24/2011

      If all you need is a server to host a static Web site (HTML pages, images, videos, CSS files, JavaScript, but no database or other server-side programming), you don't even need to set up an EC2 server.  Instead, you can just store the Web pages and other files at Amazon S3 (Simple Storage Service), and enable the S3 bucket as a website via the AWS console.  Now anyone can browse your S3 files via any Web browser.  Here's a brief article with more details:
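
      If you prefer a command line to the console, the same setup can also be scripted with the AWS CLI, a separate tool that you install and configure with your access keys.  Here's a sketch, where my-bucket and ./site are placeholders for your own bucket name and local directory of Web files (the bucket and its objects must allow public reads for the site to be visible):

              % aws s3 mb s3://my-bucket
              % aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html
              % aws s3 sync ./site s3://my-bucket/ --acl public-read

      The site is then served from a URL like http://my-bucket.s3-website-us-east-1.amazonaws.com/ (the exact host name depends on the bucket's region).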

      --Fred

    6. AWS Support Pricing

      Original Version: 1/6/2011
      Last Updated: 6/15/2012

      So what about support?

      Now you have a server that is running on the Amazon infrastructure, and something could go wrong that requires help from Amazon.  Hasn't happened to me yet, but it's good to be prepared.

      Amazon offers various levels of free and paid support, and has added more features and cut prices a couple of times so it's been getting better.  Currently:

      Basic -- Free:
        24x7 support by phone or e-mail for billing and "system health issues".
        On-line "resource center", FAQs, forums, and "service health dashboard".
      Developer (was Bronze) -- $49/month:
        12-hour response times.  "1:1 customer support for any AWS question".
        "Access to AWS Support engineers via email through the AWS online support
        center during local business hours to help configure, operate, and maintain
        core AWS services and features."
      Silver -- $100+/month:
        4-hour response times.
      Business (was Gold) -- $100+/month (was $400+/month):
        1-hour response times.  Support engineers available 24/7 via phone, chat or
        email.  "AWS Trusted Advisor" (automated monitoring to identify
        opportunities to save money, improve system performance, or close security
        gaps).  Support for the most common third-party software running on AWS.
      Enterprise (was Platinum) -- $15K+/month:
        15-minute response times.  Account manager.  Periodic business reviews.

      For more info, see:

      --Fred

    7. Move server to new hardware

      Original Version: 9/14/2012
      Last Updated: 9/14/2012

      Sometimes Amazon needs to replace, upgrade or maintain the hardware that hosts your virtual server.  So, they send you an e-mail, a couple weeks in advance, warning you of the event, and telling you what to do.  However, I found their instructions somewhat incomplete, so here's what I did.

      I got mail from Amazon on 9/12/2012, saying:

      Dear Amazon EC2 Customer,

      One or more of your Amazon EC2 instances in the us-east-1 region is scheduled for retirement. The following instance(s) will be shut down after 12:00 AM UTC on 2012-09-28.

         <my instance id>

      We recommend that you launch a replacement for each retiring instance and begin migrating to it. You can do this by stopping and re-starting your instance, or by terminating it and launching a new one in its place.

      You can see more information on the instances scheduled for retirement in the AWS Management Console at:
          https://console.aws.amazon.com/ec2/home?region=us-east-1#s=Events

      For more information about scheduled retirement events, please see Monitoring Scheduled Events in the EC2 user guide:
           http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html

      If your instance's root device is an EBS volume, the instance will be stopped after the retirement date, and you can start it again at any time. You can prevent retirement for this instance by issuing a stop and start from the AWS Management Console. Doing so will migrate your instance to new hardware and help reduce unforeseen downtime. For more information about how to stop and start your instance please see Stopping and Starting Instances in the EC2 user guide:
           http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/starting-stopping-instances.html

      If your instance's root device is an instance store, it will be terminated after the retirement date. We recommend that you launch a replacement instance from your most recent AMI and migrate all necessary data to the replacement instance before this time.

      If you have any questions or concerns, you can contact the AWS Support Team on the community forums and via AWS Premium Support at:
          http://aws.amazon.com/support

      Sincerely,
      Amazon Web Services

      This message was produced and distributed by Amazon Web Services LLC, 410 Terry Avenue North, Seattle, Washington 98109-5210

      Reference: <reference id for this notification>


      I checked the AWS console, and sure enough, there was an "instance-stop" event scheduled for my EBS-based instance with a description "The instance is running on degraded hardware".

      I periodically create an AMI from each server instance anyhow, as a backup mechanism, so I did that at the AWS console via the "Create Image (EBS AMI)" instance action, as described in Create an AMI From Your Instance.  This rebooted the instance, and created the AMI, but did not Stop and Start the instance, so the event was still scheduled.

      I did as Amazon suggested.  I selected the instance and did a "Stop" from the "Instance Actions" dropdown.  I waited 30-60 seconds for the status to change to "stopped", and did a "Start".  About 60 seconds later the status changed to running.  However, I could no longer access the server from my laptop.

      After a couple minutes, it occurred to me to check the external IP address and DNS name of the server, and found that it had been disassociated from the "Elastic IP Address" that I had assigned, and reverted to a dynamic IP address.  That's why I couldn't reach it.  To fix this, I re-associated it with my Elastic IP address, as described in Set an Elastic IP Address.

      Then it occurred to me that if the external IP address had been changed by stopping and restarting the server, most likely the internal IP address had also changed.  So, I updated the /etc/hosts file to contain the new internal IP address, as described in Set the hostname.
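
      By the way, an easy way to confirm the instance's new addresses from inside the instance is the EC2 instance metadata service, which any EC2 instance can query at a fixed local address:

              % curl http://169.254.169.254/latest/meta-data/local-ipv4
              % curl http://169.254.169.254/latest/meta-data/public-ipv4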

      Now, everything is fine.

      Moral of the story:  After doing a Stop/Start, don't forget to re-associate the Elastic IP address, and update /etc/hosts with the new internal IP address.

      --Fred

©Copyright 2007-2021, Bristle Software, Inc.  All rights reserved.