Since I’ve already created an image I liked in the us-west-1 region, I wanted to reuse it in other regions. It turns out there is no mechanism within Amazon EC2 to do that (see "How do I launch an Amazon EBS volume from a snapshot across Regions?"). I did find one post that talked a bit about how it can be done "out of band", so I figured I would give that a try instead of doing a full recreation in the new region.
Prepare the Source Instance and Volume
Start an instance in the source region
Here I’ll start an instance in us-west-1a, where I have the EBS image I want to copy. In this case I’ll launch the same image I want to copy, but it could be any image as long as it’s in the same region as the EBS AMI image that is to be copied. We are, however, going to use the instance info to figure out some parameters for creating the new AMI, so if you don’t make the source instance the same AMI as the one you are copying, you will need to supply some of those parameters yourself.
You can use a tool like ElasticFox to create the instances below. Here we’ll do it with the command line tools.
Set some Shell source variables on host machine
To make these instructions usable as a cookbook, we’ll set some shell variables once; all the later instructions use the variables, so you can just cut and paste them into your shell.
src_keypair=id_runa-staging-us-west
src_fullpath_keypair=~/.ssh/runa/id_runa-staging-us-west
src_availability_zone=us-west-1a
src_instance_type=m1.large
src_region=us-west-1
src_origin_ami=ami-1f4e1f5a
src_device=/dev/sdh
src_dir=/src
src_user=ubuntu
Start up the source instance and capture the instanceid
src_instanceid=$(ec2-run-instances \
  --key $src_keypair \
  --availability-zone $src_availability_zone \
  --instance-type $src_instance_type \
  $src_origin_ami \
  --region $src_region | \
  egrep ^INSTANCE | cut -f2)
echo "src_instanceid=$src_instanceid"

# Wait for the instance to move to the "running" state
while src_public_fqdn=$(ec2-describe-instances --region $src_region "$src_instanceid" | \
  egrep ^INSTANCE | cut -f4) && test -z "$src_public_fqdn"
do echo -n .; sleep 1; done
echo src_public_fqdn=$src_public_fqdn
This should loop till you see something like:
$ echo src_public_fqdn=$src_public_fqdn
src_public_fqdn=ec2-184-72-2-93.us-west-1.compute.amazonaws.com
Create a volume from the EBS AMI snapshot
Normally, when an EBS AMI instance starts, a volume is automatically created from the snapshot associated with the AMI. Here we create the volume from the snapshot ourselves.
# Get the volume id
ec2-describe-instances --region $src_region "$src_instanceid" > /tmp/src_instance_info
src_volumeid=$(egrep ^BLOCKDEVICE /tmp/src_instance_info | cut -f3)
echo $src_volumeid

# Now get the snapshot id and size from the volume id
# (the snapshot id is field 4 and the size is field 3 of the VOLUME line)
ec2-describe-volumes --region $src_region $src_volumeid | egrep ^VOLUME > /tmp/volume_info
src_snapshotid=$(cut -f4 /tmp/volume_info)
echo $src_snapshotid
src_size=$(cut -f3 /tmp/volume_info)
echo $src_size

# Create a new volume from the snapshot
src_volumeid=$(ec2-create-volume --region $src_region --snapshot $src_snapshotid -z $src_availability_zone | egrep ^VOLUME | cut -f2)
echo $src_volumeid
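The snapshot id and size come out of the tab-separated VOLUME line by field number. A quick way to double-check the field positions is to run cut over a sample line (the sample below and its field numbers are my reading of the 2010-era ec2-api-tools output; verify against your own ec2-describe-volumes output):

```shell
# Build a sample tab-separated VOLUME line like ec2-describe-volumes prints,
# then confirm which fields hold the size and the snapshot id.
line="$(printf 'VOLUME\tvol-6e7fee06\t15\tsnap-12345678\tus-west-1a\tavailable\t2010-03-14T09:02:58+0000')"
printf '%s\n' "$line" | cut -f3   # prints the size: 15
printf '%s\n' "$line" | cut -f4   # prints the snapshot id: snap-12345678
```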
Attach the EBS Volume of the AMI you want to copy
Now we’ll attach the new volume to the running source instance as a plain block device. In this case we’re using the same image as we launched, but it doesn’t have to be the same image or even the same architecture.
ec2-attach-volume --region $src_region $src_volumeid -i $src_instanceid -d $src_device
You should see something like:
ATTACHMENT vol-6e7fee06 i-fb0804be /dev/sdh attaching 2010-03-14T09:02:58+0000
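The attachment starts out in the "attaching" state. Before touching the device you can poll until it flips, using the same wait-loop pattern as the instance startup above. The field number below is my assumption, checked against the sample ATTACHMENT line:

```shell
# The status is the 5th tab-separated field of the ATTACHMENT line.
line="$(printf 'ATTACHMENT\tvol-6e7fee06\ti-fb0804be\t/dev/sdh\tattaching\t2010-03-14T09:02:58+0000')"
printf '%s\n' "$line" | cut -f5   # prints: attaching

# In practice, loop on the live output instead of a sample line:
#   while ec2-describe-volumes --region $src_region $src_volumeid | \
#     egrep ^ATTACHMENT | cut -f5 | grep -q attaching; do echo -n .; sleep 1; done
```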
Prepare the Destination Instance and Volume
Set some Shell destination variables on host machine
You’ll want to tune these to your needs. This example makes the destination the same size as the source, but you could make the destination an arbitrary size as long as the source data fits.
dst_keypair=runa-production-us-east
dst_fullpath_keypair=~/.ssh/runa/id_runa-production-us-east
dst_availability_zone=us-east-1b
dst_instance_type=m1.large
dst_region=us-east-1
dst_origin_ami=ami-7d43ae14
dst_size=$src_size
dst_device=/dev/sdh
dst_dir=/dst
dst_user=ubuntu
Start up the destination instance and capture the dst_instanceid
dst_instanceid=$(ec2-run-instances \
  --key $dst_keypair \
  --availability-zone $dst_availability_zone \
  --instance-type $dst_instance_type \
  $dst_origin_ami \
  --region $dst_region | \
  egrep ^INSTANCE | cut -f2)
echo "dst_instanceid=$dst_instanceid"

# Wait for the instance to move to the "running" state
while dst_public_fqdn=$(ec2-describe-instances --region $dst_region "$dst_instanceid" | \
  egrep ^INSTANCE | cut -f4) && test -z "$dst_public_fqdn"
do echo -n .; sleep 1; done
echo dst_public_fqdn=$dst_public_fqdn
This should loop till you see something like:
$ echo dst_public_fqdn=$dst_public_fqdn
dst_public_fqdn=ec2-184-73-71-160.compute-1.amazonaws.com
Create an empty destination volume
dst_volumeid=$(ec2-create-volume --region $dst_region --size $dst_size -z $dst_availability_zone | egrep ^VOLUME | cut -f2)
echo $dst_volumeid
Attach the empty destination volume
Now we’ll attach the empty destination volume to the running destination instance as a plain block device.
ec2-attach-volume --region $dst_region $dst_volumeid -i $dst_instanceid -d $dst_device
You should see something like:
ATTACHMENT vol-450ed02c i-65be1f0e /dev/sdh attaching 2010-03-14T09:39:20+0000
Copy the data from the Source Volume to the Destination Volume
Copy your credentials to the source machine
We’re going to use rsync to copy from the source to the destination, tunneled through ssh. This eliminates any issues with EC2 security groups, but it does mean you have to copy an ssh private key to the source machine so it can reach the destination machine via ssh.
scp -i $src_fullpath_keypair $dst_fullpath_keypair ${src_user}@${src_public_fqdn}:.ssh
Mount the source and destination volumes on their instances
ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn} sudo mkdir -p $src_dir
ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn} sudo mount $src_device $src_dir
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mkfs.ext3 -F $dst_device
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mkdir -p $dst_dir
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mount $dst_device $dst_dir
Get the FQDN of the Amazon internal address of the destination machine
We’re assuming that the destination instance was launched from the us-east equivalent of the us-west source base AMI, so we can reuse its kernel and ramdisk to build the new AMI later.
ec2-describe-instances --region $dst_region "$dst_instanceid" > /tmp/dst_instance_info
dst_internal_fqdn=$(egrep ^INSTANCE /tmp/dst_instance_info | cut -f5); echo $dst_internal_fqdn
dst_kernel=$(egrep ^INSTANCE /tmp/dst_instance_info | cut -f13); echo $dst_kernel
dst_ramdisk=$(egrep ^INSTANCE /tmp/dst_instance_info | cut -f14); echo $dst_ramdisk
Commands to run on the source machine
You could do the rsync by logging into the source machine and running the commands below. I tried to drive the whole thing with remote ssh commands, but the first ssh from source to destination has to be interactively authenticated, which was a blocker for me. Instead, you can log into the source machine and sudo ssh to the destination machine once (you have to use sudo ssh because the rsync has to run with sudo, and known hosts are stored separately for the sudo user and the regular user).
I’ll show both ways.
Here’s how you can ssh to the source machine:
ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn}
Set up some shell variables on the source machine shell environment
# This is the key you just copied over
dst_fullpath_keypair=~/.ssh/id_runa-production-us-east
dst_keypair=runa-production-us-east
# You need to use the public FQDN of the destination since it's cross-region
src_public_fqdn=ec2-184-72-2-93.us-west-1.compute.amazonaws.com
dst_public_fqdn=ec2-184-73-71-160.compute-1.amazonaws.com
dst_user=ubuntu
src_user=ubuntu
src_dir=/src
dst_dir=/dst
Do the rsync
We are using these rsync options:
- P Keep partially transferred files and show progress
- H Preserve hard links
- A Preserve ACLs
- X Preserve extended attributes
- a Archive mode
- z Compress file data during transfer
rsync -PHAXaz --rsh "ssh -i /home/${src_user}/.ssh/id_${dst_keypair}" --rsync-path "sudo rsync" ${src_dir}/ ${dst_user}@${dst_public_fqdn}:${dst_dir}/
If you want to do the rsync from your local host
I found that I still had to log into the source instance
ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn}
and then on the source instance do:
sudo ssh -i /home/${src_user}/.ssh/id_${dst_keypair} ${dst_user}@${dst_public_fqdn}
and accept the "The authenticity of host ..." prompt the first time, so the destination host ends up in the sudo user's known hosts.
Then back on your local host you can issue the remote command that will run on the source instance and rsync to the destination host:
ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn} sudo "rsync -PHAXaz --rsh \"ssh -i /home/${src_user}/.ssh/id_${dst_keypair}\" --rsync-path \"sudo rsync\" ${src_dir}/ ${dst_user}@${dst_public_fqdn}:${dst_dir}/"
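If the interactive host-key prompt is the only thing blocking a fully scripted run, ssh's StrictHostKeyChecking option can skip it. This is not what my workflow above does, and it trades away host-key verification, so treat the construction below as a sketch:

```shell
# Build the --rsh command that rsync would use, accepting unknown host
# keys automatically instead of prompting on first connection.
src_user=ubuntu
dst_keypair=runa-production-us-east
rsh_cmd="ssh -o StrictHostKeyChecking=no -i /home/${src_user}/.ssh/id_${dst_keypair}"
echo "$rsh_cmd"
# Then: rsync -PHAXaz --rsh "$rsh_cmd" --rsync-path "sudo rsync" ...
```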
Complete the new AMI from your Local Host
The remaining steps will be done back on your local host. This assumes that the shell variables we set up earlier are still there.
Some Cleanup for new Region
Ubuntu ties its apt sources to the region you are in, so we have to update the apt sources for the new region.
We’ll do this by chrooting into the /dst mount and running commands as if they were being run on an AMI built from the /dst image. We might as well update to the latest packages at the same time.
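The policy-rc.d stub used below works because Debian/Ubuntu's invoke-rc.d treats exit status 101 as "action forbidden", so package upgrades inside the chroot won't try to start services. You can verify the stub behaves as intended locally:

```shell
# Create the stub and confirm it exits with status 101.
cat <<'EOF' > /tmp/policy-rc.d
#!/bin/sh
exit 101
EOF
chmod 755 /tmp/policy-rc.d
/tmp/policy-rc.d || status=$?
echo "exit status: ${status}"   # prints: exit status: 101
```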
# Allow network access from the chroot environment
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo cp /etc/resolv.conf $dst_dir/etc/

# Set up /proc and /dev/pts inside the chroot
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir mount -t proc none /proc
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir mount -t devpts none /dev/pts

# Keep services from starting inside the chroot during package installs
cat <<EOF > /tmp/policy-rc.d
#!/bin/sh
exit 101
EOF
scp -i $dst_fullpath_keypair /tmp/policy-rc.d ${dst_user}@${dst_public_fqdn}:/tmp
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mv /tmp/policy-rc.d $dst_dir/usr/sbin/policy-rc.d
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo chmod 755 $dst_dir/usr/sbin/policy-rc.d

# This has to be done to set up the locale & apt sources
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} DEBIAN_FRONTEND=noninteractive sudo -E chroot $dst_dir /usr/bin/ec2-set-defaults

# Update the apt sources
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} DEBIAN_FRONTEND=noninteractive sudo -E chroot $dst_dir apt-get update

# Optionally upgrade the packages
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} DEBIAN_FRONTEND=noninteractive sudo -E chroot $dst_dir apt-get dist-upgrade -y

# Optionally update your gems
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir gem update --system
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir gem update
Clean up from the building of the image
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo chroot $dst_dir umount /proc
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo chroot $dst_dir umount /dev/pts
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo rm -f $dst_dir/usr/sbin/policy-rc.d
There are a few more shell variables we’ll need
I got the kernel and ramdisk from the destination instance, since it was launched from the alestic.com us-east-1 equivalent of the us-west-1 base AMI we are copying from.
# Some info for creating the name and description
codename=karmic
release=9.10
tag=server

# Make sure you set this as appropriate (64-bit here)
arch=x86_64

# The kernel (aki) and ramdisk (ari) depend on the base AMI you used and
# differ between regions. These were captured from the destination instance.
ebsopts="--kernel=${dst_kernel} --ramdisk=${dst_ramdisk}"
ebsopts="$ebsopts --block-device-mapping /dev/sdb=ephemeral0"

now=$(date +%Y%m%d-%H%M)

# Make this specific to what you are making
chef_version="0.8.6"
prefix=runa-chef-${chef_version}-ubuntu-${release}-${codename}-${tag}-${arch}-${now}
description="Runa Chef ${chef_version} Ubuntu $release $codename $tag $arch $now"
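As a sanity check on the name assembly (AMI names are restricted in length and character set, so it's worth eyeballing the result), here is the same string construction with the timestamp pinned to a fixed value so the output is reproducible:

```shell
# Same construction as above, with $(date +%Y%m%d-%H%M) replaced by a
# fixed value for a reproducible result.
codename=karmic release=9.10 tag=server arch=x86_64
chef_version="0.8.6"
now=20100314-0939
prefix=runa-chef-${chef_version}-ubuntu-${release}-${codename}-${tag}-${arch}-${now}
echo "$prefix"
# prints: runa-chef-0.8.6-ubuntu-9.10-karmic-server-x86_64-20100314-0939
```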
Snapshot the Destination Volume and register the new AMI in the destination region
# Unmount the destination filesystem
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo umount $dst_dir

# Detach the destination volume (it may speed up the snapshot)
ec2-detach-volume --region $dst_region "$dst_volumeid"

# Make the snapshot
dst_snapshotid=$(ec2-create-snapshot --region $dst_region -d "$description" $dst_volumeid | cut -f2)

# Wait for the snapshot to complete. This can take a while
while ec2-describe-snapshots --region $dst_region "$dst_snapshotid" | grep -q pending
do echo -n .; sleep 1; done

# Register the destination snapshot as a new AMI in the destination region
new_ami=$(ec2-register \
  --region $dst_region \
  --architecture $arch \
  --name "$prefix" \
  --description "$description" \
  $ebsopts \
  --snapshot "$dst_snapshotid" | cut -f2)
echo $new_ami
Conclusion
You should now have a shiny new AMI in your destination region. Use the value of $new_ami to start a new instance in your destination region with your favorite tool or technique.
I’m glad my post was helpful! I was too lazy to actually write down instructions as detailed as yours, but will now update my original blog entry with a link to this one!
Michael
Thanks for writing this, but it should definitely be made easier by Amazon!
We created this online tool that copies EC2 snapshots from one region to another: https://cloudyscripts.com/tool/show/4
The only step missing is registering the snapshot as an AMI.
Ylastic lets you do this entire process very simply through a UI. You provide a few parameters and click a button: http://blog.ylastic.com/super-simple-ami-migration-between-ec2-region
Looks like this will work only for Linux EBS AMIs, not Windows AMIs. Is that correct? I need to move a Windows EBS AMI; any other suggestions?
I’ve tried Cloudy_Scripts; it also does NOT work for Windows AMIs.
We don’t use command line.
http://books.google.com/books/about/In_the_Beginning_was_the_Command_Line.html?id=OmnF5MGRNn8C
There is a Perl script that will automate this from the command line:
http://search.cpan.org/~lds/VM-EC2-1.10/bin/migrate-ebs-image.pl
The command looks like this:
migrate-ebs-image.pl --from us-east-1 --to ap-southeast-1 ami-123456