Using sudo, ssh, rsync on the Official Ubuntu Images for EC2

The official Ubuntu images for EC2 do not allow ssh directly to the root account, but instead provide access through a normal “ubuntu” user account. This practice fits the standard Ubuntu security model available in other environments and, admittedly, can take a bit of getting used to if you are not familiar with it.

This document describes how to work inside this environment using the “ubuntu” user and the sudo utility to execute commands as the root user when necessary.


First, to connect to an instance of an official Ubuntu image for EC2, you need to ssh to it as “ubuntu” instead of as “root”. For example:

ssh -i KEYPAIR.pem ubuntu@HOSTNAME 

Note that existing EC2 documentation and tools like the EC2 Console and Elasticfox assume that all EC2 instances accept connections to root, so you’ll have to remember this change.

If you accidentally ssh to root on one of the official Ubuntu images, you will see a short message reminding you to connect as “ubuntu” instead.


When logged in under the “ubuntu” user, you can run commands as root using the sudo command. For example:

sudo apt-get update && sudo apt-get upgrade -y 

Note that sudo resets most environment variables before running the command. If you need your environment preserved, use sudo’s -E option.
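As a sketch of the difference (FOO is a hypothetical variable used only for illustration; whether -E preserves it can also depend on the sudoers security policy):

```shell
# FOO is a hypothetical variable used only for this illustration.
export FOO=hello

# Without -E, sudo resets the environment, so FOO is typically empty:
sudo sh -c 'echo "FOO=$FOO"'

# With -E, the invoking environment is passed through (policy permitting):
sudo -E sh -c 'echo "FOO=$FOO"'
```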


The official Ubuntu images for EC2 are configured so that no password is required for sudo from the “ubuntu” user. Yes, this sacrifices a bit of security from standard Ubuntu operation, but any published hardcoded password would be more insecure, and randomly assigned passwords quickly become unmanageable when running many instances, in addition to preventing some types of remote automation described below.

Note that this policy does not allow logging in to the “ubuntu” user without a password. The password is disabled for login and not required for sudo. Login is done through the usual EC2 ssh keypair management as described above.

If you wish to increase security in this area, set the ubuntu user password and adjust the /etc/sudoers file.

sudo passwd ubuntu
sudo perl -pi -e 's/^(ubuntu.*)NOPASSWD:(.*)/$1$2/' /etc/sudoers

Make sure you set the password successfully first and remember it. If you change the sudoers file first, you will be stuck with no root access on that instance.


If you want to switch to a root shell once you are logged in to the “ubuntu” user, simply use the command:

sudo -i 

This is generally not recommended: you lose the per-command logging that sudo provides for root activity, and you risk running commands as root when you did not intend to.


To automate a remote command as root from an external system, connect to “ubuntu” and use sudo:

ssh -i KEYPAIR.pem ubuntu@HOSTNAME 'sudo apt-get install -y apache2' 
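One gotcha with this pattern: output redirection is performed by the remote shell running as “ubuntu”, not by sudo. To write a file only root can write, pipe through sudo tee instead (a sketch; the file path is just an example):

```shell
# Wrong: the > redirection runs as "ubuntu" and fails with permission denied.
# ssh -i KEYPAIR.pem ubuntu@HOSTNAME 'sudo echo hello > /root/hello.txt'

# Works: tee runs under sudo, so the write itself happens as root.
ssh -i KEYPAIR.pem ubuntu@HOSTNAME \
  'echo hello | sudo tee /root/hello.txt > /dev/null'
```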


Now for the trickiest one. Sometimes you want to rsync files from an external system to the EC2 instance and you want the receiving end to be run as root so that it can set file ownerships and permissions correctly.

rsync -Paz --rsh "ssh -i KEYPAIR.pem" --rsync-path "sudo rsync" LOCALFILES ubuntu@HOSTNAME:REMOTEDIR/ 

The --rsh option specifies how to connect to the EC2 instance using the correct keypair. The command in the --rsync-path option makes sure rsync is running as root on the receiving end.

The -Paz options (-P to show progress and keep partial transfers, -a for archive mode, -z for compression) are just some of my favorites. They aren’t a key part of this rsync approach.

In order for this method to work, the “ubuntu” user must be able to sudo without a password (which is the default on the official Ubuntu images as described above).
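The same trick works in the other direction. To pull files that only root can read from the instance down to your local machine, run the remote rsync under sudo as before (a sketch using the same placeholders):

```shell
# The remote rsync runs under sudo, so it can read root-only files.
rsync -Paz --rsh "ssh -i KEYPAIR.pem" --rsync-path "sudo rsync" \
  ubuntu@HOSTNAME:REMOTEDIR/ LOCALDIR/
```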


Finally, if you wish to circumvent the Ubuntu security standard and revert to the old practice of allowing ssh and rsync as root, this command will open it up for a new instance of the official Ubuntu images:

ssh -i KEYPAIR.pem ubuntu@HOSTNAME 'sudo cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/' 

This is not recommended, but it may be a way to get existing EC2 automation code to continue working until you can upgrade to the sudo practices described above.
