I spun up my 500th EC2 instance yesterday. Or maybe my 600th. I’ve honestly lost count at this point. And somehow, the AWS console UI has changed again since last Tuesday — the buttons moved, the wizard has a new layout, and the default options feel like a trap designed to maximize my monthly bill.
If you just want a raw, headless Ubuntu Linux server to test some code or run a quick script, the official documentation makes it sound like you’re launching a space shuttle. You aren’t. You just need a computer in a data center that you can SSH into.
Here is exactly how I deploy an Ubuntu CLI instance right now, along with the stupid mistakes I still occasionally make.
The AMI Trap and Graviton
When you hit “Launch Instance,” AWS shoves the Amazon Linux AMI in your face. Ignore it. Search for Ubuntu. As of right now, you want Ubuntu 24.04 LTS (Noble Numbat). It’s stable, the packages are fresh enough, and you won’t have to touch the OS upgrade path for years.
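If you script your launches, you can skip the console search entirely: Canonical publishes its latest AMI IDs as public SSM parameters. Here's a sketch, assuming you have the AWS CLI configured; the parameter path follows Canonical's documented scheme, but double-check it against their docs for your region:

```shell
# Look up the newest Ubuntu 24.04 (Noble) AMI ID via Canonical's public
# SSM parameter instead of hunting through the console wizard.
# Swap arm64 for amd64 if you're staying on x86.
aws ssm get-parameter \
  --region us-east-1 \
  --name /aws/service/canonical/ubuntu/server/24.04/stable/current/arm64/hvm/ebs-gp3/ami-id \
  --query 'Parameter.Value' \
  --output text
```

The nice part is that "current" always resolves to the latest build, so a script using this never goes stale the way a hardcoded AMI ID does.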
Here’s where people mess up. By default, AWS selects the x86 architecture. If you’re just running basic Python scripts, Node apps, or Docker containers, switch that architecture dropdown to ARM64.
Why? Because then you can select a t4g.small instance instead of a t3.micro. The “g” stands for Graviton, Amazon’s custom silicon. It’s noticeably faster and costs about 20% less. I am incredibly cheap when it comes to cloud hosting, so I exclusively use Graviton for personal staging environments now. Just remember that if you’re compiling C code from scratch, you’ll be building for ARM.
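Once you're on the box, it takes one command to confirm what you actually launched; nothing here beyond a POSIX shell is assumed:

```shell
# Confirm the instance architecture before you compile anything or pull
# Docker images. Graviton reports aarch64; classic x86 reports x86_64.
arch="$(uname -m)"
case "$arch" in
  aarch64) echo "ARM64 (Graviton): pull arm64 images, compile for ARM" ;;
  x86_64)  echo "x86_64: the default Intel/AMD path" ;;
  *)       echo "Unexpected architecture: $arch" ;;
esac
```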
Security Groups: The 4-Second Rule
I learned this the hard way back in 2018. If you leave Port 22 (SSH) open to 0.0.0.0/0 (the entire internet), automated botnets will start brute-forcing your server within four seconds of it booting up. I actually timed it once. Four seconds.
The AWS wizard will warn you about this, but people ignore the warning because finding your own IP address feels like extra work. Do the extra work.
In the Network settings, under Firewall (security groups), select “Allow SSH traffic from” and change “Anywhere” to “My IP”. AWS will automatically detect your current public IP and lock it down. If your ISP changes your IP tomorrow, yeah, you’ll have to log back into the AWS console and update the rule. It takes thirty seconds. Do it anyway.
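When my IP does rotate, I usually fix the rule from the terminal instead of clicking through the console. A sketch, assuming the AWS CLI is configured; the security group ID below is a placeholder, and in practice you'd also revoke the stale rule first:

```shell
# Re-point the SSH rule at your current public IP.
# SG_ID is a placeholder -- substitute your actual security group ID.
SG_ID="sg-0123456789abcdef0"
MY_IP="$(curl -s https://checkip.amazonaws.com)"
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --protocol tcp --port 22 \
  --cidr "${MY_IP}/32"
```

The `/32` suffix is what locks the rule to exactly one address rather than a range.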
The Dreaded Permission Denied Wall
You created your key pair. You downloaded the .pem file. The instance is running. You open your terminal, paste the SSH command AWS gives you, and hit enter.
And you get smacked with this:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'my-key.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: my-key.pem
Permission denied (publickey).
I still do this. I download the key, forget about file permissions, and OpenSSH 9.6p1 throws an absolute fit because my downloads folder is readable by other user accounts on my Mac.
The fix is literally one command. You have to restrict read access to just yourself.
chmod 400 ~/Downloads/my-key.pem
ssh -i ~/Downloads/my-key.pem ubuntu@ec2-198-51-100-23.compute-1.amazonaws.com
But notice the username is ubuntu. Not root, not admin, not ec2-user. If you try ec2-user on an Ubuntu AMI, AWS will actively reject the connection and tell you to use the right username. Ask me how many times I’ve blindly typed the wrong one out of muscle memory.
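To stop fighting muscle memory, I pin the username and key in ~/.ssh/config so a plain `ssh staging` does the right thing. The host alias and DNS name here are just examples; substitute your instance's public DNS:

```
# ~/.ssh/config
Host staging
    HostName ec2-198-51-100-23.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/Downloads/my-key.pem
    IdentitiesOnly yes
```

IdentitiesOnly stops ssh from offering every key in your agent first, which matters once you've accumulated a few.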
The Post-Boot CPU Spike (My Arch Nemesis)
Here is a massive gotcha that almost nobody documents. You finally get into your fresh Ubuntu 24.04 instance. You type sudo apt update. The terminal hangs. You type it again. Nothing.
You open a second terminal, SSH in, run top (htop isn’t even installed yet on a fresh image), and see the CPU is pegged at 100%.
What’s happening? snapd (the daemon behind Ubuntu’s snap packages) and unattended-upgrades are both fighting for system resources to update the server the exact millisecond it boots. On a tiny instance with limited CPU credits, these background processes basically strangle the machine for the first three to five minutes of its life.
If you’re just using this box as a disposable Docker host, this is infuriating. I usually kill the auto-upgrades immediately so I can actually work.
# Stop the background apt lock from ruining your day
sudo systemctl stop unattended-upgrades
sudo systemctl disable unattended-upgrades
# Now you can actually install what you need
sudo apt update
sudo apt install -y docker.io docker-compose-v2
I benchmarked this last month. Waiting for the default Ubuntu background tasks to finish on a t4g.nano took 4 minutes and 12 seconds before I could install Docker. Disabling the service let me do it in 18 seconds. If you’re automating your deployments with Terraform or Ansible, you absolutely need to handle that apt lock, or your pipeline will time out and fail.
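For the Terraform/Ansible case, the cleanest fix I know is apt’s own lock timeout rather than a sleep-and-retry loop. DPkg::Lock::Timeout has been in apt since roughly Ubuntu 20.04, but verify it against your image before baking it into user-data:

```shell
# Make apt wait up to 5 minutes for the dpkg lock instead of dying
# instantly when unattended-upgrades is holding it.
sudo apt-get -o DPkg::Lock::Timeout=300 update
sudo apt-get -o DPkg::Lock::Timeout=300 install -y docker.io docker-compose-v2
```

With this, the pipeline just waits out the boot-time upgrade instead of racing it.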
That’s it. You have a server. It’s secure. It’s cheap. Just remember to actually terminate the instance when you’re done playing with it, or you’ll get a $4 invoice from AWS next month that costs more to process than the compute time was actually worth.