I'll say it up front: I stopped trusting cloud dashboards about three years ago. You know the drill: you click “Deploy,” the little spinner spins for 45 seconds, and then… nothing. Or worse, a generic “Deployment Failed” message that tells you absolutely nothing about why your static site is currently a digital ghost.
So, I went back to basics. Last Tuesday, I spun up a fresh VM on Azure to host a simple documentation site. No fancy containers, no serverless edge functions, just a raw Ubuntu box and a terminal. And honestly? It was faster. Much faster.
But here’s the thing: you don’t need to memorize the entire Linux manual to run a server. I certainly haven’t. In fact, out of the thousands of available commands, I rely on a tiny, repetitive subset to get a web server from “empty box” to “live traffic.”
Here is exactly what I typed into my terminal, mistakes and all.
Getting In Without Passwords
If you are still typing passwords to log into your servers in 2026, stop. Seriously. It’s insecure and, more importantly, it’s annoying. I always set up SSH keys immediately.
The first time I tried to connect to this new VM, I got the dreaded Permission denied (publickey). I’d forgotten to specify the key file path. Classic morning brain fog.
# The command that actually works
ssh -i ~/.ssh/my_project_key.pem azureuser@20.x.x.x
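To save myself from typing that -i flag forever, I also drop a host entry into ~/.ssh/config on my laptop. The docs-vm alias here is just something I made up for this post; swap in whatever name you like.

# ~/.ssh/config on my local machine (the alias is hypothetical)
Host docs-vm
    HostName 20.x.x.x
    User azureuser
    IdentityFile ~/.ssh/my_project_key.pem

# From then on, logging in is just:
ssh docs-vm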
Once I’m in, the first thing I do is check who I am and where I am. Paranoia? Maybe. But I once wiped the wrong database because I thought I was on staging when I was actually on prod. Never again.
whoami
hostname
# Check the OS version just to be sure
cat /etc/os-release
This box was running Ubuntu 24.04.1 LTS. Good. Stable.
The “Update Everything” Ritual
Fresh VMs are never actually “fresh.” They’re usually images captured weeks or months ago. The first thing I run is the update sequence. On Ubuntu 24.04, the needrestart tool is aggressive — it pops up a pink/purple screen asking which daemons to restart. I usually just tab to “Ok” and pray, but strictly speaking, you should read it.
sudo apt update && sudo apt upgrade -y
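If the needrestart prompt annoys you as much as it annoys me, it can be told to restart services automatically instead of asking. If I remember right, the setting lives in /etc/needrestart/needrestart.conf, or you can set it for a single run with an environment variable:

# One-off: let needrestart restart daemons without prompting for this upgrade
sudo NEEDRESTART_MODE=a apt upgrade -y

# Or make it permanent in /etc/needrestart/needrestart.conf:
# $nrconf{restart} = 'a';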
I also grab the essentials immediately. I cannot function without git or curl.
sudo apt install git curl nginx -y
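One nice thing about installing nginx from apt on Ubuntu: the package starts the service and enables it on boot for you. I still do a quick sanity check before touching anything else:

# Confirm the service came up
systemctl status nginx --no-pager

# Or just poke it from the box itself
curl -I http://localhost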
Locking the Door (Firewall)
Here is where I have burned myself before. I used to rely entirely on the cloud provider’s security groups (the web UI firewall). But having a local firewall on the machine is a good fallback.
Warning: Do not enable the firewall before allowing SSH. I did this back in 2023 on a client project. I enabled UFW, hit enter, and the terminal froze. I had locked myself out. The only fix was detaching the storage volume and mounting it on another VM to fix the config. It took four hours. I still have nightmares about it.
So, run these in this specific order:
# 1. Allow SSH first!
sudo ufw allow ssh
# 2. Allow web traffic
sudo ufw allow 'Nginx Full'
# 3. NOW enable it
sudo ufw enable
When you run that enable command, it asks: Command may disrupt existing ssh connections. Proceed with operation (y|n)?. My heart still skips a beat every time I type y.
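Once it's enabled, I check what actually made it into the ruleset. The numbered view also makes it painless to delete a rule later if I fat-fingered one:

# Verify the rules took
sudo ufw status numbered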
Permissions: The Real Enemy
Linux permissions are the number one reason my deployments fail. By default, Nginx serves from /var/www/html, but that folder is owned by root. If I try to copy my site there with my regular user account, all I get is a Permission denied.
A lot of bad tutorials tell you to run chmod 777. Do not do this. It makes the folder writable by everyone, including any compromised process on the system. It’s lazy and dangerous.
Instead, I take ownership of the folder. It’s cleaner.
# Change owner to the current user (me)
sudo chown -R $USER:$USER /var/www/html
# Now I can copy my files without sudo
cp -r ./my-static-site/* /var/www/html/
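For completeness: the files got onto the box over SSH in the first place. Something like this from my laptop does it, assuming the key from earlier and a ./my-static-site folder sitting in my local working directory:

# From my local machine: push the site up to my home directory on the VM
rsync -avz -e "ssh -i ~/.ssh/my_project_key.pem" ./my-static-site azureuser@20.x.x.x:~/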
I tested this immediately by hitting the IP address in my browser. I got the default “Welcome to nginx!” page initially because I forgot to remove the default index.nginx-debian.html file. A quick rm fixed that, and my site loaded.
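For the record, the “quick rm” plus a terminal-side check looked roughly like this:

# Remove the placeholder page nginx ships with (no sudo needed after the chown)
rm /var/www/html/index.nginx-debian.html

# Peek at what's actually being served now
curl -s http://20.x.x.x | head -n 5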
Taming Nginx
I don’t use Apache anymore. I haven’t for years. Nginx just handles static files better, and the config syntax doesn’t make my eyes bleed (mostly).
The default config is usually fine for a test, but for a real deployment, you need to edit the sites-available file. I use nano. Fight me. I know vim is powerful, but I don’t need power; I need to change three lines of text without looking up a cheat sheet for keyboard shortcuts.
sudo nano /etc/nginx/sites-available/default
I usually strip it down to the basics. Here is the config I used for this deployment:
server {
    listen 80;
    server_name my-project.com;
    root /var/www/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
After saving, there is one command you must memorize. It checks if you made a typo before you restart the server and crash everything.
sudo nginx -t
If you see syntax is ok, you’re golden. If not, it usually tells you exactly which line you messed up (missing semicolon, line 14, every single time). Then, restart:
sudo systemctl restart nginx
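These days I usually chain the check and the restart so a broken config never takes the site down, and for pure config edits a reload is gentler than a restart because it doesn't drop live connections:

# Only reload if the config check passes
sudo nginx -t && sudo systemctl reload nginx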
When It All Goes Wrong
It didn’t work perfectly the first time. It rarely does. I got a 403 Forbidden error on one of my subdirectories.
When this happens, don’t guess. Look at the logs. The logs never lie, even if they are cryptic. I keep a second terminal window open just for this.
# Watch the error log in real-time
tail -f /var/log/nginx/error.log
The error was clear: directory index of "/var/www/html/assets/" is forbidden. I realized my build script hadn’t generated an index.html for that specific folder, and Nginx was configured to block directory listings (which is good security, actually). I fixed the build script, re-uploaded, and the error vanished.
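Once the new build was up, I confirmed the fix from the terminal instead of trusting my browser's cache:

# Should return 200 now instead of 403
curl -I http://20.x.x.x/assets/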
The Reality Check
You can spend days learning every flag for tar or memorizing sed patterns. But for 95% of my web deployments, this is the toolkit. It’s not flashy. It doesn’t use AI wrappers. It’s just moving files, setting permissions, and telling a daemon to restart.
And you know what? Sometimes the old ways are the best ways. Just remember to allow SSH in your firewall rules. Please.