It’s been 2 years since I migrated this site from a native install on a VPS to another VPS running Docker. I covered my migration in a number of posts, the first of which is here:
The surprising thing (maybe? maybe not?) is that the site has been up and running for the past 2 years with no issues. I think I rebooted the VPS a couple of times for reasons I can’t remember, but other than that it’s been up reliably the whole time.
It’s also been 2 years since I last renewed my SSL certificate, so it’s time for a couple of updates. More to come later.
Following on from Part 1 and subsequent posts, I now have the app deployed locally on WildFly 17, up and running, and also redeployed to a small 1 CPU, 1 GB VPS: http://www.spotviz.info. At this point I’m starting to think about how I’m going to redesign the system to take advantage of the cloud.
Here are my redesign and deployment goals:
monthly runtime costs should be low since this is a hobby project; less than $5 a month is my goal
take advantage of AWS services as much as possible, but only where using those services still meets my monthly cost goal
favor AWS free tier options where they make sense and help keep costs down
Here’s a refresher on my diagram showing how the project was previously structured and deployed:
As of September 2019, the original app is redeployed as a single monolithic .war to WildFly 17, running on a single VPS. MongoDB is also running on the same VPS. The web app is up at: http://www.spotviz.info
There are many options for how I could redesign and rebuild parts of this to take advantage of the cloud. Here are the various parts that could be redesigned and/or split into separate deployments:
WSJT-X log file parser and uploader client app (the only part that probably won’t change, other than being updated to support the latest WSJT-X log file format)
Front end webapp: AngularJS static website assets
JAX-WS endpoint for uploading spots for processing
MDB (message-driven bean) for processing the upload queue
HamQTH API web service client for looking up callsign info
MongoDB for storing parsed spots, callsigns, locations
REST API used by the AngularJS frontend app for querying spot data
Here are a number of options that I’m going to investigate:
Option 1: redeploy the whole .war unchanged, as previously deployed to OpenShift back in 2015, to a VM somewhere in the cloud. The cheapest option would be a VPS. AWS Lightsail VPS options are still not as cheap as VPS deals you can get elsewhere (check LowEndBox for deals), and AWS EC2 instances running 24×7 are more expensive (still cheap, but not as cheap as VPS deals).
Update September 2019: COMPLETE: the original app is now deployed and running
Option 2: use AWS services: if I split the app into individual parts, I can incrementally move one or more of them onto AWS services.
Before changing the DNS settings on GoDaddy, set up Route 53 to manage DNS for your domain, because you’ll need the AWS DNS server names when updating the GoDaddy DNS config.
In the AWS Console, go to Route 53, then Hosted Zones, and press the ‘Create Hosted Zone’ button:
Enter your domain name, select ‘Public Hosted Zone’ and press ‘Create’:
At this point Route 53 will have assigned a list of DNS nameservers for your domain – remember this list for when you update the GoDaddy config.
Next, press ‘Create Record Set’, select Type A record and enter the name of your subdomain, e.g. www, and enter the IP address of your server that this name should resolve to:
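If you prefer the command line, the same two steps can be scripted with the AWS CLI. A rough sketch, where the domain, hosted zone ID, and IP address are all placeholder values:

```bash
# Create the public hosted zone (the response includes the assigned
# nameservers under DelegationSet.NameServers -- the list you'll need
# when updating GoDaddy)
aws route53 create-hosted-zone \
  --name example.com \
  --caller-reference "$(date +%s)"

# Add an A record pointing www at the server's IP
# (replace the zone ID and IP with your own values)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'
```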
Now let’s update the GoDaddy config. By default, here are the DNS entries managed by GoDaddy when you register a domain with them:
First step: switch away from the GoDaddy nameservers. Do this from the ‘Change’ button:
Change the type to Custom:
Then enter the DNS server names from Route 53 and press ‘Save’. You should now be done. Once the Route 53 setup has propagated, hit your domain name in a browser and hopefully you’re up and running.
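One way to check propagation before trying the browser is with dig; the domain and nameserver below are placeholders:

```bash
# Confirm the domain now delegates to the Route 53 nameservers
dig NS example.com +short

# Ask one of the Route 53 nameservers directly for the A record
dig @ns-123.awsdns-45.com www.example.com A +short
```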
The website running this blog has been running in Docker containers on a small-ish 4 GB VPS for the past 9 months, pretty much issue free. You can follow my journey to migrate this site to Docker in posts here and here.
Since I’ve been spending time recently getting up to speed with Kubernetes, the next logical step would be to deploy to a Kubernetes cluster, which would give me an opportunity to find out what it takes to run Kubernetes myself. I’ve looked at the managed offerings on Google and AWS, but the cost for a few small personal projects is a little more than I want to spend.
I have an 8 GB VPS ready to go, so far installed with Docker and Kubernetes running as a single-node cluster, and I’m starting to plan my strategy for migrating to this new server. The first thing I’ve been thinking about is whether I should take my existing Docker images and deploy them to Kubernetes as is. Where I’ve got stuck so far is that I don’t know enough about how to run NGINX with WordPress and MySQL in multiple pods, so I think I might install the WordPress chart using Helm and, for the time being, not worry about how to do this myself.
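As a starting point, installing the chart might look something like this; I’m assuming the Bitnami chart repo and Helm 3 syntax here, and the values are illustrative rather than tested:

```bash
# Add the Bitnami chart repo and install WordPress
# (with Helm 2 this would be `helm install --name wordpress ...`)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Keep the service internal so an Ingress can front it
# (values are illustrative -- check the chart docs for current options)
helm install wordpress bitnami/wordpress \
  --set service.type=ClusterIP \
  --set ingress.enabled=false
```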
The next thing I’ve been looking at is how to configure an Ingress to access different deployed services via different URLs. I’ve been looking at setting up Traefik as the Ingress Controller to do this, and will be sharing a post about that config shortly. What I’m interested in is being able to deploy a number of different projects, including my WordPress site, and have them accessed via different URLs, and it looks like Traefik will handle this fine.
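As a rough sketch of what that routing could look like (the hostnames and service names here are placeholders, and this uses the older extensions/v1beta1 Ingress API that Traefik 1.x understands; newer clusters would use networking.k8s.io/v1 instead):

```bash
# Route two hostnames to two different services through Traefik
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sites
  annotations:
    # Tell Traefik (rather than another controller) to handle this Ingress
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: blog.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: wordpress
          servicePort: 80
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: spotviz
          servicePort: 8080
EOF
```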
I plan on writing some further posts as I make this transition over the next couple of weeks.