For no particular reason, a Sparc workstation is on its way – part 2: it arrived!

It arrived a couple of days earlier than expected, ahead of the various other parts that I needed (keyboard, mouse, SCSI disk, etc.).

It’s a bit grubby and scratched up, but not that bad for an older machine:

It’s incredibly heavy though, wow, it must weigh at least 60 pounds.

Inside, it looks like there’s a Creator 3D graphics card and a SunPCI card (more on that later).

I have a Sun Type 5c keyboard and mouse, so plugging them in, attaching a monitor and powering on… nothing.

Uh oh.

I started to go down the path of running the diags over the serial connection, using my Atari ST as a terminal, but before I got too far with that I thought I should check the basics and make sure everything was well seated.

Turns out both cards were only half seated in their slots. One more try, but still nothing. Next, I pulled the CPU board out, gave it a blow, and reinserted it… power on, fans running, diags pass, and we have a banner:

Next up: waiting until Monday for my SCSI disk to arrive with its disk sled, and then we’ll start a Solaris install!

For no particular reason, a Sparc workstation is on its way

I was shopping for one of these on ebay:

https://en.wikipedia.org/wiki/SPARCstation_20

By Caroline Ford – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1504020

But then got caught up on the idea that an Ultra 5 with IDE disk support might be a better idea:

https://en.wikipedia.org/wiki/Ultra_5/10

By Liftarn – https://commons.wikimedia.org/w/index.php?curid=2094130

After a lively discussion in the Facebook Vintage Unix Machines group about the pros and cons of the older Sparcstations and the Ultra 1 and 2 vs the Ultra 5/10, I decided to shop for an Ultra 1 or 2. I made an offer on one but didn’t get it, and then went for an Ultra 60 instead, since it was cheaper than anything else I could find, although in an unknown working condition, other than ‘it powers on’. So when it turns up, it will be a learning experience to see whether it’s actually in working condition or not.

From photos in the ebay listing, I believe there’s a SunPCI card in there, so that will be interesting to play with, along with the Creator 3D graphics card.

On my shopping list of needed parts:

  • a Sun Type 5 keyboard and mouse (with Sun mini DIN connector)
  • a 13W3 video to VGA adapter
  • an SCA SCSI disk
  • possible future purchase, a SCSI2SD adapter

MongoDB on macOS Catalina 10.15: “exception in initAndListen: 29 Data directory /data/db not found”

By default, MongoDB stores its data files under /data/db when running on macOS. After upgrading to Catalina 10.15, when starting MongoDB I get this error:

STORAGE  [initandlisten] exception in initAndListen: 29 Data directory /data/db not found., terminating

According to this question and answer here, Catalina no longer allows apps to read or write to non-standard folders beneath the root /, so you need to move the data files elsewhere. After my upgrade, the files had been moved to a ‘Relocated Items/Security’ folder. Moving them into my user dir and then starting up with the suggested:

mongod --dbpath ~/data/db

fixes the issue.
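The recovery steps above can be sketched as a few shell commands. The ‘Relocated Items’ path below is an assumption based on where my upgrade put the files, so check the ‘Relocated Items’ link on your own Desktop for the exact location:

```shell
# Where Catalina moved the old files after the upgrade -- an assumption;
# check the "Relocated Items" link on your Desktop for the exact path.
OLD="/Users/Shared/Relocated Items/Security/data/db"

# New data directory under the home dir, outside the restricted root
NEW="$HOME/data/db"
mkdir -p "$NEW"

# Copy the old data files across, if the relocated folder exists
if [ -d "$OLD" ]; then
  cp -R "$OLD/" "$NEW/"
fi

echo "data directory ready: $NEW"
```

With the files in place, starting up with the mongod command above picks up the new location.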

Revisiting my spotviz.info webapp: visualizing WSJT-X FT8 spots over time – part 6: Redesigning to take advantage of the Cloud

Following on from Part 1 and subsequent posts, I now have the app deployed locally on WildFly 17, up and running, and also redeployed to a small 1 cpu 1 GB VPS: http://www.spotviz.info . At this point I’m starting to think about how I’m going to redesign the system to take advantage of the cloud.

Here are my re-design and deployment goals:

  • monthly runtime costs should be low since this is a hobby project; less than $5 a month is my goal
  • take advantage of AWS services as much as possible, but only where using those services still meets my monthly cost goal
  • if there are AWS free tier options that make sense to take advantage of, favor these services if they help keep costs down

Here’s a refresher on my diagram showing how the project was previously structured and deployed:

As of September 2019, the original app is now redeployed as a monolithic single .war again to WildFly 17, running on a single VPS. MongoDB is also running on the same VPS. The web app is up at: http://www.spotviz.info

There are many options for how I could redesign and rebuild parts of this to take advantage of the cloud. Here are the various parts that could be redesigned and/or split into separate deployments:

  • WSJT-X log file parser and uploader client app (the only part that probably won’t change, other than being updated to support the latest WSJT-X log file format)
  • Front end webapp: AngularJS static website assets
  • JAX-WS endpoint for uploading spots for processing
  • MDB for processing the upload queue
  • HamQTH api webservice client for looking up callsign info
  • MongoDB for storing parsed spots, callsigns, locations
  • Rest API used by AngularJS frontend app for querying spot data

Here are a number of options that I’m going to investigate:

Option 1: redeploy the whole .war unchanged, as previously deployed to OpenShift back in 2015, to a VM somewhere in the cloud. The cheapest option would be a VPS. AWS Lightsail VPS options are still not as cheap as VPS deals you can get elsewhere (check LowEndBox for deals), and AWS EC2 instances running 24×7 are more expensive (still cheap, but not as cheap as VPS deals)

Update September 2019: COMPLETE: original app is now deployed and up and running

Option 2: Using AWS services: If I split the app into individual parts I can incrementally take on one or more of these options:

  • Route 53 for DNS (September 2019: COMPLETE!)
  • Serve AngularJS static content from AWS S3 (next easiest change) (December 2019: COMPLETE!)
  • AWS API Gateway for log file upload endpoint and RestAPIs for data lookups
  • AWS Lambdas for handling uploads and RestAPIs
  • Rely on scaling on demand of Lambdas for handling upload and parsing requests, removing need for the queue
  • Refactor data store from MongoDB to DynamoDB
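For the S3 static content step, the deployment can be sketched with the AWS CLI. This is a hedged sketch: the bucket name and the local build directory are placeholders, not the real spotviz.info configuration:

```shell
# Placeholder bucket name and build dir -- not the real spotviz.info setup
BUCKET=example-spotviz-frontend

# Create the bucket and enable static website hosting on it
aws s3 mb "s3://$BUCKET"
aws s3 website "s3://$BUCKET" \
  --index-document index.html --error-document index.html

# Upload the built AngularJS assets to the bucket
aws s3 sync ./dist "s3://$BUCKET"
```

A Route 53 alias record pointing at the bucket’s website endpoint would then put the static site behind the domain.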

Option 3: Other variations:

  • Replace use of WildFly queue with AWS SQS
  • Replace queue approach with a streams processing approach, either AWS Kinesis or AWS MSK
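A first cut at the SQS swap in Option 3 could look like the following AWS CLI sketch. The queue name and message body are hypothetical placeholders, just to show the shape of the change:

```shell
# Placeholder queue name -- not the real spotviz queue
QUEUE=spotviz-upload-queue

# Create the SQS queue that would replace the WildFly JMS upload queue
QUEUE_URL=$(aws sqs create-queue --queue-name "$QUEUE" \
  --query QueueUrl --output text)

# The upload endpoint would enqueue each parsed log file as a message
# (message body here is a made-up example)
aws sqs send-message --queue-url "$QUEUE_URL" \
  --message-body '{"callsign": "EXAMPLE", "spots": []}'

# The consumer (a Lambda, or whatever replaces the MDB) polls for messages
aws sqs receive-message --queue-url "$QUEUE_URL" --wait-time-seconds 10
```

The appeal of this over the in-container JMS queue is that the queue survives WildFly restarts and decouples the producer from whichever consumer option I end up with.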

More updates coming later.