After a couple of days I hadn't managed to set up commercial WAR hosting at all, and yet had spent over $50 and considerable time on Google, email and chat.
Manually getting my new minimal WAR file into AWS was easy: click "Launch New Environment" in the AWS Elastic Beanstalk view online, choose a host name in the elasticbeanstalk.com domain, upload the WAR right from the web page (browse and select it from your local filesystem), and after a couple of other clicks you're away.
(I initially selected a 64-bit t1.micro environment, but 32-bit would do fine and in fact be a little more space-efficient.)
From there the site was running in under five minutes, with fairly clear status reporting in the AWS Management Console in my browser.
Using the fixed URL I could easily provide a DNS CNAME alias from my preferred domain name.
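For illustration, that alias is a one-line CNAME record in a BIND-style zone file; both hostnames below are placeholders, not the actual names used.

```
; Map a preferred domain name onto the fixed Elastic Beanstalk hostname.
; Both names here are illustrative placeholders.
app.example.com.   IN   CNAME   myapp.elasticbeanstalk.com.
```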
A slight irritation is that each tweak away from the default settings (eg, heap min and max sizes, min and max instances for auto-scaling, etc) requires intervention on the AWS page and then a few minutes' wait.
Killing my app off entirely and creating a new 32-bit environment took only a few minutes more, as did deploying a new version, again all via the AWS Web interface.
Accessing the AWS-hosted page yields something like:
Hello world again @ 1302901056368! (Client IP address X.X.X.X)
showing the un-proxied IPv4 address of my laptop, as desired.
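A hedged reconstruction of the kind of handler that produces that line is sketched below; the class and method names are illustrative, not the author's actual code.

```java
// Illustrative sketch: builds the response line shown above.
class Hello {
    // Format matches the sample output in the article.
    static String greeting(long millis, String clientIp) {
        return "Hello world again @ " + millis
             + "! (Client IP address " + clientIp + ")";
    }

    public static void main(String[] args) {
        // In the deployed servlet this would use request.getRemoteAddr()
        // inside doGet() to obtain the un-proxied client address.
        System.out.println(greeting(System.currentTimeMillis(), "203.0.113.7"));
    }
}
```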
The AWS management console seems robust and reliable, and handles the inevitable complexity quite well (it is capable of hosting multiple versions of multiple applications in multiple locations, etc, etc); pushing out a new version takes about the same effort as with my hand-rolled solution.
(AWS seems to be running the Oracle JVM from its acceptance of the -Xconcgc flag for concurrent garbage collection.)
What happens next
I will continue to develop a test deployment in AWS, notwithstanding the billing risk.
For this I will need to tweak my application to avoid treating CPU as an all-you-can-eat "free" resource, and to take a slightly different approach to caching (and especially pre-caching of content before the first likely user request) so that I can afford to lose everything when an instance goes away. I'll keep AWS load balancing turned off to avoid breaking current wired-in assumptions in my code that one DNS host name fronts one JVM instance.
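The cache-rebuilding idea can be sketched as a small in-memory cache that is pre-warmed at instance startup, so a replacement instance rebuilds its hot entries before the first user request rather than relying on state that died with the old instance. All names here (PrewarmedCache, loader, the hot-key list) are illustrative assumptions, not the author's actual code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a cache that tolerates losing everything when an instance
// goes away: on startup, prewarm() eagerly loads the likely-hot entries.
class PrewarmedCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // the expensive fetch/computation

    PrewarmedCache(Function<K, V> loader) {
        this.loader = loader;
    }

    // Called once at instance startup, before the first likely user request.
    void prewarm(Iterable<K> hotKeys) {
        for (K k : hotKeys) {
            cache.computeIfAbsent(k, loader);
        }
    }

    // Serves from cache, falling back to the loader on a miss.
    V get(K key) {
        return cache.computeIfAbsent(key, loader);
    }
}
```

Because every entry can be regenerated from the loader, a fresh instance starts cold, prewarms, and serves; nothing irreplaceable lives only in memory.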
But I'll also continue to look out for cost-capped alternatives to AWS, cheaper and easier to manage than my current dedicated servers, at least until AWS creates a money-limiting emergency stop mechanism. And before that point it doesn't make much sense to do the refactoring needed to take advantage of AWS facilities such as CloudFront that I already have working – albeit less elegant – solutions for. ®
1) Find "Launching New Environments" in the AWS Elastic Beanstalk documentation.
2) "Configuring Containers" explains how to change JVM parameters such as heap size and where to add my favourite -Xconcgc flag.