| | | |
|---|---|---|
| author | Owen Jacobson <owen@grimoire.ca> | 2024-08-30 20:44:50 -0400 |
| committer | Owen Jacobson <owen@grimoire.ca> | 2024-08-30 20:44:50 -0400 |
| commit | da82a5541d3f1e79c2c45a7e5edb6644c1448b09 | |
| tree | f371745c352de42a8a74e9576ca6ea5a8584f315 | |
| parent | e52d32e5d08684d6a3be8bcf20896aa8398c186a | |
Draft of this-site
| -rw-r--r-- | content/code/this-site.md | 32 |

1 file changed, 32 insertions, 0 deletions
diff --git a/content/code/this-site.md b/content/code/this-site.md
new file mode 100644
index 0000000..ab1b31a
--- /dev/null
+++ b/content/code/this-site.md
@@ -0,0 +1,32 @@

---
title: This Site
date: 2024-08-13T19:11:26-04:00
draft: true
---

It's useful to write down what you can about how any complex technical system works. There are many like it, but this one is mine.
<!--more-->

## Hosting

`grimoire.ca` - the web server you're probably reading this from - runs on a Debian virtual machine hosted by Amazon Web Services' Elastic Compute Cloud (EC2) product line.

Because AWS [warns](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-retirement.html) that EC2 instances may be terminated at Amazon's discretion, the instances are managed by an autoscaling group that attempts to keep exactly one instance running. Brief outages are tolerable because this is a personal, for-fun project with no uptime requirements or commitments.

For faster boot times, I use a custom Amazon Machine Image (AMI), which has all of the software used for this site pre-installed and pre-configured. An instance can be replaced at any time by shutting it down and waiting for the autoscaling group to boot a new one. I periodically rebuild this AMI to capture updates from the Debian project, as well as to install new software.

DNS for the `grimoire.ca` zone is also hosted by Amazon, via the Route53 product. Instances register themselves with Route53 when they boot, using my [`aws-autoscaling-dns` tool](https://git.grimoire.ca/aws-autoscaling-dns/) run via a user-data script. This tool is pre-installed on the AMI used to boot instances. The autoscaling group is provisioned with an instance profile and IAM policy that allows instances to update only and exactly the DNS entries associated with the site.
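
The post doesn't include the actual policy document, but the "only and exactly the DNS entries" scoping could be sketched roughly as below, assuming Route53's record-level IAM condition keys are used. The hosted zone ID and record name here are placeholders, not the site's real configuration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSiteRecordUpdatesOnly",
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/Z0000000PLACEHOLDER",
      "Condition": {
        "ForAllValues:StringEquals": {
          "route53:ChangeResourceRecordSetsNormalizedRecordNames": [
            "grimoire.ca"
          ]
        }
      }
    }
  ]
}
```

Attached to the instance profile, a policy along these lines would let a booting instance rewrite the site's own A/AAAA records while denying changes to anything else in the zone.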

## Persistent Data

Because the instances are ephemeral, and because some of the services this site provides are not, the running instances use an Elastic File System (EFS) NFS mount to hold long-lived data. This filesystem is mounted at `/srv` on boot, before network services are started.

Of the available approaches, I selected NFS because I wanted something that would cope reasonably well if multiple instances are running simultaneously. The main alternative - a dedicated Elastic Block Store volume mounted as a local filesystem on the instance - cannot be shared, and would cause instance boot problems in the event of a configuration error on my part.

## Websites

The instances running this site host several websites: at https://grimoire.ca/, at various subdomains, and at a handful of unrelated top-level domains.

All HTTP connections from the internet, to ports 80 or 443, terminate at Caddy. Caddy handles HTTPS termination, including obtaining certificates for each site (using its built-in ACME client). Caddy is configured to load `/srv/*/Caddyfile`, with the convention that subdirectories of `/srv` that contain websites are named after the site's primary DNS name (`/srv/grimoire.ca`, for this site).

Caddy's state is also stored on `/srv`, in `/srv/caddy`. This state primarily consists of the collection of TLS certificates and private keys.
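
The post describes the loading convention and storage location but not the top-level Caddyfile itself; a minimal sketch consistent with both might look like this (the `site/` document root in the usage note is an assumption, not the site's actual layout):

```
# Global options: keep Caddy's certificates and private keys
# on the shared EFS mount, so they survive instance replacement
{
	storage file_system {
		root /srv/caddy
	}
}

# Pull in one Caddyfile per hosted site,
# e.g. /srv/grimoire.ca/Caddyfile
import /srv/*/Caddyfile
```

Each per-site file then only needs that site's own directives - for example, `/srv/grimoire.ca/Caddyfile` might contain a `grimoire.ca { ... }` block serving files from a directory such as `/srv/grimoire.ca/site`.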