# Staging Deployment Guide (RPi 3)

This guide covers the staging environment only. For the production Raspberry Pi 4 deployment, see the Infrastructure Migration Guide.
## Hardware constraints

The staging host is a Raspberry Pi 3 (1 GB RAM). Its limited memory means that running a Vite build or a multi-stage Docker image compilation directly on the device will cause an Out-Of-Memory (OOM) crash.
The solution: build the Docker image remotely on a GitHub-hosted runner (x86_64, 7 GB RAM)
using QEMU cross-compilation for linux/arm64, push the result to GHCR, and then have the
RPi 3 only pull and run the pre-built image.
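The remote build described above can be sketched as a GitHub Actions job like the one below. This is an illustrative fragment, not the contents of the actual `deploy-staging.yaml`; it assumes the standard `docker/setup-qemu-action`, `docker/setup-buildx-action`, and `docker/build-push-action` actions, and omits steps such as the GHCR login.

```yaml
# Illustrative fragment -- the real workflow lives in .github/workflows/deploy-staging.yaml.
jobs:
  build:
    runs-on: ubuntu-latest              # GitHub-hosted x86_64 runner, not the RPi 3
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3    # emulate arm64 on x86_64
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          platforms: linux/arm64           # cross-compile for the RPi 3
          push: true
          tags: ghcr.io/rabestro/dicechess-lab:staging
```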
## Repository structure

The following files are specific to the staging environment:

| File | Purpose |
|---|---|
| `docker-compose.staging.yaml` | Compose file for the RPi 3; pulls the pre-built image and adds `mem_limit` guards |
| `backend-api/.env.staging` | Sample environment file with staging-specific values |
| `.github/workflows/deploy-staging.yaml` | GitHub Actions workflow that builds the arm64 image and deploys to the RPi 3 |
## One-time setup on the RPi 3

1. **Enable cgroup memory limits**

   By default, Raspberry Pi OS does not have cgroup memory support enabled, which means Docker's `mem_limit` guards will be ignored. To fix this:

   ```sh
   # For Raspberry Pi OS (Debian Bookworm):
   sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
   # For older OS versions:
   # sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt
   ```

   Reboot afterwards for the kernel flags to take effect.
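The `sed` expression above appends the cgroup flags to the end of the single line in `cmdline.txt`. You can sanity-check it against a scratch copy before touching the real boot file (the file contents here are just an example):

```shell
# Demonstrate the append on a scratch copy instead of the real /boot file.
tmp=$(mktemp)
echo 'console=serial0,115200 console=tty1 root=PARTUUID=deadbeef-02 rootwait' > "$tmp"
sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' "$tmp"
cat "$tmp"
```

The result must remain a single line: the kernel ignores anything after the first line of `cmdline.txt`.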
2. **Install Docker and Docker Compose**

   ```sh
   curl -fsSL https://get.docker.com | sh
   sudo usermod -aG docker $USER
   # Re-login or run: newgrp docker
   ```
3. **Register a GitHub Actions self-hosted runner**

   In the GitHub repository go to **Settings → Actions → Runners → New self-hosted runner**, choose Linux / ARM64, and follow the instructions. When prompted for labels, add:

   ```
   self-hosted,rpi3
   ```

   The `deploy-staging.yaml` workflow targets `runs-on: [self-hosted, rpi3]`.
4. **Log in to GHCR**

   Retrieve the PAT from Vaultwarden (`GitHub – read:packages PAT (dicechess RPi)`):

   ```sh
   echo "<PAT_FROM_VAULTWARDEN>" | docker login ghcr.io -u rabestro --password-stdin
   ```
5. **Install E2E dependencies (Playwright)**

   Since the CI runner cannot use `sudo` to install system libraries, install them once manually:

   ```sh
   sudo apt-get update && sudo apt-get install -y \
     libnss3 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 \
     libxkbcommon0 libxcomposite1 libxdamage1 libxrandr2 \
     libgbm1 libasound2 libpango-1.0-0 libpangocairo-1.0-0
   ```
6. **Create the deployment directory and staging `.env`**

   ```sh
   mkdir -p ~/apps/dicechess-staging/data
   cd ~/apps/dicechess-staging
   # Copy .env.staging from the repository and fill in real values:
   cp /path/to/repo/backend-api/.env.staging .env.staging
   nano .env.staging
   ```
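For orientation, the filled-in file might look like the fragment below. `DICECHESS_IMAGE_TAG` and `LOG_LEVEL` are the only keys mentioned elsewhere in this guide (in the rollback section and the differences table); every other key name here is a made-up placeholder, not necessarily what the app expects, so always start from the repository's `.env.staging` sample.

```sh
# Illustrative fragment only -- check backend-api/.env.staging for the real keys.
DICECHESS_IMAGE_TAG=staging   # assumed to be interpolated by docker-compose.staging.yaml
LOG_LEVEL=DEBUG               # staging runs at DEBUG level
# SECRET_KEY=<fill-in>        # hypothetical placeholder
```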
## Deploying a new version

Staging deploys occur in two ways:

- **Automatic:** every time a new release is created via the Ops: Release workflow, the system automatically builds and deploys the corresponding version to the RPi 3 for validation.
- **Manual:** you can still trigger a manual deploy for testing specific branches or SHAs.
To trigger a manual deploy:

1. Open the repository on GitHub and go to **Actions → CD: Staging Deploy**.
2. Click **Run workflow**.
3. In the `image_tag` field, enter the tag you want to deploy, for example `staging` (default) or a specific short SHA like `sha-abc1234`.
4. Click **Run workflow** to confirm.
Whether triggered automatically or manually, the workflow then:

1. Checks out the repository on a GitHub-hosted Ubuntu runner (not on the RPi 3).
2. Uses QEMU + Docker Buildx to cross-compile a `linux/arm64` image.
3. Pushes the image to GHCR as `ghcr.io/rabestro/dicechess-lab:<tag>`.
4. SSHes into the RPi 3 via the self-hosted runner and runs `docker compose pull && docker compose up -d`.
5. Runs a basic `/api/health` check to confirm the container is up.
6. Executes the E2E tests: the full Playwright suite (`chromium` and `mobile-chrome`) runs directly on the RPi 3 against the live application (`http://localhost:8000`).
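The `/api/health` check above can be sketched as a small retry loop. The helper below is not part of the repository, just an illustration of the idea; the endpoint and port come from the staging setup, and the probe command is a parameter so the loop logic is easy to exercise in isolation:

```shell
# Hypothetical helper: poll a health endpoint until it responds, or give up.
wait_for_health() {
  probe="${1:-curl -fsS http://localhost:8000/api/health}"  # default probe
  attempts="${2:-5}"                                        # max retries
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if $probe >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "unhealthy" >&2
  return 1
}
```

After `docker compose up -d`, calling `wait_for_health` gives the container a few seconds to come up before the E2E suite starts.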
## Refresh staging data from production backup

When you need fresh data on staging, use **Actions → Ops: Backup and Stage Refresh** with:

- `run_stage_refresh=true`
- `notify_telegram=true` (optional)
This workflow first creates a validated production backup on `rpi4`, then transfers and restores it on `rpi3` using `scripts/ops/restore_staging_backup.sh`.
## Memory limits

`docker-compose.staging.yaml` sets explicit limits to prevent runaway processes from exhausting the 1 GB of RAM:

| Service | mem_limit |
|---|---|
| `web` (FastAPI app) | 512 MB |
| `db-admin` (sqlite-web) | 128 MB |
| OS + runner agent (reserved) | ~256 MB |
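The limits in the table are expressed in the compose file with the `mem_limit` key. The fragment below is only a sketch of how that looks (service names from the table; all other settings omitted):

```yaml
# Illustrative fragment only -- see docker-compose.staging.yaml for the real file.
services:
  web:
    mem_limit: 512m    # FastAPI app
  db-admin:
    mem_limit: 128m    # sqlite-web
```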
If you observe the application being OOM-killed (exit code 137), check `docker stats` and consider reducing background workers in the FastAPI startup.
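Exit code 137 is 128 + 9, i.e. the process died from `SIGKILL`, which is the signal the kernel OOM killer delivers. You can reproduce the code without any actual OOM condition:

```shell
# 137 = 128 + SIGKILL(9): the exit status of any process killed with -9,
# including containers terminated by the kernel OOM killer.
bash -c 'kill -KILL $$' || status=$?
echo "exit status: $status"   # exit status: 137
```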
## Differences from production

| Aspect | Production (RPi 4) | Staging (RPi 3) |
|---|---|---|
| Compose file | `docker-compose.yaml` | `docker-compose.staging.yaml` |
| Image tag | `latest` | `staging` (or manual SHA) |
| Env file | `.env` | `.env.staging` |
| Deploy trigger | Automatic after Staging E2E success | Automatic after Ops: Release |
| E2E validation | Not run on RPi 4 | Mandatory on RPi 3 |
| Memory limits | None (4 GB RAM) | 512 MB / 128 MB |
| `LOG_LEVEL` | INFO | DEBUG |
| Runner label | `rpi4` | `rpi3` |
## Rollback

Because each deployment pulls an immutable image tag from GHCR, rolling back is a single `docker compose` command on the RPi 3:

```sh
cd ~/apps/dicechess-staging
DICECHESS_IMAGE_TAG=sha-<previous_sha> docker compose up -d
```

Find available tags at https://github.com/rabestro/dicechess-lab/pkgs/container/dicechess-lab.

If the issue is a bad Git tag or GitHub release metadata (not only a bad image tag), follow the Release Rollback Guide.