
Staging Deployment Guide (RPi 3)

**Note:** This guide covers the staging environment only. For the production Raspberry Pi 4 deployment, see the Infrastructure Migration Guide.

The staging host is a Raspberry Pi 3 (1 GB RAM). Its limited memory means that running a Vite build or a multi-stage Docker image compilation directly on the device will cause an Out-Of-Memory (OOM) crash.

The solution: build the Docker image remotely on a GitHub-hosted runner (x86_64, 7 GB RAM) using QEMU cross-compilation for linux/arm64, push the result to GHCR, and then have the RPi 3 only pull and run the pre-built image.
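The remote build step can be sketched roughly as follows. This is a hypothetical outline using typical Buildx commands, not the authoritative version (that lives in `.github/workflows/deploy-staging.yaml`); the `run` wrapper and `DRY_RUN` switch are illustrative conveniences:

```shell
#!/bin/sh
# Sketch of the remote build (runs on the GitHub-hosted x86_64 runner, not
# on the RPi 3). Assumed commands; see deploy-staging.yaml for the real ones.
set -eu

TAG="${TAG:-staging}"                        # or a short SHA tag like sha-abc1234
IMAGE="ghcr.io/rabestro/dicechess-lab:${TAG}"

# DRY_RUN=1 (the default here) only prints each command; unset it to execute.
run() { echo "+ $*"; [ "${DRY_RUN:-1}" = 1 ] || "$@"; }

# Register QEMU binfmt handlers so the x86_64 host can emulate arm64:
run docker run --privileged --rm tonistiigi/binfmt --install arm64
# Create and select a Buildx builder:
run docker buildx create --use
# Cross-compile for linux/arm64 and push the result straight to GHCR:
run docker buildx build --platform linux/arm64 -t "$IMAGE" --push .
```

The RPi 3 never compiles anything; it only runs `docker compose pull` against the pushed tag.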

The following files are specific to the staging environment:

| File | Purpose |
| --- | --- |
| `docker-compose.staging.yaml` | Compose file for the RPi 3; pulls the pre-built image and adds `mem_limit` guards |
| `backend-api/.env.staging` | Sample environment file with staging-specific values |
| `.github/workflows/deploy-staging.yaml` | GitHub Actions workflow — builds the arm64 image and deploys to the RPi 3 |
1. **Enable cgroup memory limits**

   By default, Raspberry Pi OS ships without cgroup memory support enabled, so Docker's `mem_limit` guards are silently ignored. Append the flags to the kernel command line, then reboot for the change to take effect:

   ```shell
   # For Raspberry Pi OS (Debian Bookworm):
   sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
   # For older OS versions, edit /boot/cmdline.txt instead.
   sudo reboot
   ```
2. **Install Docker and Docker Compose**

   ```shell
   curl -fsSL https://get.docker.com | sh
   sudo usermod -aG docker $USER
   # Log out and back in, or run: newgrp docker
   ```
3. **Register a GitHub Actions self-hosted runner**

   In the GitHub repository, go to **Settings → Actions → Runners → New self-hosted runner**, choose **Linux / ARM64**, and follow the instructions. When prompted for labels, add:

   ```
   self-hosted,rpi3
   ```

   The `deploy-staging.yaml` workflow targets `runs-on: [self-hosted, rpi3]`.

4. **Log in to GHCR**

   Retrieve the PAT from Vaultwarden (*GitHub – read:packages PAT (dicechess RPi)*):

   ```shell
   echo "<PAT_FROM_VAULTWARDEN>" | docker login ghcr.io -u rabestro --password-stdin
   ```
5. **Install E2E dependencies (Playwright)**

   The CI runner cannot use `sudo` to install system libraries, so install them once manually:

   ```shell
   sudo apt-get update && sudo apt-get install -y \
     libnss3 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 \
     libxkbcommon0 libxcomposite1 libxdamage1 libxrandr2 \
     libgbm1 libasound2 libpango-1.0-0 libpangocairo-1.0-0
   ```
6. **Create the deployment directory and staging `.env`**

   ```shell
   mkdir -p ~/apps/dicechess-staging/data
   cd ~/apps/dicechess-staging
   # Copy .env.staging from the repository and fill in real values:
   cp /path/to/repo/backend-api/.env.staging .env.staging
   nano .env.staging
   ```
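For orientation, here is a minimal `.env.staging` sketch. Only `LOG_LEVEL` and `DICECHESS_IMAGE_TAG` appear elsewhere in this guide; the real file's full contents are project-specific, so treat this as illustrative:

```shell
# Illustrative fragment only -- copy the real template from
# backend-api/.env.staging and fill in actual values.
LOG_LEVEL=DEBUG               # staging runs at DEBUG (see the comparison table)
DICECHESS_IMAGE_TAG=staging   # image tag pulled by docker-compose.staging.yaml
```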

Staging deploys occur in two ways:

  1. Automatic: Every time a new release is created via the Ops: Release workflow, the system automatically builds and deploys the corresponding version to the RPi 3 for validation.
  2. Manual: You can still trigger a manual push for testing specific branches or SHAs.
To trigger a manual deploy:
  1. Open the repository on GitHub and go to Actions → CD: Staging Deploy.

  2. Click Run workflow.

  3. In the image_tag field enter the tag you want to deploy — for example staging (default) or a specific short SHA like sha-abc1234.

  4. Click Run workflow to confirm.

Whether triggered automatically or manually, the CD: Staging Deploy workflow then:

  1. Checks out the repository on a GitHub-hosted Ubuntu runner (not on the RPi 3).
  2. Uses QEMU + Docker Buildx to cross-compile a `linux/arm64` image.
  3. Pushes the image to GHCR as `ghcr.io/rabestro/dicechess-lab:<tag>`.
  4. SSHes into the RPi 3 via the self-hosted runner and runs `docker compose pull && docker compose up -d`.
  5. Runs a basic `/api/health` check to confirm the container is up.
  6. Executes the full Playwright E2E suite (`chromium` and `mobile-chrome`) directly on the RPi 3 against the live application (`http://localhost:8000`).
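The health check in step 5 can be sketched as a small retry loop. The function name, retry count, and the `FETCH_CMD` override (which lets the fetch command be swapped out for testing without a live server) are assumptions; the endpoint comes from the workflow description above:

```shell
#!/bin/sh
# Hypothetical health-check loop for the post-deploy step. By default it
# fetches with curl; FETCH_CMD can replace the fetch command entirely.
check_health() {
  url="$1"; retries="${2:-5}"
  fetch="${FETCH_CMD:-curl -fsS -o /dev/null}"
  i=1
  while [ "$i" -le "$retries" ]; do
    if $fetch "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "unhealthy"
  return 1
}

# Usage on the RPi 3 after `docker compose up -d`:
#   check_health http://localhost:8000/api/health
```

The E2E step then runs something along the lines of `npx playwright test --project=chromium --project=mobile-chrome`; the exact invocation is defined in the workflow.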

Refresh staging data from production backup


When you need fresh data on staging, use Actions → Ops: Backup and Stage Refresh with:

  • run_stage_refresh=true
  • notify_telegram=true (optional)

This workflow first creates a validated production backup on the RPi 4, then transfers and restores it on the RPi 3 using `scripts/ops/restore_staging_backup.sh`.

`docker-compose.staging.yaml` sets explicit memory limits to prevent runaway processes from exhausting the RPi 3's 1 GB of RAM:

| Service | mem_limit |
| --- | --- |
| `web` (FastAPI app) | 512 MB |
| `db-admin` (sqlite-web) | 128 MB |
| OS + runner agent (reserved) | ~256 MB |
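A sketch of how those guards might look in `docker-compose.staging.yaml`. The service names and limits follow the table above; everything else (image reference, variable name) is illustrative:

```yaml
# Illustrative fragment -- see the real docker-compose.staging.yaml.
services:
  web:
    image: ghcr.io/rabestro/dicechess-lab:${DICECHESS_IMAGE_TAG:-staging}
    mem_limit: 512m        # enforced only once cgroup memory is enabled (step 1)
  db-admin:
    mem_limit: 128m
```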

If you observe the application being OOM-killed (exit code 137), check docker stats and consider reducing background workers in the FastAPI startup.
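Exit code 137 is 128 + 9, i.e. the container was killed by SIGKILL, which is what the kernel OOM killer sends. A tiny hypothetical triage helper (the function and the container name in the usage comment are illustrative):

```shell
#!/bin/sh
# Hypothetical helper: translate a container exit code into a hint.
explain_exit() {
  case "$1" in
    0)   echo "clean exit" ;;
    137) echo "SIGKILL (128+9) -- likely OOM-killed under mem_limit" ;;
    139) echo "SIGSEGV (128+11) -- crashed" ;;
    *)   echo "exit code $1" ;;
  esac
}

# Usage (container name is an assumption):
#   explain_exit "$(docker inspect -f '{{.State.ExitCode}}' dicechess-web)"
```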

| Aspect | Production (RPi 4) | Staging (RPi 3) |
| --- | --- | --- |
| Compose file | `docker-compose.yaml` | `docker-compose.staging.yaml` |
| Image tag | `latest` | `staging` (or manual SHA) |
| Env file | `.env` | `.env.staging` |
| Deploy trigger | Automatic after Staging E2E success | Automatic after Ops: Release |
| E2E validation | Not run on RPi 4 | Mandatory on RPi 3 |
| Memory limits | None (4 GB RAM) | 512 MB / 128 MB |
| `LOG_LEVEL` | INFO | DEBUG |
| Runner label | `rpi4` | `rpi3` |

Because each deployment pulls an immutable image tag from GHCR, rolling back is a single docker compose command on the RPi 3:

If the issue is a bad Git tag or GitHub release metadata (not only a bad image tag), follow the Release Rollback Guide.

```shell
cd ~/apps/dicechess-staging
DICECHESS_IMAGE_TAG=sha-<previous_sha> docker compose up -d
```

Find available tags at https://github.com/rabestro/dicechess-lab/pkgs/container/dicechess-lab.