
Move Synchronization Lifecycle

This page explains the two related maintenance flows that often get confused:

  1. Game metadata ingestion — the admin backend fetches a paginated list of games and stores the lightweight match metadata.
  2. Move-history enrichment — the backend backfills the detailed move history for games that exist in the local database but still do not have moves_json cached.

Both operations run as asynchronous background tasks managed by an in-memory task runner. This keeps HTTP request handlers non-blocking, spares constrained hardware (e.g. a Raspberry Pi 3 staging host) from SQLite write contention, and supports real-time progress tracking with graceful cancellation.

To prevent long-running ETL processes from causing HTTP 504 gateway timeouts, the Dice Chess Trainer uses a lightweight in-memory JobManager singleton on the backend.

```mermaid
flowchart TD
    A[Admin Dashboard - Tasks Tab] -->|1. Start Job| B[POST /api/admin/jobs]
    B -->|2. Register Job & Trigger| C[JobManager Singleton]
    C -->|3. Immediate Return| A
    C -->|4. Launch Async| D[FastAPI BackgroundTasks]
    D -->|5. Run Loops| E{Job Type?}
    E -->|SYNC_METADATA| F[Sync Games Metadata]
    E -->|SYNC_MOVES| G[Sync Missing Moves]

    A -->|6. Poll every 1.5s| H[GET /api/admin/jobs]
    H -->|7. Return State & Progress| A
```

The JobManager keeps job state entirely in memory (protecting SD cards on small devices and eliminating database write contention). To avoid unbounded memory growth, it retains at most 15 finished jobs: the oldest completed, failed, or cancelled jobs are pruned automatically when new jobs are created.
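The bounded registry can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the job fields and method names (`create_job`, `_prune_finished`) are assumptions based on the behavior described above.

```python
import uuid

MAX_FINISHED_JOBS = 15
FINISHED_STATES = {"COMPLETED", "FAILED", "CANCELLED"}

class JobManager:
    """Bounded in-memory job registry (hypothetical sketch)."""

    def __init__(self):
        self._jobs = {}  # job_id -> job dict; dicts preserve insertion order

    def create_job(self, job_type, parameters=None):
        self._prune_finished()
        job_id = f"job_{uuid.uuid4().hex[:8]}"
        job = {
            "id": job_id,
            "job_type": job_type,
            "parameters": parameters or {},
            "status": "QUEUED",
            "progress": 0,
        }
        self._jobs[job_id] = job
        return job

    def _prune_finished(self):
        # Drop the oldest finished jobs beyond the cap; QUEUED/RUNNING
        # jobs are never pruned.
        finished = [jid for jid, j in self._jobs.items()
                    if j["status"] in FINISHED_STATES]
        for jid in finished[:max(len(finished) - MAX_FINISHED_JOBS, 0)]:
            del self._jobs[jid]
```

Because pruning happens at creation time, memory stays bounded even if administrators trigger many syncs in a row.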


All synchronization operations are controlled via the following endpoints:

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | `/api/admin/jobs` | Schedules and triggers a new background task. |
| GET | `/api/admin/jobs` | Lists all jobs currently registered in memory. |
| GET | `/api/admin/jobs/{id}` | Retrieves the detailed state (progress, result counters, error) of a specific job. |
| POST | `/api/admin/jobs/{id}/cancel` | Gracefully transitions an active task to the CANCELLED status. |

The metadata sync path is the first stage of the data pipeline. It asks the external Dice Chess API for a page of games, then stores the game summary data locally.

When an administrator triggers a metadata sync from the UI, the frontend issues:

```http
POST /api/admin/jobs
Content-Type: application/json

{
  "job_type": "SYNC_METADATA",
  "parameters": {
    "limit": 50,
    "skip": 0,
    "player_id": "custom-player-id",
    "start_date_ms": 1710000000000
  }
}
```

The backend registers a new job, triggers the background worker, and returns an immediate 200 OK response with the job’s metadata (e.g. id: "job_bc7ec53b", status: "QUEUED").
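The "register, launch, return immediately" pattern can be illustrated with a plain thread standing in for FastAPI's `BackgroundTasks`; the `run_job` helper and job-dict shape here are assumptions for illustration, not the project's actual code.

```python
import threading

def run_job(job, worker):
    """Launch worker(job) in the background and return without waiting."""
    def _run():
        job["status"] = "RUNNING"
        try:
            worker(job)
            job["status"] = "COMPLETED"
        except Exception as exc:
            job["status"] = "FAILED"
            job["error"] = str(exc)

    thread = threading.Thread(target=_run, daemon=True)
    thread.start()
    return thread  # the HTTP handler returns the QUEUED job dict right away
```

The caller never joins the thread inside the request handler, which is what keeps the POST response immediate regardless of how long the sync takes.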

The background worker then queries the external Dice Chess endpoints in sequential chunks:

```http
POST /api/player/history
GET /api/user-profile?id=<PLAYER_ID>
```

As the worker progresses, the job’s progress percentage and result counters are updated in real time:

  • inserted — number of new games successfully ingested.
  • skipped_existing — number of games that already existed in the local database.
  • errors — list of failures (capped at 50 records).
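A sketch of how the ingestion loop might maintain those counters, assuming hypothetical `fetch_page()` and `upsert_game()` helpers (the real function names and signatures may differ):

```python
MAX_ERRORS = 50  # the errors list is capped at 50 records

def sync_metadata(job, fetch_page, upsert_game, limit=50, skip=0):
    counters = {"inserted": 0, "skipped_existing": 0, "errors": []}
    for game in fetch_page(limit=limit, skip=skip):
        try:
            # upsert_game returns True if the game was new, False if it
            # already existed locally (assumed contract for this sketch).
            if upsert_game(game):
                counters["inserted"] += 1
            else:
                counters["skipped_existing"] += 1
        except Exception as exc:
            if len(counters["errors"]) < MAX_ERRORS:
                counters["errors"].append(
                    {"game_id": game.get("id"), "error": str(exc)}
                )
    job["result"] = counters
    return counters
```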

Once a game exists in the local database, the app can enrich it with its full move history. This is the data used by the trainer playback and by any feature that needs turn-by-turn board state reconstruction.

From the Maintenance tab, the frontend schedules this operation:

```http
POST /api/admin/jobs
Content-Type: application/json

{
  "job_type": "SYNC_MOVES",
  "parameters": {
    "limit": 25,
    "delay_ms": 250
  }
}
```

The background task does not blindly scan every game. It selects a batch of candidates using this database rule:

  1. Find games where moves_json IS NULL.
  2. Order them by start_time DESC NULLS LAST, then by game_id ASC.
  3. Take up to limit games.

This guarantees that the sync process prefers newer games first, but still has a stable order for ties.
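The selection rule can be expressed as a single query. The sketch below assumes a `games` table with `game_id`, `start_time`, and `moves_json` columns; note that SQLite only accepts the `NULLS LAST` keyword from version 3.30 on, so the ordering is emulated with a boolean sort key for portability.

```python
import sqlite3

def select_candidates(conn, limit):
    # (start_time IS NULL) sorts non-null rows (0) before null rows (1),
    # emulating "ORDER BY start_time DESC NULLS LAST".
    return conn.execute(
        """
        SELECT game_id FROM games
        WHERE moves_json IS NULL
        ORDER BY (start_time IS NULL) ASC, start_time DESC, game_id ASC
        LIMIT ?
        """,
        (limit,),
    ).fetchall()
```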

Because the move sync loops can take time (incorporating delay_ms to respect external rate limits), the loop checks the job’s status before starting each batch chunk. If an administrator clicks “Cancel” in the UI, the job status is set to CANCELLED, and the loop exits cleanly.
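The cooperative-cancellation loop might look like this; `fetch_moves` and `save_moves` are hypothetical stand-ins for the calls to the external API and the local database.

```python
import time

def sync_missing_moves(job, candidates, fetch_moves, save_moves, delay_ms=250):
    done = 0
    for game_id in candidates:
        # Check for cancellation before each chunk: the UI's "Cancel"
        # button flips job["status"] to CANCELLED, and the loop exits.
        if job["status"] == "CANCELLED":
            return done
        payload = fetch_moves(game_id)
        save_moves(game_id, payload)
        done += 1
        job["progress"] = int(done / len(candidates) * 100)
        time.sleep(delay_ms / 1000)  # respect external rate limits
    job["status"] = "COMPLETED"
    return done
```

Because the status check happens between chunks rather than mid-request, cancellation never leaves a half-written move record behind.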

```mermaid
sequenceDiagram
    participant Admin as Admin UI
    participant API as FastAPI admin router
    participant JM as JobManager Singleton
    participant DB as SQLite database
    participant DSC as Dice Chess API

    Admin->>API: POST /api/admin/jobs (SYNC_MOVES)
    API->>JM: create_job() & run_job()
    JM-->>API: returns Job(status="QUEUED")
    API-->>Admin: returns Job details immediately

    Note over JM, DB: Background Task Begins (RUNNING)
    JM->>DB: SELECT game_id WHERE moves_json IS NULL LIMIT batch_size
    DB-->>JM: candidate game IDs

    loop for each candidate game batch
        alt Status is CANCELLED
            Note over JM: Exit loop gracefully
        else Status is RUNNING
            JM->>DSC: GET /game-move-history?gameId=...
            DSC-->>JM: move history payload
            JM->>DB: save_game_moves(gameId, payload)
            JM->>JM: update progress % and counters
        end
    end

    Note over JM: Transition to COMPLETED
```

The Frontend PWA uses Svelte 5 reactive runes ($state, $derived, $effect) to drive the monitoring dashboard:

  • Auto-Polling: to conserve battery and bandwidth, the frontend polls /api/admin/jobs every 1.5 seconds only while the user is viewing the Tasks tab or while an active job (QUEUED or RUNNING) is in memory. Once all jobs finish, polling stops automatically.
  • Pulsing Notification Dot: a pulsing red dot appears next to the “Tasks” tab whenever a background job is running, keeping the administrator informed even while navigating other administrative panels.