PPL: New Learnings Applied to SIRA [Sprint 2, Week 1]
Overview
Sprint 2 Week 1 (Mar 24 to 30) combined feature development with infrastructure hardening. The week produced two full-stack features (multi-device session management and email integration), a developer tooling improvement (Superset workspace isolation), and iterative CI quality gate improvements. Each required learning at least one technology from scratch.
1. Multi-Device Session Management
SIRA needed per-device session tracking: users should see which devices are logged in, and admins should be able to revoke sessions remotely. This isn’t a standard Supabase Auth feature; it required a custom session layer on top of the existing JWT auth.
What I Learned
- User-agent parsing with the user-agents Python library. Parse raw UA strings into structured {browser, os} pairs for display on the Devices page.
- JWT session binding: embed a session_id claim into the JWT at login time, then validate it on every authenticated request. If the session is revoked, the next API call returns 401 and the frontend redirects to login.
- Activity throttling: updating last_active_at on every request would hammer the database. Instead, only update if the last activity was more than 5 minutes ago:
ACTIVITY_THROTTLE_MINUTES = 5

async def _validate_and_track_session(db: Client, session_id: str) -> None:
    session = await get_session_by_session_id(db, session_id)
    if session is None:
        return
    if not session["is_active"]:
        raise HTTPException(status_code=401, detail="Session has been revoked")
    last_active = datetime.fromisoformat(session["last_active_at"])
    if datetime.now(UTC) - last_active > timedelta(minutes=ACTIVITY_THROTTLE_MINUTES):
        await update_last_active(db, session_id)
- Session limit enforcement: cap at 5 active sessions per user, evicting the oldest when a new login exceeds the limit. This prevents unlimited session accumulation without requiring manual cleanup.
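The eviction logic can be sketched roughly as follows (the function and field names are illustrative, not the actual SIRA code, and the real version runs against the sessions table rather than an in-memory list):

```python
from datetime import datetime

MAX_ACTIVE_SESSIONS = 5  # cap described above; assumed configurable in practice

def sessions_to_evict(active_sessions: list[dict]) -> list[str]:
    """Given a user's active sessions (each with a session_id and a
    created_at ISO timestamp), return the IDs of the oldest sessions that
    must be revoked so that one new login stays within the cap."""
    overflow = len(active_sessions) + 1 - MAX_ACTIVE_SESSIONS  # +1 for the new login
    if overflow <= 0:
        return []
    oldest_first = sorted(
        active_sessions,
        key=lambda s: datetime.fromisoformat(s["created_at"]),
    )
    return [s["session_id"] for s in oldest_first[:overflow]]
```

With five active sessions, a sixth login evicts exactly the single oldest one; with fewer, nothing is evicted.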
Implementation Scope
Full TDD across 4 layers: DB queries (8 tests), service (10 tests), router (7 tests), and auth dependency (8 tests), plus frontend tests for the Devices page and login session toast. The feature touched 24 files across both apps/api and apps/web.
The TDD commit history shows strict red-green discipline:
red(api): add tests for user_sessions DB queries → green(api): implement user_sessions DB queries
red(api): add tests for session service → green(api): implement session service
red(api): add tests for session-aware get_current_user → green(api): add session validation and activity tracking
red(api): add tests for session router → green(api): implement session REST endpoints
Each layer was tested before implementation, ensuring the interface contracts were defined by tests first.
Evidence:
2. Email Integration with Resend API and Jinja2
Built a complete email sending system: Resend as the transactional email provider, Jinja2 for server-side template rendering, and a settings page for managing templates.
What I Learned
- Resend API integration via
httpx. Resend’s API is simpler than traditional SMTP: one POST request with JSON body, no connection pooling or TLS handshake management. - Jinja2 templating in FastAPI: render HTML email templates with context variables (client name, invoice amount, due date). The
EmailTemplateServicehandles CRUD for templates and rendering with variable substitution. - Email HTML is a different world:
<table>layout, inline styles, no CSS grid. Modern CSS features like flexbox and grid aren’t supported by email clients (Outlook still uses Word’s rendering engine). I had to exclude email templates from SonarQube analysis because web code quality rules don’t apply to email HTML. - Template variable injection: Jinja2’s
Environment(autoescape=True)prevents XSS in rendered HTML, but SonarQube’s Bandit scanner flaggedMarkup()as a potential vulnerability (B704). The fix was a targeted# nosec B704suppression with a comment explaining why the input is safe (comes from sandboxed templates, not user input).
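A minimal sketch of the autoescape behavior (the template string and variable names here are made up for illustration; the real templates live in the EmailTemplateService):

```python
from jinja2 import Environment

# autoescape=True HTML-escapes every substituted variable, so a variable
# that happens to contain markup cannot inject executable script tags.
env = Environment(autoescape=True)

template = env.from_string(
    "<p>Dear {{ client_name }}, invoice {{ invoice_no }} is due.</p>"
)

html = template.render(
    client_name="<script>alert(1)</script>",  # hostile input for demonstration
    invoice_no="INV-7",
)
# The <script> payload is rendered as &lt;script&gt;..., i.e. inert text.
```

The rendered HTML string is what then gets POSTed to Resend via httpx.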
Evidence:
3. CodeMirror Editor Integration
The email template editor needed syntax highlighting, line numbers, and a formatting toolbar. I integrated CodeMirror 6 into the React frontend.
What I Learned
- CodeMirror 6 architecture: unlike v5, CodeMirror 6 is modular. You compose an editor from extensions (syntax highlighting, line numbers, keybindings). Each extension is a separate import.
- React lifecycle issues with CodeMirror: the editor div must exist in the DOM before CodeMirror mounts. When the component renders conditionally (behind a loading state), the editor initializes on an empty div and shows no content. Fix: defer CodeMirror initialization until the template data is loaded.
- Active line highlighting and formatting toolbar: added a sticky toolbar inside the editor border with bold/italic/link buttons that insert markup at the cursor position.
Evidence:
4. Toast Notification Audit
Audited all CRUD operations across the app and found inconsistent toast behavior: some mutations showed success/error toasts, others silently succeeded or failed. Created a standardized getErrorMessage utility that extracts readable error messages from API responses (handling different error shapes: string, object with detail, nested validation errors).
What I Learned
- Error shape normalization: FastAPI can return errors in multiple formats ({"detail": "string"}, {"detail": [{"msg": "...", "loc": [...]}]}, or plain strings). A single utility that handles all shapes prevents each component from reinventing error extraction.
- TanStack Query onError/onSuccess patterns: centralized toast logic in the mutation hooks (use-clients.ts, use-invoices.ts, etc.) rather than in individual components. This way, every consumer of the hook gets consistent feedback.
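The actual utility is TypeScript, but the shape-handling logic is simple enough to sketch (shown here in Python; the fallback message is a placeholder):

```python
def get_error_message(payload: object, fallback: str = "Something went wrong") -> str:
    """Normalize the error shapes FastAPI can return into one readable string."""
    if isinstance(payload, str):            # plain string body
        return payload
    if isinstance(payload, dict):
        detail = payload.get("detail")
        if isinstance(detail, str):         # {"detail": "string"}
            return detail
        if isinstance(detail, list):        # pydantic validation errors
            msgs = [e.get("msg", "") for e in detail if isinstance(e, dict)]
            if any(msgs):
                return "; ".join(m for m in msgs if m)
    return fallback                          # unknown shape: degrade gracefully
```

Centralizing this in one function means every mutation hook's onError handler can call it and show a toast, regardless of which endpoint produced the error.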
Evidence:
5. Superset Workspace Isolation
Superset is a macOS app for isolated dev workspaces using git worktrees. Each developer gets their own workspace with persistent sessions. I configured it for the SIRA project, which required solving a non-trivial isolation problem: each workspace needs its own Redis, Supabase, and Celery instances on separate ports.
What I Learned
- Port block allocation: each workspace gets a block of 10 ports (web+0, api+1, redis+2, supabase-api+3, supabase-db+4, studio+5, auth+6, inbucket+7). Port blocks are dynamically allocated based on available ranges.
- Docker Compose profile isolation: each workspace’s infra services run in their own Docker Compose profile, preventing container name collisions across workspaces.
- Worktree-aware database commands: the make db-* commands needed to auto-detect which workspace they’re running in and target the correct Supabase container. Fixed by checking SUPERSET_WORKSPACE_NAME and deriving the container name.
The challenge was making all of this transparent to the developer. After setup, make dev just works in any workspace without manual port configuration or container name management.
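The port-block scheme described above can be sketched like this (the base port would come from Superset's dynamic allocation; the helper itself is illustrative):

```python
# Service offsets within a workspace's 10-port block, as listed above.
SERVICE_OFFSETS = {
    "web": 0,
    "api": 1,
    "redis": 2,
    "supabase-api": 3,
    "supabase-db": 4,
    "studio": 5,
    "auth": 6,
    "inbucket": 7,
}

def workspace_ports(base_port: int) -> dict[str, int]:
    """Map every service to its concrete port within the workspace's block."""
    return {name: base_port + offset for name, offset in SERVICE_OFFSETS.items()}
```

A workspace allocated the 3000 block would then get web on 3000, redis on 3002, the Supabase DB on 3004, and so on, with two offsets left spare for future services.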
Evidence:
6. Schema-Test CI Hardening
Iterated on the Schemathesis CI job through seven rounds of fixes (detailed in the Part A QA blog). Key new concepts learned:
- JWT generation in CI: generate valid test tokens using PyJWT so Schemathesis tests actual endpoint logic, not just auth rejection
- Rate limiter bypass for test environments: ENVIRONMENT=test disables the rate limiter so the fuzzer can fire rapid requests without getting 429s
- UUID type validation in FastAPI: changing path params from str to UUID gives you free input validation (422 instead of 500 for invalid UUIDs)
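The real CI job uses PyJWT, but an HS256 jwt.encode boils down to a few stdlib operations, sketched here with a placeholder secret and claims:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_hs256_token(secret: str, claims: dict) -> str:
    """Sign claims as a JWT, roughly what jwt.encode(claims, secret,
    algorithm="HS256") produces (byte-for-byte details like JSON
    separators may differ from PyJWT)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(signature)}"

# Placeholder values; CI would read the secret from a masked variable.
token = mint_hs256_token("test-secret", {"sub": "ci-user", "exp": int(time.time()) + 3600})
```

The point of minting a real token in CI is that Schemathesis requests reach the endpoint logic itself instead of being uniformly bounced with 401s.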
Evidence:
- MR !107
- Commits: 32ab3ce, 155d856, 450ce9f
7. CI Reporting System (450+ Lines of Bash)
Built ci-report.sh from scratch: a shared bash script with 8 subcommands that parses output from pytest, vitest, vite build, schemathesis, bandit, pnpm audit, pytest-bdd, and SonarQube. Each parser extracts metrics from raw CI output and posts a formatted markdown table as an MR comment.
Key technical challenges solved:
- ANSI escape code stripping: tee captures raw terminal output with color codes (\e[32m). All regex parsing silently failed until I wrote strip_ansi() using sed 's/\x1b\[[0-9;]*[a-zA-Z]//g'
- SonarQube API exploration: the measures/component endpoint returned 403 (insufficient privileges), so I designed a dual quality-gate fetch (PR gate + main branch gate) to show full metrics
- Green/red collapsible UX: HTML <details>/<summary> for passing results, expanded tables for failures
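The script itself is bash, but the same stripping expressed as the Python equivalent of that sed pattern makes the behavior easy to see:

```python
import re

# Same pattern as the sed command: ESC, '[', optional numeric parameters
# separated by ';', then a single letter naming the control function.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*[a-zA-Z]")

def strip_ansi(text: str) -> str:
    """Remove ANSI color/cursor escape sequences so regex parsers see plain text."""
    return ANSI_RE.sub("", text)
```

Without this, a pattern looking for e.g. "passed" at the start of a colored token never matches, because the escape bytes sit invisibly between the anchor and the word.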
Evidence:
8. PostgreSQL Connection Debugging
Connecting the CI runner to a staging Supabase project required solving a chain of infrastructure problems:
- IPv4 vs IPv6: Supabase direct connections are IPv6 only; the VPS has no IPv6. Solution: use the Supavisor pooler (IPv4)
- Pooler hostname: documentation showed aws-0-ap-southeast-1 but the actual host was aws-1-ap-northeast-1 (different region numbering)
- Password URL encoding: special characters (#, %) in the DB password broke connection strings. Solution: PGPASSWORD env var instead of embedding it in the URL
- PostgREST role grants: after DROP SCHEMA public CASCADE, the anon/authenticated/service_role roles lose all table permissions. Fixed with ALTER DEFAULT PRIVILEGES before running migrations
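The URL-encoding failure is easy to reproduce with the stdlib (the password here is a made-up example):

```python
from urllib.parse import quote

password = "p#ss%word"  # illustrative only
# In a postgres:// URL, '#' starts a fragment and '%' starts a percent-escape,
# so either character silently corrupts the connection string unless it is
# percent-encoded first.
encoded = quote(password, safe="")
# encoded == "p%23ss%25word"
```

Percent-encoding would have worked, but exporting PGPASSWORD was the simpler fix in CI because it bypasses URL parsing entirely.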
Evidence:
9. VPS Resource Analysis
Diagnosed OOM crashes on the 3.8GB VPS by mapping memory usage per process: Docker containers (~147MB total), dockerd (287MB), gitlab-runner (38MB). Found stale blue/green deployment containers consuming ~1GB unnecessarily. Drafted an upgrade proposal with specific recommendations (8GB RAM, 40GB disk).
Evidence:
- SSH session logs, docker stats, free -h, ps aux --sort=-%mem