This is the operator runbook for local proof, not a hosted deployment guide.
Copy `.env.example` to `.env`, then load it and sync the project environment:

```sh
set -a
source .env
set +a
UV_PROJECT_ENVIRONMENT="${UV_PROJECT_ENVIRONMENT:-$SOURCE_HARBOR_CACHE_ROOT/project-venv}" \
  uv sync --frozen --extra dev --extra e2e
```
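The `set -a` / `set +a` pair is what makes this work: while allexport is on, every variable assigned by `.env` is marked for export and so reaches child processes (`uv`, workers, and so on) without explicit `export` lines. A minimal illustration with a throwaway file (the `DEMO_API_PORT` name is made up for the demo):

```sh
# set -a marks every subsequent assignment for export, so values sourced
# from the env file reach child processes without explicit `export` lines.
tmp_env="$(mktemp)"
printf 'DEMO_API_PORT=9000\n' > "$tmp_env"   # stand-in for .env

set -a
. "$tmp_env"
set +a
rm -f "$tmp_env"

# A child process sees the variable only because set -a exported it.
seen_by_child="$(sh -c 'printf %s "$DEMO_API_PORT"')"
echo "child sees DEMO_API_PORT=$seen_by_child"
```

Without the `set -a` wrapper, the same `source .env` would set the variables only in the current shell, and children spawned by `uv sync` would not see them.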
Run `./bin/doctor` to see env/runtime blockers before boot, then run `./bin/bootstrap-full-stack` and `./bin/full-stack up`.

Keep the local boot topology honest:
- `./bin/bootstrap-full-stack` now treats the core stack and the reader stack separately.
- Core services can fall back to repo-owned processes under `.runtime-cache/` when Docker is unavailable but local `postgres` / `initdb` / `pg_ctl` / `temporal` binaries exist.
- `./bin/full-stack up` can now self-heal this same core layer: if worker preflight sees Temporal down, it first attempts the repo-owned `core_services.sh up` path before declaring the stack blocked.
- The repo-owned Temporal fallback keeps its state in `.runtime-cache/tmp/local-temporal/dev.sqlite` instead of silently borrowing a host-global state directory. A foreign process already listening on port 7233 does not count as a green local fallback.
- Opt into the reader stack with `--with-reader-stack 1` instead of assuming it is part of the default first-run contract.
- The API health endpoint is normally `http://127.0.0.1:9000/healthz` when that port stays free, but current local truth should always be read from `.runtime-cache/run/full-stack/resolved.env`.
- The web app is normally `http://127.0.0.1:3000`, and only when that port stays free; otherwise trust `.runtime-cache/run/full-stack/resolved.env`.

Useful status surfaces are `./bin/doctor`, `./bin/full-stack status`, the `/ops` page, and `bash scripts/ci/python_tests.sh`. The local paths that matter most during verification are:

- `.runtime-cache/logs/components/full-stack`
- `.runtime-cache/logs/local-core`
- `.runtime-cache/tmp/local-temporal/dev.sqlite`
- `.runtime-cache/reports`
- `.runtime-cache/tmp/web-runtime/workspace/apps/web`
- `.runtime-cache/tmp/web-runtime/workspace/apps/web/.env.local`

That temporary `.env.local` is intentional. It pins the local browser-facing
API base URL and the local write-session fallback into the repo-managed web
runtime so manual intake and other web writes keep working even when local env
profiles contain non-truthy strings like CI=false.
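The non-truthy-string hazard is easy to reproduce in plain shell: `CI=false` is a non-empty string, so a bare emptiness check treats it as enabled. A minimal sketch of explicit truthy parsing (the parsing rule below is illustrative, not the web runtime's exact logic):

```sh
# CI=false is a non-empty string, so a bare emptiness check treats it as
# "enabled"; truthiness has to be parsed explicitly.
CI=false

naive="off"
[ -n "$CI" ] && naive="on"           # wrong: fires even for CI=false

strict="off"
case "$(printf '%s' "$CI" | tr '[:upper:]' '[:lower:]')" in
  1|true|yes|on) strict="on" ;;      # accept only explicit truthy strings
esac

echo "naive=$naive strict=$strict"   # prints: naive=on strict=off
```

Pinning the values in the repo-managed `.env.local` sidesteps this class of bug for the web runtime regardless of what strings the surrounding env profiles contain.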
Current local video-first note:

- A `mode=full` run can now succeed again on the local stack.
- The active model for this lane is `gemini-3-flash-preview` (status ACTIVE).
- The latest mutation-readiness receipt is `.runtime-cache/reports/mutation/mutmut-cicd-stats.json`.

The disk-space and cache maintenance toolset:

```sh
./bin/disk-space-audit
./bin/disk-space-audit-check
./bin/disk-space-cleanup --wave safe
./bin/runtime-cache-maintenance
python3 scripts/runtime/maintain_external_cache.py --json
python3 scripts/runtime/docker_hygiene.py --json
./bin/disk-space-legacy-migration --json
./bin/disk-space-legacy-migration --apply --yes --auto-mappings
python3 scripts/governance/migrate_local_private_ledgers.py --json
python3 scripts/governance/report_worktree_status.py
```

The worktree-status report now fail-closes to `partial` when no authoritative local-private plan ledger exists yet, instead of exiting without a report.

Do not hand-delete `.runtime-cache/` when local verification expands the repo
footprint. Use runtime-cache-maintenance for repo-side maintenance, and use
disk-space-cleanup --wave ... only when you are intentionally running a
governed cleanup wave from
reference/disk-space-governance.md.
Current scratch-space rule:

- `.runtime-cache/tmp` is budgeted at 1024 MB / 80000 files.
- If `web-runtime/`, screenshots, or ad-hoc debug folders push it over budget, stop the stack first and clean only rebuildable scratch paths, then rerun `./bin/runtime-cache-maintenance`.

The two runtime-heavy local caches worth recognizing by name are:

- `.runtime-cache/tmp/web-runtime/` for the repo-managed Next.js workspace copy
- `.runtime-cache/tmp/local-temporal/` for repo-owned Temporal fallback state

Also keep `.runtime-cache/reports/mutation/mutmut-cicd-stats.json` in view: it holds the latest mutation-readiness receipt consumed by repo-side strict CI.

If you want a clean local runtime reset before another verification pass, use
./bin/full-stack down first; that shutdown path now also attempts to tear down
repo-owned core services instead of leaving Postgres/Temporal residue behind.
If you create an ad-hoc mutation workspace such as .runtime-cache/tmp/mutation-debug,
delete it after the debugging turn ends so the tmp/ budget does not fail-close
future governance runs.
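The tmp budget can be sanity-checked by hand before a governance run. A portable sketch using `du`/`find` (the real enforcement lives in `./bin/runtime-cache-maintenance`; the thresholds below simply mirror the 1024 MB / 80000-file rule stated above, and `SCRATCH_DIR` is a hypothetical override):

```sh
# Check a scratch directory against the tmp budget (1024 MB / 80000 files).
scratch="${SCRATCH_DIR:-$(mktemp -d)}"   # point SCRATCH_DIR at .runtime-cache/tmp
size_mb="$(du -sm "$scratch" | awk '{print $1}')"
file_count="$(find "$scratch" -type f | wc -l | tr -d ' ')"

budget_verdict="within-budget"
if [ "$size_mb" -gt 1024 ] || [ "$file_count" -gt 80000 ]; then
  budget_verdict="over-budget"   # stop the stack, clean rebuildable paths first
fi
echo "$scratch: ${size_mb}MB, ${file_count} files -> $budget_verdict"
```

An over-budget verdict here is a prompt to run the governed cleanup path, never a license to delete paths by hand.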
Do not hand-delete the repo-owned external cache root resolved by
`SOURCE_HARBOR_CACHE_ROOT` either. Inside that root:

- `project-venv/` and `state/*.db` are protected runtime objects.
- `workspace/`, `artifacts/`, `browser/`, and `tmp/` are governed by TTL, quiet-window, and budget rules.
- `project-venv-*` directories are verify-first cleanup candidates, not random junk.

If a local browser proof actually needs login state, SourceHarbor uses a dedicated browser root instead of your default personal Chrome root:
```sh
./bin/bootstrap-repo-chrome --json
./bin/start-repo-chrome --json
./bin/open-repo-chrome-tabs --site-set login-strong-check --json
python3 scripts/runtime/resolve_chrome_profile.py --mode repo-runtime --json
```
The runtime model is now:

- `SOURCE_HARBOR_CHROME_USER_DATA_DIR` and `SOURCE_HARBOR_CHROME_PROFILE_DIR` select the repo-owned Chrome user-data root and profile directory.
- `./bin/stop-repo-chrome` stops only the repo-owned Chrome instance and leaves other repos' Chrome roots alone.
- `./bin/open-repo-chrome-tabs --site-set login-strong-check` opens the current manual-login tab pack (the site roles in the table below).
Hosted CI stays login-free. Real-profile browser proof is a local-only lane.
When you intentionally keep browser proof sessions in local env files, treat
GITHUB_COOKIE, GOOGLE_COOKIE, RESEND_COOKIE, and YOUTUBE_COOKIE as
maintainer-local, read-only proof helpers only. They are not public repo
contract requirements, and they must never be committed, synced to shared
stores, or echoed into runtime artifacts.
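One way to honor the "never echoed into runtime artifacts" rule while still proving the helpers are present: report each variable's name and set/unset state, never its value. A minimal sketch (the loop is illustrative; `demo-value` is a stand-in so the demo has one hit):

```sh
# Report which maintainer-local cookie helpers are set WITHOUT printing
# their values, so nothing secret can leak into logs or runtime artifacts.
GITHUB_COOKIE="demo-value"                   # stand-in for a locally loaded value
status=""
for name in GITHUB_COOKIE GOOGLE_COOKIE RESEND_COOKIE YOUTUBE_COOKIE; do
  eval "val=\${$name:-}"
  if [ -n "$val" ]; then set_state="set"; else set_state="unset"; fi
  status="$status $name=$set_state"
done
echo "cookie check:$status"                  # names only, never values
```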
Treat these as the current local-proof site roles, not as a promise that every site should become a deep integration target.
| Site | Why it exists in the local runbook | Strongest layer today | Current gate | Verdict |
|---|---|---|---|---|
| Google Account | proves repo-owned Chrome login persistence and restart sanity | DOM / page-state proof | local login state when you intentionally run real-profile checks | already-covered |
| YouTube | proves the strongest current source + browser proof lane | hybrid: Data API + DOM / page-state proof | shared operator key persistence plus local login state when strict live proof is reopened | already-covered |
| Bilibili account center | proves whether the repo-owned profile still has the Bilibili account session needed for stronger local checks | DOM today, hybrid later only if account-side automation becomes worth the maintenance cost | human login in the repo-owned profile | external-blocked |
| Resend dashboard | proves notification/admin readiness and sender-chain follow-through, not source ingestion | admin UI + provider configuration | human login plus RESEND_FROM_EMAIL / sender-domain setup | external-blocked |
| RSSHub / RSS sources | source-universe intake coverage lives here, not in browser proof | HTTP / API | source availability and route/feed correctness | already-covered |
Do not treat Google Account or Resend as future ingestion targets just because they are part of the login-check tab set. They are operator proof surfaces.
If you want the longer-lived “what can still be deepened safely” map, read site-capability.md.
When a local run misbehaves:

- Check `.runtime-cache/logs/` for the matching component log.
- Query `/api/v1/jobs/<job-id>` or `/api/v1/feed/digests` if the issue is inside a pipeline run.
- Treat `./bin/smoke-full-stack --offline-fallback 0` as the long live-smoke lane, not as the same thing as the local supervisor proof.

When you need the strict repo-side closeout gate from a maintainer workstation,
./bin/repo-side-strict-ci --mode pre-push still prefers the standard-env
container path, but it can now fall back to the host-bootstrapped pre-push
quality gate when Docker itself is the only missing layer.
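That fallback reduces to a single predicate: prefer the container lane, drop to the host gate only when a usable Docker daemon is the one missing layer. A sketch of that shape (the function and lane names are illustrative, not the script's internals):

```sh
# Prefer the standard-env container lane; fall back to the host-bootstrapped
# pre-push gate only when a usable Docker daemon is the single missing layer.
choose_ci_lane() {
  # $1 = "yes" when `docker info` succeeded against a live daemon
  if [ "$1" = "yes" ]; then
    echo "standard-env-container"
  else
    echo "host-bootstrapped-pre-push"
  fi
}

docker_ok="no"
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker_ok="yes"
fi
lane="$(choose_ci_lane "$docker_ok")"
echo "selected lane: $lane"
```

Note the `docker info` probe: a `docker` binary on PATH with no reachable daemon should still select the host fallback, which is why `command -v` alone is not enough.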
For the disk-space map, safe cleanup boundary, and legacy-path migration rules, read reference/disk-space-governance.md.