Base URLs
- Backend: http://localhost:8000
- API base: http://localhost:8000/api
- OpenAPI: http://localhost:8000/docs
Protocols
- REST/JSON for most endpoints
- WebSocket for /api/ingest/ws/{job_id}
- SSE for POST /api/chat/stream
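Since POST /api/chat/stream uses SSE, the client must split the byte stream into events. The sketch below parses standard `text/event-stream` framing; the specific event names this endpoint emits are not documented here, so none are assumed.

```python
def parse_sse_events(raw: str) -> list[tuple[str, str]]:
    """Split a raw text/event-stream payload into (event, data) tuples.

    Follows standard SSE framing: "event:" sets the event name,
    "data:" lines accumulate, and a blank line terminates the event.
    """
    events: list[tuple[str, str]] = []
    event, data_lines = "message", []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # blank line ends the current event
            if data_lines:
                events.append((event, "\n".join(data_lines)))
            event, data_lines = "message", []
    if data_lines:  # flush a trailing event with no final blank line
        events.append((event, "\n".join(data_lines)))
    return events
```

In practice this would be fed incrementally from the streaming response body; the function is shown over a complete payload for clarity.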
Contract Conventions
- Request schemas are strict for most write endpoints.
- Provenance fields are explicit in chat responses.
- X-Response-Time is returned for latency observability.
- Rate limiting can return 429 with Retry-After.
Primary Endpoint Families
- /api/ingest*: ingestion, progress, screenshots, and page title
- /api/library*: source inventory and lifecycle
- /api/career*: taxonomy, scoring, recommendations
- /api/projects*: project ideation and tracking
- /api/social*: social OAuth connect/disconnect and direct publishing
- /api/network-ops*: LinkedIn network snapshot persistence and KPI deltas
- /api/chat*: chat and streaming
- /api/conversations*: conversation history and updates
- /api/feedback*: quality and review signals
- /api/evals*: evaluation runs and comparisons
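All families hang off the API base listed earlier, so a client can build URLs with a trivial join helper:

```python
API_BASE = "http://localhost:8000/api"  # API base from the Base URLs section

def endpoint(path: str) -> str:
    """Join a family-relative path onto the API base."""
    return f"{API_BASE}/{path.lstrip('/')}"
```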
GET /api/library/sources includes a metrics block with filtered-vs-total counts for sources, chunks, tools, and concepts in the current query scope.
YouTube SourceItem responses also include a description field sourced from ingest metadata and exposed on both list and detail endpoints.
GET /api/career/composites returns composite readiness_pct (coverage-based activation/sorting) and confidence_pct (evidence-depth signal), with score_pct retained as a readiness alias for compatibility.
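The composites contract implies a display ordering: readiness_pct drives activation/sorting, confidence_pct adds an evidence-depth signal, and score_pct is a readiness alias. A minimal sort sketch under those assumptions:

```python
def activation_order(composites: list[dict]) -> list[dict]:
    """Sort composites for display: readiness_pct (coverage) first,
    confidence_pct (evidence depth) as a tiebreaker.

    score_pct is documented as a readiness alias kept for compatibility,
    so it serves as a fallback when readiness_pct is absent.
    """
    def key(c: dict) -> tuple:
        readiness = c.get("readiness_pct", c.get("score_pct", 0))
        return (readiness, c.get("confidence_pct", 0))
    return sorted(composites, key=key, reverse=True)
```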
Services Surface Endpoint Map
The /monitor UI surface depends on these endpoint groups:
| Services area | Endpoints |
|---|---|
| Health | /health, /health?deep=true |
| Provider/runtime status | /api/stats/providers, /api/stats/slo |
| Cost/usage telemetry | /api/stats/costs |
| Eval runs and comparisons | /api/evals/runs, /api/evals/runs/{run_id}, /api/evals/compare |
| Databases topology | local store paths (data/*.db, data/chroma/) + architecture/data-store docs (no dedicated API endpoint) |
| Exploration queue | local curated monitor content for not-yet-tried external tools (no dedicated API endpoint) |
| Tracing status (integration-level) | runtime tracing integrations + stats endpoints |
| LinkedIn network KPI card | /api/network-ops/linkedin/summary |
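The table above can be mirrored as a lookup in the monitor client. The group keys below are illustrative names, not part of any API contract; the Databases and Exploration rows are omitted because they have no dedicated endpoint.

```python
# Services-area -> endpoint-group lookup for the /monitor surface.
# Keys are illustrative; endpoint paths come from the table above.
MONITOR_ENDPOINT_GROUPS: dict[str, tuple[str, ...]] = {
    "health": ("/health", "/health?deep=true"),
    "provider_status": ("/api/stats/providers", "/api/stats/slo"),
    "costs": ("/api/stats/costs",),
    "evals": ("/api/evals/runs", "/api/evals/compare"),
    "linkedin_kpi": ("/api/network-ops/linkedin/summary",),
}
```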
Provenance Contract (Chat)
Client-facing provenance fields:
- sources
- answer_origin (values: library_rag, web_rag, general, policy, skill)
- provenance_note - optional
- suggested_sources - optional
- model_name - optional
- provider
Treat suggested_sources as discovery suggestions, not as grounded citations.
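A UI consuming this contract should only render sources as citations when the answer is actually RAG-grounded. A minimal sketch of that decision, using the answer_origin values above:

```python
# Origins for which sources represent grounded citations.
GROUNDED_ORIGINS = {"library_rag", "web_rag"}

def grounded_citations(resp: dict) -> list:
    """Return the sources list only for RAG-grounded answers.

    For other origins (general, policy, skill) return nothing: per the
    contract, suggested_sources are discovery hints, not citations.
    """
    if resp.get("answer_origin") in GROUNDED_ORIGINS:
        return resp.get("sources", [])
    return []
```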
Provenance Contract (Other Surfaces)
- Signal policy endpoint GET /api/ingest/policy returns structured accept/reject criteria, threshold, model, career goal context, and aggregate stats. Source of truth: backend/services/ingest/policy.py.
- Ingest activity relevance entries can include optional relevance.model_name.
- Ingest traceability endpoint GET /api/ingest/{job_id}/trace includes pipeline events (step, version, duration, model when present).
- Ingest traceability endpoint GET /api/ingest/{job_id}/trace includes extracted skills (summary, concepts, tools, matched canonical skills).
- Ingest traceability endpoint GET /api/ingest/{job_id}/trace includes superpowers impact preview (meta-skill/composite deltas).
- Ingest traceability endpoint GET /api/ingest/{job_id}/trace includes patterns impact preview (before/after pattern deltas).
- Projects list/detail entries can include optional model_name for generated/extracted records.
- Career transparency endpoint GET /api/career/skill-provenance returns per-skill declared source, learnings hit counts, and extraction model summary for UI hover details.
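Since trace events carry step, duration, and (when present) model, a monitor can aggregate per-step latency from a job trace. A sketch, assuming the events list and field names described above and tolerating events without a duration:

```python
def trace_step_durations(trace: dict) -> dict[str, float]:
    """Sum duration per pipeline step from a GET /api/ingest/{job_id}/trace
    payload. Events missing a duration (documented as optional alongside
    model) are skipped rather than treated as zero.
    """
    totals: dict[str, float] = {}
    for ev in trace.get("events", []):
        if "duration" in ev:
            totals[ev["step"]] = totals.get(ev["step"], 0.0) + ev["duration"]
    return totals
```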