Merge remote-tracking branch 'origin/master' into fix/compose-smoke-dockerfiles

This commit is contained in:
Arjun (OpenClaw)
2026-02-07 17:32:23 +00:00
192 changed files with 15414 additions and 3740 deletions

backend/.coveragerc Normal file

@@ -0,0 +1,11 @@
[run]
branch = True
source =
app
omit =
*/.venv/*
alembic/versions/*
[report]
show_missing = True
skip_covered = True

backend/.gitignore vendored

@@ -1,6 +1,7 @@
__pycache__/
*.pyc
.venv/
.venv-tools/
.env
.runlogs/

backend/README.md Normal file

@@ -0,0 +1,178 @@
# Mission Control Backend (FastAPI)
This directory contains the **Mission Control backend API** (FastAPI + SQLModel) and its database migrations (Alembic).
- Default API base URL: http://localhost:8000
- Health endpoints: `/healthz`, `/readyz`
- API routes: `/api/v1/*`
## Requirements
- Python **3.12+**
- [`uv`](https://github.com/astral-sh/uv) (recommended; used by this repo)
- Postgres (local or Docker)
- Redis (local or Docker)
## Quick start (local backend + Docker Postgres/Redis)
From the repo root:
```bash
# start dependencies
cp .env.example .env
docker compose -f compose.yml --env-file .env up -d db redis
# run backend
cd backend
cp .env.example .env
uv sync --extra dev
uv run uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
Verify:
```bash
curl -f http://localhost:8000/healthz
```
## Configuration / environment variables
Backend settings are defined in `app/core/config.py` via `pydantic-settings`.
The backend loads env files in this order:
1. `backend/.env` (preferred)
2. `.env` (current working directory)
A starter file exists at `backend/.env.example`.
### Core
- `ENVIRONMENT` (default: `dev`)
- In `dev`, if you **don't** explicitly set `DB_AUTO_MIGRATE`, the backend defaults it to `true`.
- `LOG_LEVEL` (default: `INFO`)
- `DATABASE_URL`
- Default: `postgresql+psycopg://postgres:postgres@localhost:5432/openclaw_agency`
- Recommended local/dev default (matches `backend/.env.example`):
`postgresql+psycopg://postgres:postgres@localhost:5432/mission_control`
- `REDIS_URL` (default: `redis://localhost:6379/0`)
- `CORS_ORIGINS` (comma-separated)
- Example: `http://localhost:3000`
- `BASE_URL` (optional)
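Parsing a comma-separated `CORS_ORIGINS` value boils down to splitting and trimming. A minimal sketch (the helper name `parse_cors_origins` is hypothetical; the backend's actual parsing is handled by its settings layer):

```python
def parse_cors_origins(raw: str) -> list[str]:
    """Split a comma-separated CORS_ORIGINS value into a clean list,
    dropping empty entries and surrounding whitespace."""
    return [origin.strip() for origin in raw.split(",") if origin.strip()]
```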
### Database lifecycle
- `DB_AUTO_MIGRATE`
- If `true`: on startup, the backend attempts to run Alembic migrations (`alembic upgrade head`).
- If there are **no** Alembic revision files yet, it falls back to `SQLModel.metadata.create_all`.
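The startup decision described above can be summarized as a tiny sketch (illustrative only; `startup_schema_strategy` is not an actual backend function, and the real logic runs inside the app's startup hook):

```python
from pathlib import Path


def startup_schema_strategy(auto_migrate: bool, versions_dir: Path) -> str:
    """Pick the startup schema strategy described above."""
    if not auto_migrate:
        return "skip"  # nothing runs automatically; migrate manually
    if any(versions_dir.glob("*.py")):
        return "alembic upgrade head"
    # No revision files yet: fall back to SQLModel.metadata.create_all
    return "create_all"
```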
### Auth (Clerk)
Clerk is used for user authentication (optional for local/self-host in many setups).
- `CLERK_JWKS_URL` (string)
- `CLERK_VERIFY_IAT` (default: `true`)
- `CLERK_LEEWAY` (default: `10.0`)
## Database migrations (Alembic)
Migrations live in `backend/alembic/versions/*`.
Common commands:
```bash
cd backend
# apply migrations
uv run alembic upgrade head
# create a new migration (example)
uv run alembic revision --autogenerate -m "add foo"
```
Notes:
- The backend can also auto-run migrations on startup when `DB_AUTO_MIGRATE=true`.
- The database URL is normalized so `postgresql://...` becomes `postgresql+psycopg://...`.
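The normalization in the last note is a simple prefix rewrite. A sketch of the idea (the function name is assumed; the backend performs this inside its settings handling):

```python
def normalize_database_url(url: str) -> str:
    """Rewrite plain postgresql:// URLs to the psycopg driver form,
    leaving already-qualified URLs untouched."""
    prefix = "postgresql://"
    if url.startswith(prefix):
        return "postgresql+psycopg://" + url[len(prefix):]
    return url
```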
## Running tests / lint / typecheck
From repo root (recommended):
```bash
make backend-test
make backend-lint
make backend-typecheck
make backend-coverage
```
Or from `backend/`:
```bash
cd backend
uv run pytest
uv run flake8 --config .flake8
uv run mypy
```
Formatting:
```bash
make backend-format
make backend-format-check
```
## Scripts
Backend scripts live in `backend/scripts/`:
- `export_openapi.py`: export the OpenAPI schema
- `seed_demo.py`: seed demo data (if applicable)
- `sync_gateway_templates.py`: sync repo templates to an existing gateway
Run with:
```bash
cd backend
uv run python scripts/export_openapi.py
```
## Troubleshooting
### Backend can't connect to Postgres
- If you started Postgres via compose, make sure it is healthy:
```bash
docker compose -f compose.yml --env-file .env ps
docker compose -f compose.yml --env-file .env logs -f --tail=200 db
```
- If backend runs **locally** (not in compose), `DATABASE_URL` should usually point at `localhost`.
### Backend can't connect to Redis
- Ensure the Redis container is up:
```bash
docker compose -f compose.yml --env-file .env logs -f --tail=200 redis
```
- Confirm `REDIS_URL=redis://localhost:6379/0` when running backend locally.
### CORS issues from the frontend
- Set `CORS_ORIGINS=http://localhost:3000` (or a comma-separated list) in `backend/.env`.
- Restart the backend after changing env vars.
### Alembic / migrations not applying
- If you want deterministic behavior, run migrations manually:
```bash
cd backend
uv run alembic upgrade head
```
- If `DB_AUTO_MIGRATE=false`, the backend does not run Alembic on startup, so run `alembic upgrade head` manually to bring the schema up to date.


@@ -0,0 +1,54 @@
"""board groups
Revision ID: 12772fdcdfe9
Revises: 9f0c4fb2a7b8
Create Date: 2026-02-07 17:13:50.597099
"""
from __future__ import annotations
from alembic import op
import sqlalchemy as sa
import sqlmodel
# revision identifiers, used by Alembic.
revision = "12772fdcdfe9"
down_revision = "9f0c4fb2a7b8"
branch_labels = None
depends_on = None
def upgrade() -> None:
op.create_table(
"board_groups",
sa.Column("id", sa.Uuid(), nullable=False),
sa.Column("name", sqlmodel.sql.sqltypes.AutoString(), nullable=False),
sa.Column("slug", sqlmodel.sql.sqltypes.AutoString(), nullable=False),
sa.Column("description", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
sa.Column("created_at", sa.DateTime(), nullable=False),
sa.Column("updated_at", sa.DateTime(), nullable=False),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("ix_board_groups_slug", "board_groups", ["slug"], unique=False)
op.add_column("boards", sa.Column("board_group_id", sa.Uuid(), nullable=True))
op.create_index("ix_boards_board_group_id", "boards", ["board_group_id"], unique=False)
op.create_foreign_key(
"fk_boards_board_group_id_board_groups",
"boards",
"board_groups",
["board_group_id"],
["id"],
)
def downgrade() -> None:
op.drop_constraint(
"fk_boards_board_group_id_board_groups", "boards", type_="foreignkey"
)
op.drop_index("ix_boards_board_group_id", table_name="boards")
op.drop_column("boards", "board_group_id")
op.drop_index("ix_board_groups_slug", table_name="board_groups")
op.drop_table("board_groups")


@@ -0,0 +1,122 @@
"""board group memory
Revision ID: 23c771c93430
Revises: 12772fdcdfe9
Create Date: 2026-02-07 18:00:19.065861
"""
from __future__ import annotations
from alembic import op
import sqlalchemy as sa
import sqlmodel
# revision identifiers, used by Alembic.
revision = "23c771c93430"
down_revision = "12772fdcdfe9"
branch_labels = None
depends_on = None
def upgrade() -> None:
# Repair drift: it's possible to end up with alembic_version stamped at 12772fdcdfe9
# without actually applying the board groups schema changes. This migration makes the
# required board_groups + boards.board_group_id objects exist before adding group memory.
conn = op.get_bind()
inspector = sa.inspect(conn)
if not inspector.has_table("board_groups"):
op.create_table(
"board_groups",
sa.Column("id", sa.Uuid(), nullable=False),
sa.Column("name", sqlmodel.sql.sqltypes.AutoString(), nullable=False),
sa.Column("slug", sqlmodel.sql.sqltypes.AutoString(), nullable=False),
sa.Column("description", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
sa.Column("created_at", sa.DateTime(), nullable=False),
sa.Column("updated_at", sa.DateTime(), nullable=False),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("ix_board_groups_slug", "board_groups", ["slug"], unique=False)
else:
indexes = {idx.get("name") for idx in inspector.get_indexes("board_groups")}
if "ix_board_groups_slug" not in indexes:
op.create_index("ix_board_groups_slug", "board_groups", ["slug"], unique=False)
inspector = sa.inspect(conn)
board_cols = {col.get("name") for col in inspector.get_columns("boards")}
if "board_group_id" not in board_cols:
op.add_column("boards", sa.Column("board_group_id", sa.Uuid(), nullable=True))
inspector = sa.inspect(conn)
board_indexes = {idx.get("name") for idx in inspector.get_indexes("boards")}
if "ix_boards_board_group_id" not in board_indexes:
op.create_index("ix_boards_board_group_id", "boards", ["board_group_id"], unique=False)
def _has_board_groups_fk() -> bool:
for fk in inspector.get_foreign_keys("boards"):
if fk.get("referred_table") != "board_groups":
continue
if fk.get("constrained_columns") != ["board_group_id"]:
continue
if fk.get("referred_columns") != ["id"]:
continue
return True
return False
if not _has_board_groups_fk():
op.create_foreign_key(
"fk_boards_board_group_id_board_groups",
"boards",
"board_groups",
["board_group_id"],
["id"],
)
inspector = sa.inspect(conn)
if not inspector.has_table("board_group_memory"):
op.create_table(
"board_group_memory",
sa.Column("id", sa.Uuid(), nullable=False),
sa.Column("board_group_id", sa.Uuid(), nullable=False),
sa.Column("content", sa.Text(), nullable=False),
sa.Column("tags", sa.JSON(), nullable=True),
sa.Column(
"is_chat",
sa.Boolean(),
server_default=sa.text("false"),
nullable=False,
),
sa.Column("source", sa.Text(), nullable=True),
sa.Column("created_at", sa.DateTime(), nullable=False),
sa.ForeignKeyConstraint(["board_group_id"], ["board_groups.id"]),
sa.PrimaryKeyConstraint("id"),
)
op.create_index(
"ix_board_group_memory_board_group_id",
"board_group_memory",
["board_group_id"],
unique=False,
)
op.create_index(
"ix_board_group_memory_is_chat",
"board_group_memory",
["is_chat"],
unique=False,
)
op.create_index(
"ix_board_group_memory_board_group_id_is_chat_created_at",
"board_group_memory",
["board_group_id", "is_chat", "created_at"],
unique=False,
)
def downgrade() -> None:
op.drop_index(
"ix_board_group_memory_board_group_id_is_chat_created_at",
table_name="board_group_memory",
)
op.drop_index("ix_board_group_memory_is_chat", table_name="board_group_memory")
op.drop_index("ix_board_group_memory_board_group_id", table_name="board_group_memory")
op.drop_table("board_group_memory")


@@ -0,0 +1,67 @@
"""ensure board group memory table
Revision ID: 5fb3b2491090
Revises: 23c771c93430
Create Date: 2026-02-07 18:07:20.588662
"""
from __future__ import annotations
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = "5fb3b2491090"
down_revision = "23c771c93430"
branch_labels = None
depends_on = None
def upgrade() -> None:
conn = op.get_bind()
inspector = sa.inspect(conn)
if inspector.has_table("board_group_memory"):
return
op.create_table(
"board_group_memory",
sa.Column("id", sa.Uuid(), nullable=False),
sa.Column("board_group_id", sa.Uuid(), nullable=False),
sa.Column("content", sa.Text(), nullable=False),
sa.Column("tags", sa.JSON(), nullable=True),
sa.Column(
"is_chat",
sa.Boolean(),
server_default=sa.text("false"),
nullable=False,
),
sa.Column("source", sa.Text(), nullable=True),
sa.Column("created_at", sa.DateTime(), nullable=False),
sa.ForeignKeyConstraint(["board_group_id"], ["board_groups.id"]),
sa.PrimaryKeyConstraint("id"),
)
op.create_index(
"ix_board_group_memory_board_group_id",
"board_group_memory",
["board_group_id"],
unique=False,
)
op.create_index(
"ix_board_group_memory_is_chat",
"board_group_memory",
["is_chat"],
unique=False,
)
op.create_index(
"ix_board_group_memory_board_group_id_is_chat_created_at",
"board_group_memory",
["board_group_id", "is_chat", "created_at"],
unique=False,
)
def downgrade() -> None:
# This is a repair migration. Downgrading from 5fb3b2491090 -> 23c771c93430
# should keep the board_group_memory table (it belongs to the prior revision).
return


@@ -0,0 +1,79 @@
"""repair board groups schema
Revision ID: af403671a8c4
Revises: 5fb3b2491090
Create Date: 2026-02-07
"""
from __future__ import annotations
from alembic import op
import sqlalchemy as sa
import sqlmodel
# revision identifiers, used by Alembic.
revision = "af403671a8c4"
down_revision = "5fb3b2491090"
branch_labels = None
depends_on = None
def upgrade() -> None:
# Repair drift: it is possible to end up with alembic_version stamped at (or beyond)
# the board group revisions without having the underlying DB objects present.
conn = op.get_bind()
inspector = sa.inspect(conn)
if not inspector.has_table("board_groups"):
op.create_table(
"board_groups",
sa.Column("id", sa.Uuid(), nullable=False),
sa.Column("name", sqlmodel.sql.sqltypes.AutoString(), nullable=False),
sa.Column("slug", sqlmodel.sql.sqltypes.AutoString(), nullable=False),
sa.Column("description", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
sa.Column("created_at", sa.DateTime(), nullable=False),
sa.Column("updated_at", sa.DateTime(), nullable=False),
sa.PrimaryKeyConstraint("id"),
)
op.create_index("ix_board_groups_slug", "board_groups", ["slug"], unique=False)
else:
indexes = {idx.get("name") for idx in inspector.get_indexes("board_groups")}
if "ix_board_groups_slug" not in indexes:
op.create_index("ix_board_groups_slug", "board_groups", ["slug"], unique=False)
inspector = sa.inspect(conn)
board_cols = {col.get("name") for col in inspector.get_columns("boards")}
if "board_group_id" not in board_cols:
op.add_column("boards", sa.Column("board_group_id", sa.Uuid(), nullable=True))
inspector = sa.inspect(conn)
board_indexes = {idx.get("name") for idx in inspector.get_indexes("boards")}
if "ix_boards_board_group_id" not in board_indexes:
op.create_index("ix_boards_board_group_id", "boards", ["board_group_id"], unique=False)
def _has_board_groups_fk() -> bool:
for fk in inspector.get_foreign_keys("boards"):
if fk.get("referred_table") != "board_groups":
continue
if fk.get("constrained_columns") != ["board_group_id"]:
continue
if fk.get("referred_columns") != ["id"]:
continue
return True
return False
if not _has_board_groups_fk():
op.create_foreign_key(
"fk_boards_board_group_id_board_groups",
"boards",
"board_groups",
["board_group_id"],
["id"],
)
def downgrade() -> None:
# Repair migration: do not attempt to undo drift fixes automatically.
return


@@ -87,7 +87,9 @@ async def _require_gateway_main(
) -> tuple[Gateway, GatewayClientConfig]:
session_key = (agent.openclaw_session_id or "").strip()
if not session_key:
-        raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="Agent missing session key")
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN, detail="Agent missing session key"
+        )
gateway = (
await session.exec(select(Gateway).where(col(Gateway.main_session_key) == session_key))
).first()
@@ -675,8 +677,10 @@ async def broadcast_gateway_lead_message(
gateway, config = await _require_gateway_main(session, agent_ctx.agent)
-    statement = select(Board).where(col(Board.gateway_id) == gateway.id).order_by(
-        col(Board.created_at).desc()
+    statement = (
+        select(Board)
+        .where(col(Board.gateway_id) == gateway.id)
+        .order_by(col(Board.created_at).desc())
)
if payload.board_ids:
statement = statement.where(col(Board.id).in_(payload.board_ids))


@@ -0,0 +1,400 @@
from __future__ import annotations
import asyncio
import json
from collections.abc import AsyncIterator
from datetime import datetime, timezone
from uuid import UUID
from fastapi import APIRouter, Depends, HTTPException, Query, Request, status
from sqlalchemy import func
from sqlmodel import col, select
from sqlmodel.ext.asyncio.session import AsyncSession
from sse_starlette.sse import EventSourceResponse
from app.api.deps import ActorContext, get_board_or_404, require_admin_auth, require_admin_or_agent
from app.core.auth import AuthContext
from app.core.config import settings
from app.core.time import utcnow
from app.db.pagination import paginate
from app.db.session import async_session_maker, get_session
from app.integrations.openclaw_gateway import GatewayConfig as GatewayClientConfig
from app.integrations.openclaw_gateway import OpenClawGatewayError, ensure_session, send_message
from app.models.agents import Agent
from app.models.board_group_memory import BoardGroupMemory
from app.models.board_groups import BoardGroup
from app.models.boards import Board
from app.models.gateways import Gateway
from app.schemas.board_group_memory import BoardGroupMemoryCreate, BoardGroupMemoryRead
from app.schemas.pagination import DefaultLimitOffsetPage
from app.services.mentions import extract_mentions, matches_agent_mention
router = APIRouter(tags=["board-group-memory"])
group_router = APIRouter(prefix="/board-groups/{group_id}/memory", tags=["board-group-memory"])
board_router = APIRouter(prefix="/boards/{board_id}/group-memory", tags=["board-group-memory"])
def _parse_since(value: str | None) -> datetime | None:
if not value:
return None
normalized = value.strip()
if not normalized:
return None
normalized = normalized.replace("Z", "+00:00")
try:
parsed = datetime.fromisoformat(normalized)
except ValueError:
return None
if parsed.tzinfo is not None:
return parsed.astimezone(timezone.utc).replace(tzinfo=None)
return parsed
def _serialize_memory(memory: BoardGroupMemory) -> dict[str, object]:
return BoardGroupMemoryRead.model_validate(memory, from_attributes=True).model_dump(mode="json")
async def _gateway_config(session: AsyncSession, board: Board) -> GatewayClientConfig | None:
if board.gateway_id is None:
return None
gateway = await session.get(Gateway, board.gateway_id)
if gateway is None or not gateway.url:
return None
return GatewayClientConfig(url=gateway.url, token=gateway.token)
async def _send_agent_message(
*,
session_key: str,
config: GatewayClientConfig,
agent_name: str,
message: str,
deliver: bool = False,
) -> None:
await ensure_session(session_key, config=config, label=agent_name)
await send_message(message, session_key=session_key, config=config, deliver=deliver)
async def _fetch_memory_events(
session: AsyncSession,
board_group_id: UUID,
since: datetime,
is_chat: bool | None = None,
) -> list[BoardGroupMemory]:
statement = (
select(BoardGroupMemory).where(col(BoardGroupMemory.board_group_id) == board_group_id)
# Old/invalid rows (empty/whitespace-only content) can exist; exclude them to
# satisfy the NonEmptyStr response schema.
.where(func.length(func.trim(col(BoardGroupMemory.content))) > 0)
)
if is_chat is not None:
statement = statement.where(col(BoardGroupMemory.is_chat) == is_chat)
statement = statement.where(col(BoardGroupMemory.created_at) >= since).order_by(
col(BoardGroupMemory.created_at)
)
return list(await session.exec(statement))
async def _notify_group_memory_targets(
*,
session: AsyncSession,
group: BoardGroup,
memory: BoardGroupMemory,
actor: ActorContext,
) -> None:
if not memory.content:
return
tags = set(memory.tags or [])
mentions = extract_mentions(memory.content)
is_broadcast = "broadcast" in tags or "all" in mentions
# Fetch group boards + agents.
boards = list(await session.exec(select(Board).where(col(Board.board_group_id) == group.id)))
if not boards:
return
board_by_id = {board.id: board for board in boards}
board_ids = list(board_by_id.keys())
agents = list(await session.exec(select(Agent).where(col(Agent.board_id).in_(board_ids))))
targets: dict[str, Agent] = {}
for agent in agents:
if not agent.openclaw_session_id:
continue
if actor.actor_type == "agent" and actor.agent and agent.id == actor.agent.id:
continue
if is_broadcast:
targets[str(agent.id)] = agent
continue
if agent.is_board_lead:
targets[str(agent.id)] = agent
continue
if mentions and matches_agent_mention(agent, mentions):
targets[str(agent.id)] = agent
if not targets:
return
actor_name = "User"
if actor.actor_type == "agent" and actor.agent:
actor_name = actor.agent.name
elif actor.user:
actor_name = actor.user.preferred_name or actor.user.name or actor_name
snippet = memory.content.strip()
if len(snippet) > 800:
snippet = f"{snippet[:797]}..."
base_url = settings.base_url or "http://localhost:8000"
for agent in targets.values():
session_key = agent.openclaw_session_id
if not session_key:
continue
board_id = agent.board_id
if board_id is None:
continue
board = board_by_id.get(board_id)
if board is None:
continue
config = await _gateway_config(session, board)
if config is None:
continue
mentioned = matches_agent_mention(agent, mentions)
if is_broadcast:
header = "GROUP BROADCAST"
elif mentioned:
header = "GROUP CHAT MENTION"
else:
header = "GROUP CHAT"
message = (
f"{header}\n"
f"Group: {group.name}\n"
f"From: {actor_name}\n\n"
f"{snippet}\n\n"
"Reply via group chat (shared across linked boards):\n"
f"POST {base_url}/api/v1/boards/{board.id}/group-memory\n"
'Body: {"content":"...","tags":["chat"]}'
)
try:
await _send_agent_message(
session_key=session_key,
config=config,
agent_name=agent.name,
message=message,
)
except OpenClawGatewayError:
continue
@group_router.get("", response_model=DefaultLimitOffsetPage[BoardGroupMemoryRead])
async def list_board_group_memory(
group_id: UUID,
is_chat: bool | None = Query(default=None),
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> DefaultLimitOffsetPage[BoardGroupMemoryRead]:
group = await session.get(BoardGroup, group_id)
if group is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
statement = (
select(BoardGroupMemory).where(col(BoardGroupMemory.board_group_id) == group_id)
# Old/invalid rows (empty/whitespace-only content) can exist; exclude them to
# satisfy the NonEmptyStr response schema.
.where(func.length(func.trim(col(BoardGroupMemory.content))) > 0)
)
if is_chat is not None:
statement = statement.where(col(BoardGroupMemory.is_chat) == is_chat)
statement = statement.order_by(col(BoardGroupMemory.created_at).desc())
return await paginate(session, statement)
@group_router.get("/stream")
async def stream_board_group_memory(
group_id: UUID,
request: Request,
since: str | None = Query(default=None),
is_chat: bool | None = Query(default=None),
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> EventSourceResponse:
group = await session.get(BoardGroup, group_id)
if group is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
since_dt = _parse_since(since) or utcnow()
last_seen = since_dt
async def event_generator() -> AsyncIterator[dict[str, str]]:
nonlocal last_seen
while True:
if await request.is_disconnected():
break
async with async_session_maker() as s:
memories = await _fetch_memory_events(
s,
group_id,
last_seen,
is_chat=is_chat,
)
for memory in memories:
if memory.created_at > last_seen:
last_seen = memory.created_at
payload = {"memory": _serialize_memory(memory)}
yield {"event": "memory", "data": json.dumps(payload)}
await asyncio.sleep(2)
return EventSourceResponse(event_generator(), ping=15)
@group_router.post("", response_model=BoardGroupMemoryRead)
async def create_board_group_memory(
group_id: UUID,
payload: BoardGroupMemoryCreate,
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> BoardGroupMemory:
group = await session.get(BoardGroup, group_id)
if group is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
actor = ActorContext(actor_type="user", user=auth.user)
tags = set(payload.tags or [])
is_chat = "chat" in tags
mentions = extract_mentions(payload.content)
should_notify = is_chat or "broadcast" in tags or "all" in mentions
source = payload.source
if should_notify and not source:
if actor.actor_type == "agent" and actor.agent:
source = actor.agent.name
elif actor.user:
source = actor.user.preferred_name or actor.user.name or "User"
memory = BoardGroupMemory(
board_group_id=group_id,
content=payload.content,
tags=payload.tags,
is_chat=is_chat,
source=source,
)
session.add(memory)
await session.commit()
await session.refresh(memory)
if should_notify:
await _notify_group_memory_targets(session=session, group=group, memory=memory, actor=actor)
return memory
@board_router.get("", response_model=DefaultLimitOffsetPage[BoardGroupMemoryRead])
async def list_board_group_memory_for_board(
is_chat: bool | None = Query(default=None),
board: Board = Depends(get_board_or_404),
session: AsyncSession = Depends(get_session),
actor: ActorContext = Depends(require_admin_or_agent),
) -> DefaultLimitOffsetPage[BoardGroupMemoryRead]:
if actor.actor_type == "agent" and actor.agent:
if actor.agent.board_id and actor.agent.board_id != board.id:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN)
group_id = board.board_group_id
if group_id is None:
statement = select(BoardGroupMemory).where(col(BoardGroupMemory.id).is_(None))
return await paginate(session, statement)
statement = (
select(BoardGroupMemory).where(col(BoardGroupMemory.board_group_id) == group_id)
# Old/invalid rows (empty/whitespace-only content) can exist; exclude them to
# satisfy the NonEmptyStr response schema.
.where(func.length(func.trim(col(BoardGroupMemory.content))) > 0)
)
if is_chat is not None:
statement = statement.where(col(BoardGroupMemory.is_chat) == is_chat)
statement = statement.order_by(col(BoardGroupMemory.created_at).desc())
return await paginate(session, statement)
@board_router.get("/stream")
async def stream_board_group_memory_for_board(
request: Request,
board: Board = Depends(get_board_or_404),
actor: ActorContext = Depends(require_admin_or_agent),
since: str | None = Query(default=None),
is_chat: bool | None = Query(default=None),
) -> EventSourceResponse:
if actor.actor_type == "agent" and actor.agent:
if actor.agent.board_id and actor.agent.board_id != board.id:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN)
group_id = board.board_group_id
since_dt = _parse_since(since) or utcnow()
last_seen = since_dt
async def event_generator() -> AsyncIterator[dict[str, str]]:
nonlocal last_seen
while True:
if await request.is_disconnected():
break
if group_id is None:
await asyncio.sleep(2)
continue
async with async_session_maker() as session:
memories = await _fetch_memory_events(
session,
group_id,
last_seen,
is_chat=is_chat,
)
for memory in memories:
if memory.created_at > last_seen:
last_seen = memory.created_at
payload = {"memory": _serialize_memory(memory)}
yield {"event": "memory", "data": json.dumps(payload)}
await asyncio.sleep(2)
return EventSourceResponse(event_generator(), ping=15)
@board_router.post("", response_model=BoardGroupMemoryRead)
async def create_board_group_memory_for_board(
payload: BoardGroupMemoryCreate,
board: Board = Depends(get_board_or_404),
session: AsyncSession = Depends(get_session),
actor: ActorContext = Depends(require_admin_or_agent),
) -> BoardGroupMemory:
if actor.actor_type == "agent" and actor.agent:
if actor.agent.board_id and actor.agent.board_id != board.id:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN)
group_id = board.board_group_id
if group_id is None:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail="Board is not in a board group",
)
group = await session.get(BoardGroup, group_id)
if group is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
tags = set(payload.tags or [])
is_chat = "chat" in tags
mentions = extract_mentions(payload.content)
should_notify = is_chat or "broadcast" in tags or "all" in mentions
source = payload.source
if should_notify and not source:
if actor.actor_type == "agent" and actor.agent:
source = actor.agent.name
elif actor.user:
source = actor.user.preferred_name or actor.user.name or "User"
memory = BoardGroupMemory(
board_group_id=group_id,
content=payload.content,
tags=payload.tags,
is_chat=is_chat,
source=source,
)
session.add(memory)
await session.commit()
await session.refresh(memory)
if should_notify:
await _notify_group_memory_targets(session=session, group=group, memory=memory, actor=actor)
return memory
router.include_router(group_router)
router.include_router(board_router)


@@ -0,0 +1,221 @@
from __future__ import annotations
import re
from typing import Any, cast
from uuid import UUID, uuid4
from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy import delete, func, update
from sqlmodel import col, select
from sqlmodel.ext.asyncio.session import AsyncSession
from app.api.deps import ActorContext, require_admin_auth, require_admin_or_agent
from app.core.auth import AuthContext
from app.core.time import utcnow
from app.db import crud
from app.db.pagination import paginate
from app.db.session import get_session
from app.models.agents import Agent
from app.models.board_groups import BoardGroup
from app.models.boards import Board
from app.models.gateways import Gateway
from app.schemas.board_group_heartbeat import (
BoardGroupHeartbeatApply,
BoardGroupHeartbeatApplyResult,
)
from app.schemas.board_groups import BoardGroupCreate, BoardGroupRead, BoardGroupUpdate
from app.schemas.common import OkResponse
from app.schemas.pagination import DefaultLimitOffsetPage
from app.schemas.view_models import BoardGroupSnapshot
from app.services.agent_provisioning import DEFAULT_HEARTBEAT_CONFIG, sync_gateway_agent_heartbeats
from app.services.board_group_snapshot import build_group_snapshot
router = APIRouter(prefix="/board-groups", tags=["board-groups"])
def _slugify(value: str) -> str:
slug = re.sub(r"[^a-z0-9]+", "-", value.lower()).strip("-")
return slug or uuid4().hex
@router.get("", response_model=DefaultLimitOffsetPage[BoardGroupRead])
async def list_board_groups(
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> DefaultLimitOffsetPage[BoardGroupRead]:
statement = select(BoardGroup).order_by(func.lower(col(BoardGroup.name)).asc())
return await paginate(session, statement)
@router.post("", response_model=BoardGroupRead)
async def create_board_group(
payload: BoardGroupCreate,
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> BoardGroup:
data = payload.model_dump()
if not (data.get("slug") or "").strip():
data["slug"] = _slugify(data.get("name") or "")
return await crud.create(session, BoardGroup, **data)
@router.get("/{group_id}", response_model=BoardGroupRead)
async def get_board_group(
group_id: UUID,
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> BoardGroup:
group = await session.get(BoardGroup, group_id)
if group is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
return group
@router.get("/{group_id}/snapshot", response_model=BoardGroupSnapshot)
async def get_board_group_snapshot(
group_id: UUID,
include_done: bool = False,
per_board_task_limit: int = 5,
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> BoardGroupSnapshot:
group = await session.get(BoardGroup, group_id)
if group is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
if per_board_task_limit < 0:
raise HTTPException(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY)
return await build_group_snapshot(
session,
group=group,
exclude_board_id=None,
include_done=include_done,
per_board_task_limit=per_board_task_limit,
)
@router.post("/{group_id}/heartbeat", response_model=BoardGroupHeartbeatApplyResult)
async def apply_board_group_heartbeat(
group_id: UUID,
payload: BoardGroupHeartbeatApply,
session: AsyncSession = Depends(get_session),
actor: ActorContext = Depends(require_admin_or_agent),
) -> BoardGroupHeartbeatApplyResult:
group = await session.get(BoardGroup, group_id)
if group is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
if actor.actor_type == "agent":
agent = actor.agent
if agent is None:
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED)
if agent.board_id is None:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN)
if not agent.is_board_lead:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN)
board = await session.get(Board, agent.board_id)
if board is None or board.board_group_id != group_id:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN)
boards = list(await session.exec(select(Board).where(col(Board.board_group_id) == group_id)))
board_by_id = {board.id: board for board in boards}
board_ids = list(board_by_id.keys())
if not board_ids:
return BoardGroupHeartbeatApplyResult(
board_group_id=group_id,
requested=payload.model_dump(mode="json"),
updated_agent_ids=[],
failed_agent_ids=[],
)
agents = list(await session.exec(select(Agent).where(col(Agent.board_id).in_(board_ids))))
if not payload.include_board_leads:
agents = [agent for agent in agents if not agent.is_board_lead]
updated_agent_ids: list[UUID] = []
for agent in agents:
raw = agent.heartbeat_config
heartbeat: dict[str, Any] = (
cast(dict[str, Any], dict(raw))
if isinstance(raw, dict)
else cast(dict[str, Any], DEFAULT_HEARTBEAT_CONFIG.copy())
)
heartbeat["every"] = payload.every
if payload.target is not None:
heartbeat["target"] = payload.target
elif "target" not in heartbeat:
heartbeat["target"] = DEFAULT_HEARTBEAT_CONFIG.get("target", "none")
agent.heartbeat_config = heartbeat
agent.updated_at = utcnow()
session.add(agent)
updated_agent_ids.append(agent.id)
await session.commit()
agents_by_gateway_id: dict[UUID, list[Agent]] = {}
for agent in agents:
board_id = agent.board_id
if board_id is None:
continue
board = board_by_id.get(board_id)
if board is None or board.gateway_id is None:
continue
agents_by_gateway_id.setdefault(board.gateway_id, []).append(agent)
failed_agent_ids: list[UUID] = []
gateway_ids = list(agents_by_gateway_id.keys())
gateways = list(await session.exec(select(Gateway).where(col(Gateway.id).in_(gateway_ids))))
gateway_by_id = {gateway.id: gateway for gateway in gateways}
for gateway_id, gateway_agents in agents_by_gateway_id.items():
gateway = gateway_by_id.get(gateway_id)
if gateway is None or not gateway.url or not gateway.workspace_root:
failed_agent_ids.extend([agent.id for agent in gateway_agents])
continue
try:
await sync_gateway_agent_heartbeats(gateway, gateway_agents)
except Exception:
failed_agent_ids.extend([agent.id for agent in gateway_agents])
return BoardGroupHeartbeatApplyResult(
board_group_id=group_id,
requested=payload.model_dump(mode="json"),
updated_agent_ids=updated_agent_ids,
failed_agent_ids=failed_agent_ids,
)
@router.patch("/{group_id}", response_model=BoardGroupRead)
async def update_board_group(
payload: BoardGroupUpdate,
group_id: UUID,
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> BoardGroup:
group = await session.get(BoardGroup, group_id)
if group is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
updates = payload.model_dump(exclude_unset=True)
# Re-derive the slug when the client clears it (null or blank); the column is non-nullable.
if "slug" in updates and (updates["slug"] is None or not updates["slug"].strip()):
updates["slug"] = _slugify(updates.get("name") or group.name)
for key, value in updates.items():
setattr(group, key, value)
group.updated_at = utcnow()
return await crud.save(session, group)
@router.delete("/{group_id}", response_model=OkResponse)
async def delete_board_group(
group_id: UUID,
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> OkResponse:
group = await session.get(BoardGroup, group_id)
if group is None:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
# Boards reference groups, so clear the FK first to keep deletes simple.
await session.execute(
update(Board).where(col(Board.board_group_id) == group_id).values(board_group_id=None)
)
await session.execute(delete(BoardGroup).where(col(BoardGroup.id) == group_id))
await session.commit()
return OkResponse()


@@ -24,6 +24,7 @@ from app.integrations.openclaw_gateway import (
from app.models.activity_events import ActivityEvent
from app.models.agents import Agent
from app.models.approvals import Approval
from app.models.board_groups import BoardGroup
from app.models.board_memory import BoardMemory
from app.models.board_onboarding import BoardOnboardingSession
from app.models.boards import Board
@@ -34,7 +35,8 @@ from app.models.tasks import Task
from app.schemas.boards import BoardCreate, BoardRead, BoardUpdate
from app.schemas.common import OkResponse
from app.schemas.pagination import DefaultLimitOffsetPage
from app.schemas.view_models import BoardSnapshot
from app.schemas.view_models import BoardGroupSnapshot, BoardSnapshot
from app.services.board_group_snapshot import build_board_group_snapshot
from app.services.board_snapshot import build_board_snapshot
router = APIRouter(prefix="/boards", tags=["boards"])
@@ -68,6 +70,25 @@ async def _require_gateway_for_create(
return await _require_gateway(session, payload.gateway_id)
async def _require_board_group(session: AsyncSession, board_group_id: object) -> BoardGroup:
group = await crud.get_by_id(session, BoardGroup, board_group_id)
if group is None:
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail="board_group_id is invalid",
)
return group
async def _require_board_group_for_create(
payload: BoardCreate,
session: AsyncSession = Depends(get_session),
) -> BoardGroup | None:
if payload.board_group_id is None:
return None
return await _require_board_group(session, payload.board_group_id)
async def _apply_board_update(
*,
payload: BoardUpdate,
@@ -77,6 +98,8 @@ async def _apply_board_update(
updates = payload.model_dump(exclude_unset=True)
if "gateway_id" in updates:
await _require_gateway(session, updates["gateway_id"])
if "board_group_id" in updates and updates["board_group_id"] is not None:
await _require_board_group(session, updates["board_group_id"])
for key, value in updates.items():
setattr(board, key, value)
if updates.get("board_type") == "goal":
@@ -157,12 +180,15 @@ async def _cleanup_agent_on_gateway(
@router.get("", response_model=DefaultLimitOffsetPage[BoardRead])
async def list_boards(
gateway_id: UUID | None = Query(default=None),
board_group_id: UUID | None = Query(default=None),
session: AsyncSession = Depends(get_session),
actor: ActorContext = Depends(require_admin_or_agent),
) -> DefaultLimitOffsetPage[BoardRead]:
statement = select(Board)
if gateway_id is not None:
statement = statement.where(col(Board.gateway_id) == gateway_id)
if board_group_id is not None:
statement = statement.where(col(Board.board_group_id) == board_group_id)
statement = statement.order_by(func.lower(col(Board.name)).asc(), col(Board.created_at).desc())
return await paginate(session, statement)
@@ -171,6 +197,7 @@ async def list_boards(
async def create_board(
payload: BoardCreate,
_gateway: Gateway = Depends(_require_gateway_for_create),
_board_group: BoardGroup | None = Depends(_require_board_group_for_create),
session: AsyncSession = Depends(get_session),
auth: AuthContext = Depends(require_admin_auth),
) -> Board:
@@ -197,6 +224,27 @@ async def get_board_snapshot(
return await build_board_snapshot(session, board)
@router.get("/{board_id}/group-snapshot", response_model=BoardGroupSnapshot)
async def get_board_group_snapshot(
include_self: bool = Query(default=False),
include_done: bool = Query(default=False),
per_board_task_limit: int = Query(default=5, ge=0, le=100),
board: Board = Depends(get_board_or_404),
session: AsyncSession = Depends(get_session),
actor: ActorContext = Depends(require_admin_or_agent),
) -> BoardGroupSnapshot:
if actor.actor_type == "agent" and actor.agent:
if actor.agent.board_id and actor.agent.board_id != board.id:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN)
return await build_board_group_snapshot(
session,
board=board,
include_self=include_self,
include_done=include_done,
per_board_task_limit=per_board_task_limit,
)
@router.patch("/{board_id}", response_model=BoardRead)
async def update_board(
payload: BoardUpdate,


@@ -12,6 +12,8 @@ from app.api.agent import router as agent_router
from app.api.agents import router as agents_router
from app.api.approvals import router as approvals_router
from app.api.auth import router as auth_router
from app.api.board_group_memory import router as board_group_memory_router
from app.api.board_groups import router as board_groups_router
from app.api.board_memory import router as board_memory_router
from app.api.board_onboarding import router as board_onboarding_router
from app.api.boards import router as boards_router
@@ -72,6 +74,8 @@ api_v1.include_router(activity_router)
api_v1.include_router(gateway_router)
api_v1.include_router(gateways_router)
api_v1.include_router(metrics_router)
api_v1.include_router(board_groups_router)
api_v1.include_router(board_group_memory_router)
api_v1.include_router(boards_router)
api_v1.include_router(board_memory_router)
api_v1.include_router(board_onboarding_router)


@@ -1,6 +1,8 @@
from app.models.activity_events import ActivityEvent
from app.models.agents import Agent
from app.models.approvals import Approval
from app.models.board_group_memory import BoardGroupMemory
from app.models.board_groups import BoardGroup
from app.models.board_memory import BoardMemory
from app.models.board_onboarding import BoardOnboardingSession
from app.models.boards import Board
@@ -14,8 +16,10 @@ __all__ = [
"ActivityEvent",
"Agent",
"Approval",
"BoardGroupMemory",
"BoardMemory",
"BoardOnboardingSession",
"BoardGroup",
"Board",
"Gateway",
"TaskDependency",


@@ -0,0 +1,21 @@
from __future__ import annotations
from datetime import datetime
from uuid import UUID, uuid4
from sqlalchemy import JSON, Column
from sqlmodel import Field, SQLModel
from app.core.time import utcnow
class BoardGroupMemory(SQLModel, table=True):
__tablename__ = "board_group_memory"
id: UUID = Field(default_factory=uuid4, primary_key=True)
board_group_id: UUID = Field(foreign_key="board_groups.id", index=True)
content: str
tags: list[str] | None = Field(default=None, sa_column=Column(JSON))
is_chat: bool = Field(default=False, index=True)
source: str | None = None
created_at: datetime = Field(default_factory=utcnow)


@@ -0,0 +1,20 @@
from __future__ import annotations
from datetime import datetime
from uuid import UUID, uuid4
from sqlmodel import Field
from app.core.time import utcnow
from app.models.tenancy import TenantScoped
class BoardGroup(TenantScoped, table=True):
__tablename__ = "board_groups"
id: UUID = Field(default_factory=uuid4, primary_key=True)
name: str
slug: str = Field(index=True)
description: str | None = None
created_at: datetime = Field(default_factory=utcnow)
updated_at: datetime = Field(default_factory=utcnow)


@@ -17,6 +17,7 @@ class Board(TenantScoped, table=True):
name: str
slug: str = Field(index=True)
gateway_id: UUID | None = Field(default=None, foreign_key="gateways.id", index=True)
board_group_id: UUID | None = Field(default=None, foreign_key="board_groups.id", index=True)
board_type: str = Field(default="goal", index=True)
objective: str | None = None
success_metrics: dict[str, object] | None = Field(default=None, sa_column=Column(JSON))


@@ -1,6 +1,7 @@
from app.schemas.activity_events import ActivityEventRead
from app.schemas.agents import AgentCreate, AgentRead, AgentUpdate
from app.schemas.approvals import ApprovalCreate, ApprovalRead, ApprovalUpdate
from app.schemas.board_group_memory import BoardGroupMemoryCreate, BoardGroupMemoryRead
from app.schemas.board_memory import BoardMemoryCreate, BoardMemoryRead
from app.schemas.board_onboarding import (
BoardOnboardingAnswer,
@@ -22,6 +23,8 @@ __all__ = [
"ApprovalCreate",
"ApprovalRead",
"ApprovalUpdate",
"BoardGroupMemoryCreate",
"BoardGroupMemoryRead",
"BoardMemoryCreate",
"BoardMemoryRead",
"BoardOnboardingAnswer",


@@ -0,0 +1,21 @@
from __future__ import annotations
from typing import Any
from uuid import UUID
from sqlmodel import SQLModel
class BoardGroupHeartbeatApply(SQLModel):
# Heartbeat cadence string understood by the OpenClaw gateway (e.g. "2m", "10m", "30m").
every: str
# Optional heartbeat target (most deployments use "none").
target: str | None = None
include_board_leads: bool = False
class BoardGroupHeartbeatApplyResult(SQLModel):
board_group_id: UUID
requested: dict[str, Any]
updated_agent_ids: list[UUID]
failed_agent_ids: list[UUID]
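
The `every` field above carries a gateway cadence string such as `"2m"` or `"30m"`. The backend passes it through opaquely, but the format is simple enough to sketch; `cadence_to_seconds` below is a hypothetical helper for illustration only, not part of this codebase or of the gateway API:

```python
import re

# Assumed unit set for illustration; the real gateway may accept more units.
_UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600}


def cadence_to_seconds(every: str) -> int:
    """Parse a cadence string like "2m" into a number of seconds."""
    match = re.fullmatch(r"(\d+)([smh])", every.strip())
    if match is None:
        raise ValueError(f"invalid cadence: {every!r}")
    value, unit = match.groups()
    return int(value) * _UNIT_SECONDS[unit]
```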


@@ -0,0 +1,26 @@
from __future__ import annotations
from datetime import datetime
from uuid import UUID
from sqlmodel import SQLModel
from app.schemas.common import NonEmptyStr
class BoardGroupMemoryCreate(SQLModel):
# For writes, reject blank/whitespace-only content.
content: NonEmptyStr
tags: list[str] | None = None
source: str | None = None
class BoardGroupMemoryRead(SQLModel):
id: UUID
board_group_id: UUID
# For reads, allow legacy rows that may have empty content (avoid response validation 500s).
content: str
tags: list[str] | None = None
source: str | None = None
is_chat: bool = False
created_at: datetime


@@ -0,0 +1,28 @@
from __future__ import annotations
from datetime import datetime
from uuid import UUID
from sqlmodel import SQLModel
class BoardGroupBase(SQLModel):
name: str
slug: str
description: str | None = None
class BoardGroupCreate(BoardGroupBase):
pass
class BoardGroupUpdate(SQLModel):
name: str | None = None
slug: str | None = None
description: str | None = None
class BoardGroupRead(BoardGroupBase):
id: UUID
created_at: datetime
updated_at: datetime


@@ -12,6 +12,7 @@ class BoardBase(SQLModel):
name: str
slug: str
gateway_id: UUID | None = None
board_group_id: UUID | None = None
board_type: str = "goal"
objective: str | None = None
success_metrics: dict[str, object] | None = None
@@ -35,6 +36,7 @@ class BoardUpdate(SQLModel):
name: str | None = None
slug: str | None = None
gateway_id: UUID | None = None
board_group_id: UUID | None = None
board_type: str | None = None
objective: str | None = None
success_metrics: dict[str, object] | None = None


@@ -1,9 +1,13 @@
from __future__ import annotations
from sqlmodel import SQLModel
from datetime import datetime
from uuid import UUID
from sqlmodel import Field, SQLModel
from app.schemas.agents import AgentRead
from app.schemas.approvals import ApprovalRead
from app.schemas.board_groups import BoardGroupRead
from app.schemas.board_memory import BoardMemoryRead
from app.schemas.boards import BoardRead
from app.schemas.tasks import TaskRead
@@ -22,3 +26,29 @@ class BoardSnapshot(SQLModel):
approvals: list[ApprovalRead]
chat_messages: list[BoardMemoryRead]
pending_approvals_count: int = 0
class BoardGroupTaskSummary(SQLModel):
id: UUID
board_id: UUID
board_name: str
title: str
status: str
priority: str
assigned_agent_id: UUID | None = None
assignee: str | None = None
due_at: datetime | None = None
in_progress_at: datetime | None = None
created_at: datetime
updated_at: datetime
class BoardGroupBoardSnapshot(SQLModel):
board: BoardRead
task_counts: dict[str, int] = Field(default_factory=dict)
tasks: list[BoardGroupTaskSummary] = Field(default_factory=list)
class BoardGroupSnapshot(SQLModel):
group: BoardGroupRead | None = None
boards: list[BoardGroupBoardSnapshot] = Field(default_factory=list)


@@ -479,6 +479,78 @@ async def _patch_gateway_agent_list(
await openclaw_call("config.patch", params, config=config)
async def patch_gateway_agent_heartbeats(
gateway: Gateway,
*,
entries: list[tuple[str, str, dict[str, Any]]],
) -> None:
"""Patch multiple agent heartbeat configs in a single gateway config.patch call.
Each entry is (agent_id, workspace_path, heartbeat_dict).
"""
if not gateway.url:
raise OpenClawGatewayError("Gateway url is required")
config = GatewayClientConfig(url=gateway.url, token=gateway.token)
cfg = await openclaw_call("config.get", config=config)
if not isinstance(cfg, dict):
raise OpenClawGatewayError("config.get returned invalid payload")
base_hash = cfg.get("hash")
data = cfg.get("config") or cfg.get("parsed") or {}
if not isinstance(data, dict):
raise OpenClawGatewayError("config.get returned invalid config")
agents_section = data.get("agents") or {}
lst = agents_section.get("list") or []
if not isinstance(lst, list):
raise OpenClawGatewayError("config agents.list is not a list")
entry_by_id: dict[str, tuple[str, dict[str, Any]]] = {
agent_id: (workspace_path, heartbeat) for agent_id, workspace_path, heartbeat in entries
}
updated_ids: set[str] = set()
new_list: list[dict[str, Any]] = []
for raw_entry in lst:
if not isinstance(raw_entry, dict):
new_list.append(raw_entry)
continue
agent_id = raw_entry.get("id")
if not isinstance(agent_id, str) or agent_id not in entry_by_id:
new_list.append(raw_entry)
continue
workspace_path, heartbeat = entry_by_id[agent_id]
new_entry = dict(raw_entry)
new_entry["workspace"] = workspace_path
new_entry["heartbeat"] = heartbeat
new_list.append(new_entry)
updated_ids.add(agent_id)
for agent_id, (workspace_path, heartbeat) in entry_by_id.items():
if agent_id in updated_ids:
continue
new_list.append({"id": agent_id, "workspace": workspace_path, "heartbeat": heartbeat})
patch = {"agents": {"list": new_list}}
params = {"raw": json.dumps(patch)}
if base_hash:
params["baseHash"] = base_hash
await openclaw_call("config.patch", params, config=config)
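
The loop above updates matching `agents.list` entries in place and appends any entry that was not already present. Stripped of the gateway plumbing, the merge semantics can be sketched as a plain-dict operation (a standalone illustration; `merge_agent_list` is not a function in this codebase):

```python
def merge_agent_list(existing: list[dict], patches: dict[str, dict]) -> list[dict]:
    """Update entries by id, preserving order; append ids not yet present."""
    merged: list[dict] = []
    seen: set[str] = set()
    for entry in existing:
        agent_id = entry.get("id")
        if isinstance(agent_id, str) and agent_id in patches:
            merged.append({**entry, **patches[agent_id]})  # patch wins on conflicts
            seen.add(agent_id)
        else:
            merged.append(entry)
    for agent_id, patch in patches.items():
        if agent_id not in seen:
            merged.append({"id": agent_id, **patch})
    return merged
```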
async def sync_gateway_agent_heartbeats(gateway: Gateway, agents: list[Agent]) -> None:
"""Sync current Agent.heartbeat_config values to the gateway config."""
if not gateway.workspace_root:
raise OpenClawGatewayError("gateway workspace_root is required")
entries: list[tuple[str, str, dict[str, Any]]] = []
for agent in agents:
agent_id = _agent_key(agent)
workspace_path = _workspace_path(agent, gateway.workspace_root)
heartbeat = _heartbeat_config(agent)
entries.append((agent_id, workspace_path, heartbeat))
if not entries:
return
await patch_gateway_agent_heartbeats(gateway, entries=entries)
async def _remove_gateway_agent_list(
agent_id: str,
config: GatewayClientConfig,


@@ -0,0 +1,158 @@
from __future__ import annotations
from collections import defaultdict
from typing import Any
from uuid import UUID
from sqlalchemy import case, func
from sqlmodel import col, select
from sqlmodel.ext.asyncio.session import AsyncSession
from app.models.agents import Agent
from app.models.board_groups import BoardGroup
from app.models.boards import Board
from app.models.tasks import Task
from app.schemas.board_groups import BoardGroupRead
from app.schemas.boards import BoardRead
from app.schemas.view_models import (
BoardGroupBoardSnapshot,
BoardGroupSnapshot,
BoardGroupTaskSummary,
)
_STATUS_ORDER = {"in_progress": 0, "review": 1, "inbox": 2, "done": 3}
_PRIORITY_ORDER = {"high": 0, "medium": 1, "low": 2}
def _status_weight_expr() -> Any:
whens = [(col(Task.status) == key, weight) for key, weight in _STATUS_ORDER.items()]
return case(*whens, else_=99)
def _priority_weight_expr() -> Any:
whens = [(col(Task.priority) == key, weight) for key, weight in _PRIORITY_ORDER.items()]
return case(*whens, else_=99)
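
These CASE expressions push the ordering into SQL; the same ranking as an in-memory sort key looks like this (a sketch, not code from this module):

```python
# Unknown statuses/priorities sort last via the 99 fallback, matching else_=99.
STATUS_ORDER = {"in_progress": 0, "review": 1, "inbox": 2, "done": 3}
PRIORITY_ORDER = {"high": 0, "medium": 1, "low": 2}


def task_sort_key(status: str, priority: str) -> tuple[int, int]:
    return (STATUS_ORDER.get(status, 99), PRIORITY_ORDER.get(priority, 99))


rows = [("done", "high"), ("in_progress", "low"), ("review", "high")]
rows.sort(key=lambda r: task_sort_key(*r))
# rows is now ordered in_progress, review, done
```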
async def build_group_snapshot(
session: AsyncSession,
*,
group: BoardGroup,
exclude_board_id: UUID | None = None,
include_done: bool = False,
per_board_task_limit: int = 5,
) -> BoardGroupSnapshot:
statement = select(Board).where(col(Board.board_group_id) == group.id)
if exclude_board_id is not None:
statement = statement.where(col(Board.id) != exclude_board_id)
boards = list(await session.exec(statement.order_by(func.lower(col(Board.name)).asc())))
if not boards:
return BoardGroupSnapshot(group=BoardGroupRead.model_validate(group, from_attributes=True))
boards_by_id = {board.id: board for board in boards}
board_ids = list(boards_by_id.keys())
task_counts: dict[UUID, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for board_id, status_value, total in list(
await session.exec(
select(col(Task.board_id), col(Task.status), func.count(col(Task.id)))
.where(col(Task.board_id).in_(board_ids))
.group_by(col(Task.board_id), col(Task.status))
)
):
if board_id is None:
continue
task_counts[board_id][str(status_value)] = int(total or 0)
task_statement = select(Task).where(col(Task.board_id).in_(board_ids))
if not include_done:
task_statement = task_statement.where(col(Task.status) != "done")
task_statement = task_statement.order_by(
col(Task.board_id).asc(),
_status_weight_expr().asc(),
_priority_weight_expr().asc(),
col(Task.updated_at).desc(),
col(Task.created_at).desc(),
)
tasks = list(await session.exec(task_statement))
assigned_ids = {task.assigned_agent_id for task in tasks if task.assigned_agent_id is not None}
agent_name_by_id: dict[UUID, str] = {}
if assigned_ids:
for agent_id, name in list(
await session.exec(
select(col(Agent.id), col(Agent.name)).where(col(Agent.id).in_(assigned_ids))
)
):
agent_name_by_id[agent_id] = name
tasks_by_board: dict[UUID, list[BoardGroupTaskSummary]] = defaultdict(list)
if per_board_task_limit > 0:
for task in tasks:
if task.board_id is None:
continue
current = tasks_by_board[task.board_id]
if len(current) >= per_board_task_limit:
continue
board = boards_by_id.get(task.board_id)
if board is None:
continue
current.append(
BoardGroupTaskSummary(
id=task.id,
board_id=task.board_id,
board_name=board.name,
title=task.title,
status=task.status,
priority=task.priority,
assigned_agent_id=task.assigned_agent_id,
assignee=(
agent_name_by_id.get(task.assigned_agent_id)
if task.assigned_agent_id is not None
else None
),
due_at=task.due_at,
in_progress_at=task.in_progress_at,
created_at=task.created_at,
updated_at=task.updated_at,
)
)
snapshots: list[BoardGroupBoardSnapshot] = []
for board in boards:
board_read = BoardRead.model_validate(board, from_attributes=True)
counts = dict(task_counts.get(board.id, {}))
snapshots.append(
BoardGroupBoardSnapshot(
board=board_read,
task_counts=counts,
tasks=tasks_by_board.get(board.id, []),
)
)
return BoardGroupSnapshot(
group=BoardGroupRead.model_validate(group, from_attributes=True),
boards=snapshots,
)
async def build_board_group_snapshot(
session: AsyncSession,
*,
board: Board,
include_self: bool = False,
include_done: bool = False,
per_board_task_limit: int = 5,
) -> BoardGroupSnapshot:
if not board.board_group_id:
return BoardGroupSnapshot(group=None, boards=[])
group = await session.get(BoardGroup, board.board_group_id)
if group is None:
return BoardGroupSnapshot(group=None, boards=[])
return await build_group_snapshot(
session,
group=group,
exclude_board_id=None if include_self else board.id,
include_done=include_done,
per_board_task_limit=per_board_task_limit,
)


@@ -1,11 +1,13 @@
from __future__ import annotations
import asyncio
import random
import re
from collections.abc import Awaitable, Callable
from typing import TypeVar
from uuid import UUID, uuid4
from sqlalchemy import func
from sqlmodel import col, select
from sqlmodel.ext.asyncio.session import AsyncSession
@@ -14,6 +16,7 @@ from app.core.time import utcnow
from app.integrations.openclaw_gateway import GatewayConfig as GatewayClientConfig
from app.integrations.openclaw_gateway import OpenClawGatewayError, openclaw_call
from app.models.agents import Agent
from app.models.board_memory import BoardMemory
from app.models.boards import Board
from app.models.gateways import Gateway
from app.models.users import User
@@ -38,10 +41,22 @@ def _is_transient_gateway_error(exc: Exception) -> bool:
return False
if "unsupported file" in message:
return False
if "connect call failed" in message or "connection refused" in message:
return True
if "errno 111" in message or "econnrefused" in message:
return True
if "did not receive a valid http response" in message:
return True
if "no route to host" in message or "network is unreachable" in message:
return True
if "host is down" in message or "name or service not known" in message:
return True
if "received 1012" in message or "service restart" in message:
return True
if "http 503" in message or ("503" in message and "websocket" in message):
return True
if "http 502" in message or "http 504" in message:
return True
if "temporar" in message:
return True
if "timeout" in message or "timed out" in message:
@@ -51,20 +66,58 @@ def _is_transient_gateway_error(exc: Exception) -> bool:
return False
class _GatewayBackoff:
def __init__(
self,
*,
timeout_s: float = 10 * 60,
base_delay_s: float = 0.75,
max_delay_s: float = 30.0,
jitter: float = 0.2,
) -> None:
self._timeout_s = timeout_s
self._base_delay_s = base_delay_s
self._max_delay_s = max_delay_s
self._jitter = jitter
self._delay_s = base_delay_s
def reset(self) -> None:
self._delay_s = self._base_delay_s
async def run(self, fn: Callable[[], Awaitable[T]]) -> T:
# Use per-call deadlines so long-running syncs can still tolerate a later
# gateway restart without having an already-expired retry window.
deadline_s = asyncio.get_running_loop().time() + self._timeout_s
while True:
try:
value = await fn()
self.reset()
return value
except Exception as exc:
if not _is_transient_gateway_error(exc):
raise
now = asyncio.get_running_loop().time()
remaining = deadline_s - now
if remaining <= 0:
raise TimeoutError(
f"Gateway unreachable after {self._timeout_s:.0f}s (template sync timeout). "
f"Last error: {exc}"
) from exc
sleep_s = min(self._delay_s, remaining)
if self._jitter:
sleep_s *= 1.0 + random.uniform(-self._jitter, self._jitter)
sleep_s = max(0.0, min(sleep_s, remaining))
await asyncio.sleep(sleep_s)
self._delay_s = min(self._delay_s * 2.0, self._max_delay_s)
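
With jitter set aside, the retry schedule produced by `_GatewayBackoff` is a doubling sequence capped at `max_delay_s`. A deterministic view of that schedule (illustration only, using the defaults above):

```python
def backoff_delays(base: float = 0.75, cap: float = 30.0, n: int = 8) -> list[float]:
    """First n sleep durations of a capped exponential backoff, jitter omitted."""
    delays: list[float] = []
    delay = base
    for _ in range(n):
        delays.append(delay)
        delay = min(delay * 2.0, cap)  # double each retry, never exceeding the cap
    return delays
```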
async def _with_gateway_retry(
fn: Callable[[], Awaitable[T]],
*,
attempts: int = 3,
base_delay_s: float = 0.75,
backoff: _GatewayBackoff,
) -> T:
for attempt in range(attempts):
try:
return await fn()
except Exception as exc:
if attempt >= attempts - 1 or not _is_transient_gateway_error(exc):
raise
await asyncio.sleep(base_delay_s * (2**attempt))
raise AssertionError("unreachable")
return await backoff.run(fn)
def _agent_id_from_session_key(session_key: str | None) -> str | None:
@@ -137,13 +190,18 @@ async def _get_agent_file(
agent_gateway_id: str,
name: str,
config: GatewayClientConfig,
backoff: _GatewayBackoff | None = None,
) -> str | None:
try:
payload = await openclaw_call(
"agents.files.get",
{"agentId": agent_gateway_id, "name": name},
config=config,
)
async def _do_get() -> object:
return await openclaw_call(
"agents.files.get",
{"agentId": agent_gateway_id, "name": name},
config=config,
)
payload = await (backoff.run(_do_get) if backoff else _do_get())
except OpenClawGatewayError:
return None
if isinstance(payload, str):
@@ -167,8 +225,14 @@ async def _get_existing_auth_token(
*,
agent_gateway_id: str,
config: GatewayClientConfig,
backoff: _GatewayBackoff | None = None,
) -> str | None:
tools = await _get_agent_file(agent_gateway_id=agent_gateway_id, name="TOOLS.md", config=config)
tools = await _get_agent_file(
agent_gateway_id=agent_gateway_id,
name="TOOLS.md",
config=config,
backoff=backoff,
)
if not tools:
return None
values = _parse_tools_md(tools)
@@ -183,32 +247,45 @@ async def _gateway_default_agent_id(
config: GatewayClientConfig,
*,
fallback_session_key: str | None = None,
backoff: _GatewayBackoff | None = None,
) -> str | None:
last_error: OpenClawGatewayError | None = None
# Gateways may reject WS connects transiently under load (HTTP 503).
for attempt in range(3):
try:
payload = await openclaw_call("agents.list", config=config)
agent_id = _extract_agent_id(payload)
if agent_id:
return agent_id
break
except OpenClawGatewayError as exc:
last_error = exc
message = str(exc).lower()
if (
"503" not in message
and "temporar" not in message
and "rejected" not in message
and "timeout" not in message
):
break
await asyncio.sleep(0.5 * (2**attempt))
try:
_ = last_error
async def _do_list() -> object:
return await openclaw_call("agents.list", config=config)
payload = await (backoff.run(_do_list) if backoff else _do_list())
agent_id = _extract_agent_id(payload)
if agent_id:
return agent_id
except OpenClawGatewayError:
pass
return _agent_id_from_session_key(fallback_session_key)
async def _paused_board_ids(session: AsyncSession, board_ids: list[UUID]) -> set[UUID]:
if not board_ids:
return set()
commands = {"/pause", "/resume"}
statement = (
select(BoardMemory.board_id, BoardMemory.content)
.where(col(BoardMemory.board_id).in_(board_ids))
.where(col(BoardMemory.is_chat).is_(True))
.where(func.lower(func.trim(col(BoardMemory.content))).in_(commands))
.order_by(col(BoardMemory.board_id), col(BoardMemory.created_at).desc())
# Postgres: DISTINCT ON (board_id) to get latest command per board.
.distinct(col(BoardMemory.board_id))
)
paused: set[UUID] = set()
for board_id, content in await session.exec(statement):
cmd = (content or "").strip().lower()
if cmd == "/pause":
paused.add(board_id)
return paused
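
The `DISTINCT ON (board_id)` query above keeps only the newest `/pause` or `/resume` chat message per board. The equivalent logic in plain Python, as a sketch (hypothetical `paused_boards` helper, with string board ids for brevity):

```python
from datetime import datetime, timedelta


def paused_boards(rows: list[tuple[str, str, datetime]]) -> set[str]:
    """Given (board_id, content, created_at) rows, return boards whose
    newest pause/resume command is /pause (case- and whitespace-insensitive)."""
    latest: dict[str, tuple[datetime, str]] = {}
    for board_id, content, created_at in rows:
        cmd = content.strip().lower()
        if cmd not in {"/pause", "/resume"}:
            continue  # ignore ordinary chat messages
        if board_id not in latest or created_at > latest[board_id][0]:
            latest[board_id] = (created_at, cmd)
    return {bid for bid, (_, cmd) in latest.items() if cmd == "/pause"}
```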
async def sync_gateway_templates(
session: AsyncSession,
gateway: Gateway,
@@ -235,6 +312,21 @@ async def sync_gateway_templates(
return result
client_config = GatewayClientConfig(url=gateway.url, token=gateway.token)
backoff = _GatewayBackoff(timeout_s=10 * 60)
# First, wait for the gateway to be reachable (e.g. while it is restarting).
try:
async def _do_ping() -> object:
return await openclaw_call("agents.list", config=client_config)
await backoff.run(_do_ping)
except TimeoutError as exc:
result.errors.append(GatewayTemplatesSyncError(message=str(exc)))
return result
except OpenClawGatewayError as exc:
result.errors.append(GatewayTemplatesSyncError(message=str(exc)))
return result
boards = list(await session.exec(select(Board).where(col(Board.gateway_id) == gateway.id)))
boards_by_id = {board.id: board for board in boards}
@@ -250,6 +342,8 @@ async def sync_gateway_templates(
return result
boards_by_id = {board_id: board}
paused_board_ids = await _paused_board_ids(session, list(boards_by_id.keys()))
if boards_by_id:
agents = list(
await session.exec(
@@ -275,10 +369,27 @@ async def sync_gateway_templates(
)
continue
if board.id in paused_board_ids:
result.agents_skipped += 1
continue
agent_gateway_id = _gateway_agent_id(agent)
auth_token = await _get_existing_auth_token(
agent_gateway_id=agent_gateway_id, config=client_config
)
try:
auth_token = await _get_existing_auth_token(
agent_gateway_id=agent_gateway_id,
config=client_config,
backoff=backoff,
)
except TimeoutError as exc:
result.errors.append(
GatewayTemplatesSyncError(
agent_id=agent.id,
agent_name=agent.name,
board_id=board.id,
message=str(exc),
)
)
return result
if not auth_token:
if not rotate_tokens:
@@ -321,6 +432,7 @@ async def sync_gateway_templates(
)
try:
async def _do_provision() -> None:
await provision_agent(
agent,
@@ -333,8 +445,19 @@ async def sync_gateway_templates(
reset_session=reset_sessions,
)
await _with_gateway_retry(_do_provision)
await _with_gateway_retry(_do_provision, backoff=backoff)
result.agents_updated += 1
except TimeoutError as exc: # pragma: no cover - gateway/network dependent
result.agents_skipped += 1
result.errors.append(
GatewayTemplatesSyncError(
agent_id=agent.id,
agent_name=agent.name,
board_id=board.id,
message=str(exc),
)
)
return result
except Exception as exc: # pragma: no cover - gateway/network dependent
result.agents_skipped += 1
result.errors.append(
@@ -360,10 +483,21 @@ async def sync_gateway_templates(
)
return result
main_gateway_agent_id = await _gateway_default_agent_id(
client_config,
fallback_session_key=gateway.main_session_key,
)
try:
main_gateway_agent_id = await _gateway_default_agent_id(
client_config,
fallback_session_key=gateway.main_session_key,
backoff=backoff,
)
except TimeoutError as exc:
result.errors.append(
GatewayTemplatesSyncError(
agent_id=main_agent.id,
agent_name=main_agent.name,
message=str(exc),
)
)
return result
if not main_gateway_agent_id:
result.errors.append(
GatewayTemplatesSyncError(
@@ -374,9 +508,21 @@ async def sync_gateway_templates(
)
return result
main_token = await _get_existing_auth_token(
agent_gateway_id=main_gateway_agent_id, config=client_config
)
try:
main_token = await _get_existing_auth_token(
agent_gateway_id=main_gateway_agent_id,
config=client_config,
backoff=backoff,
)
except TimeoutError as exc:
result.errors.append(
GatewayTemplatesSyncError(
agent_id=main_agent.id,
agent_name=main_agent.name,
message=str(exc),
)
)
return result
if not main_token:
if rotate_tokens:
raw_token = generate_agent_token()
@@ -417,6 +563,7 @@ async def sync_gateway_templates(
)
try:
async def _do_provision_main() -> None:
await provision_main_agent(
main_agent,
@@ -428,8 +575,17 @@ async def sync_gateway_templates(
reset_session=reset_sessions,
)
await _with_gateway_retry(_do_provision_main)
await _with_gateway_retry(_do_provision_main, backoff=backoff)
result.main_updated = True
except TimeoutError as exc: # pragma: no cover - gateway/network dependent
result.errors.append(
GatewayTemplatesSyncError(
agent_id=main_agent.id,
agent_name=main_agent.name,
message=str(exc),
)
)
return result
except Exception as exc: # pragma: no cover - gateway/network dependent
result.errors.append(
GatewayTemplatesSyncError(


@@ -0,0 +1,64 @@
from __future__ import annotations
from dataclasses import dataclass
from app.services import agent_provisioning
def test_slugify_normalizes_and_trims():
assert agent_provisioning._slugify("Hello, World") == "hello-world"
assert agent_provisioning._slugify(" A B ") == "a-b"
def test_slugify_falls_back_to_uuid_hex(monkeypatch):
class _FakeUuid:
hex = "deadbeef"
monkeypatch.setattr(agent_provisioning, "uuid4", lambda: _FakeUuid())
assert agent_provisioning._slugify("!!!") == "deadbeef"
def test_agent_id_from_session_key_parses_agent_prefix():
assert agent_provisioning._agent_id_from_session_key(None) is None
assert agent_provisioning._agent_id_from_session_key("") is None
assert agent_provisioning._agent_id_from_session_key("not-agent") is None
assert agent_provisioning._agent_id_from_session_key("agent:") is None
assert agent_provisioning._agent_id_from_session_key("agent:riya:main") == "riya"
def test_extract_agent_id_supports_lists_and_dicts():
assert agent_provisioning._extract_agent_id(["", " ", "abc"]) == "abc"
assert agent_provisioning._extract_agent_id([{"agent_id": "xyz"}]) == "xyz"
payload = {
"defaultAgentId": "dflt",
"agents": [{"id": "ignored"}],
}
assert agent_provisioning._extract_agent_id(payload) == "dflt"
payload2 = {
"agents": [{"id": ""}, {"agentId": "foo"}],
}
assert agent_provisioning._extract_agent_id(payload2) == "foo"
def test_extract_agent_id_returns_none_for_unknown_shapes():
assert agent_provisioning._extract_agent_id("nope") is None
assert agent_provisioning._extract_agent_id({"agents": "not-a-list"}) is None
@dataclass
class _AgentStub:
name: str
openclaw_session_id: str | None = None
heartbeat_config: dict | None = None
is_board_lead: bool = False
def test_agent_key_uses_session_key_when_present(monkeypatch):
agent = _AgentStub(name="Alice", openclaw_session_id="agent:alice:main")
assert agent_provisioning._agent_key(agent) == "alice"
monkeypatch.setattr(agent_provisioning, "_slugify", lambda value: "slugged")
agent2 = _AgentStub(name="Alice", openclaw_session_id=None)
assert agent_provisioning._agent_key(agent2) == "slugged"


@@ -4,7 +4,9 @@ from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient
from pydantic import BaseModel, Field
from app.core.error_handling import REQUEST_ID_HEADER, install_error_handling
from starlette.requests import Request
from app.core.error_handling import REQUEST_ID_HEADER, _error_payload, _get_request_id, install_error_handling
def test_request_validation_error_includes_request_id():
@@ -80,3 +82,38 @@ def test_response_validation_error_returns_500_with_request_id():
assert body["detail"] == "Internal Server Error"
assert isinstance(body.get("request_id"), str) and body["request_id"]
assert resp.headers.get(REQUEST_ID_HEADER) == body["request_id"]
def test_client_provided_request_id_is_preserved():
app = FastAPI()
install_error_handling(app)
@app.get("/needs-int")
def needs_int(limit: int) -> dict[str, int]:
return {"limit": limit}
client = TestClient(app)
resp = client.get("/needs-int?limit=abc", headers={REQUEST_ID_HEADER: " req-123 "})
assert resp.status_code == 422
body = resp.json()
assert body["request_id"] == "req-123"
assert resp.headers.get(REQUEST_ID_HEADER) == "req-123"
def test_get_request_id_returns_none_for_missing_or_invalid_state() -> None:
# Empty state
req = Request({"type": "http", "headers": [], "state": {}})
assert _get_request_id(req) is None
# Non-string request_id
req = Request({"type": "http", "headers": [], "state": {"request_id": 123}})
assert _get_request_id(req) is None
# Empty string request_id
req = Request({"type": "http", "headers": [], "state": {"request_id": ""}})
assert _get_request_id(req) is None
def test_error_payload_omits_request_id_when_none() -> None:
assert _error_payload(detail="x", request_id=None) == {"detail": "x"}


@@ -12,6 +12,21 @@ def test_matches_agent_mention_matches_first_name():
assert matches_agent_mention(agent, {"cooper"}) is False
def test_matches_agent_mention_no_mentions_is_false():
agent = Agent(name="Alice")
assert matches_agent_mention(agent, set()) is False
def test_matches_agent_mention_empty_agent_name_is_false():
agent = Agent(name=" ")
assert matches_agent_mention(agent, {"alice"}) is False
def test_matches_agent_mention_matches_full_normalized_name():
agent = Agent(name="Alice Cooper")
assert matches_agent_mention(agent, {"alice cooper"}) is True
def test_matches_agent_mention_supports_reserved_lead_shortcut():
lead = Agent(name="Riya", is_board_lead=True)
other = Agent(name="Lead", is_board_lead=False)


@@ -0,0 +1,85 @@
from __future__ import annotations
import pytest
from app.core.error_handling import REQUEST_ID_HEADER, RequestIdMiddleware
@pytest.mark.asyncio
async def test_request_id_middleware_passes_through_non_http_scope() -> None:
called = False
async def app(scope, receive, send): # type: ignore[no-untyped-def]
nonlocal called
called = True
middleware = RequestIdMiddleware(app)
scope = {"type": "websocket", "headers": []}
await middleware(scope, lambda: None, lambda message: None) # type: ignore[arg-type]
assert called is True
@pytest.mark.asyncio
async def test_request_id_middleware_ignores_blank_client_header_and_generates_one() -> None:
captured_request_id: str | None = None
response_headers: list[tuple[bytes, bytes]] = []
async def app(scope, receive, send): # type: ignore[no-untyped-def]
nonlocal captured_request_id
captured_request_id = scope.get("state", {}).get("request_id")
await send({"type": "http.response.start", "status": 200, "headers": []})
await send({"type": "http.response.body", "body": b"ok"})
async def send(message): # type: ignore[no-untyped-def]
if message["type"] == "http.response.start":
response_headers.extend(list(message.get("headers") or []))
middleware = RequestIdMiddleware(app)
scope = {
"type": "http",
"headers": [(REQUEST_ID_HEADER.lower().encode("latin-1"), b" ")],
}
await middleware(scope, lambda: None, send)
assert isinstance(captured_request_id, str) and captured_request_id
# Header should reflect the generated id, not the blank one.
values = [v for k, v in response_headers if k.lower() == REQUEST_ID_HEADER.lower().encode("latin-1")]
assert values == [captured_request_id.encode("latin-1")]
@pytest.mark.asyncio
async def test_request_id_middleware_does_not_duplicate_existing_header() -> None:
sent_start = False
start_headers: list[tuple[bytes, bytes]] | None = None
async def app(scope, receive, send): # type: ignore[no-untyped-def]
# Simulate an app that already sets the request id header.
await send(
{
"type": "http.response.start",
"status": 200,
"headers": [(REQUEST_ID_HEADER.lower().encode("latin-1"), b"already")],
}
)
await send({"type": "http.response.body", "body": b"ok"})
async def send(message): # type: ignore[no-untyped-def]
nonlocal sent_start, start_headers
if message["type"] == "http.response.start":
sent_start = True
start_headers = list(message.get("headers") or [])
middleware = RequestIdMiddleware(app)
scope = {"type": "http", "headers": []}
await middleware(scope, lambda: None, send)
assert sent_start is True
assert start_headers is not None
# Ensure the middleware did not append a second copy.
values = [v for k, v in start_headers if k.lower() == REQUEST_ID_HEADER.lower().encode("latin-1")]
assert values == [b"already"]


@@ -0,0 +1,320 @@
from __future__ import annotations
from dataclasses import dataclass, field
from uuid import UUID, uuid4
import pytest
from app.services import task_dependencies
def test_dedupe_uuid_list_preserves_order_and_removes_duplicates():
a = uuid4()
b = uuid4()
c = uuid4()
values = [a, b, a, c, b]
assert task_dependencies._dedupe_uuid_list(values) == [a, b, c]
def test_blocked_by_dependency_ids_flags_not_done_and_missing_status():
a = uuid4()
b = uuid4()
c = uuid4()
status_by_id = {
a: task_dependencies.DONE_STATUS,
b: "in_progress",
# c intentionally missing
}
assert task_dependencies.blocked_by_dependency_ids(
dependency_ids=[a, b, c],
status_by_id=status_by_id,
) == [b, c]
@pytest.mark.parametrize(
("nodes", "edges", "expected"),
[
# A -> B -> C (acyclic)
(
[UUID(int=1), UUID(int=2), UUID(int=3)],
{UUID(int=1): {UUID(int=2)}, UUID(int=2): {UUID(int=3)}},
False,
),
# A -> B -> C -> A (cycle)
(
[UUID(int=1), UUID(int=2), UUID(int=3)],
{UUID(int=1): {UUID(int=2)}, UUID(int=2): {UUID(int=3)}, UUID(int=3): {UUID(int=1)}},
True,
),
# Self-loop (cycle)
(
[UUID(int=1)],
{UUID(int=1): {UUID(int=1)}},
True,
),
],
)
def test_has_cycle(nodes, edges, expected):
assert task_dependencies._has_cycle(nodes, edges) is expected
@dataclass
class _FakeSession:
exec_results: list[object]
executed: list[object] = field(default_factory=list)
added: list[object] = field(default_factory=list)
async def exec(self, _query):
if not self.exec_results:
raise AssertionError("No more exec_results left for session.exec")
return self.exec_results.pop(0)
async def execute(self, statement):
self.executed.append(statement)
def add(self, value):
self.added.append(value)
@pytest.mark.asyncio
async def test_dependency_ids_by_task_id_empty_short_circuit():
session = _FakeSession(exec_results=[])
result = await task_dependencies.dependency_ids_by_task_id(
session,
board_id=uuid4(),
task_ids=[],
)
assert result == {}
@pytest.mark.asyncio
async def test_dependency_ids_by_task_id_groups_rows_by_task_id():
task_id = uuid4()
dep1 = uuid4()
dep2 = uuid4()
rows = [(task_id, dep1), (task_id, dep2)]
session = _FakeSession(exec_results=[rows])
result = await task_dependencies.dependency_ids_by_task_id(
session,
board_id=uuid4(),
task_ids=[task_id],
)
assert result == {task_id: [dep1, dep2]}
@pytest.mark.asyncio
async def test_dependency_status_by_id_empty_short_circuit():
session = _FakeSession(exec_results=[])
result = await task_dependencies.dependency_status_by_id(
session,
board_id=uuid4(),
dependency_ids=[],
)
assert result == {}
@pytest.mark.asyncio
async def test_dependency_status_by_id_maps_rows():
dep = uuid4()
session = _FakeSession(exec_results=[[(dep, "done")]])
result = await task_dependencies.dependency_status_by_id(
session,
board_id=uuid4(),
dependency_ids=[dep],
)
assert result == {dep: "done"}
@pytest.mark.asyncio
async def test_blocked_by_for_task_uses_passed_dependency_ids():
board_id = uuid4()
dep1 = uuid4()
dep2 = uuid4()
session = _FakeSession(exec_results=[[(dep1, "done"), (dep2, "inbox")]])
blocked = await task_dependencies.blocked_by_for_task(
session,
board_id=board_id,
task_id=uuid4(),
dependency_ids=[dep1, dep2],
)
assert blocked == [dep2]
@pytest.mark.asyncio
async def test_blocked_by_for_task_fetches_dependency_ids_when_not_provided():
board_id = uuid4()
task_id = uuid4()
dep = uuid4()
# 1) dependency_ids_by_task_id -> {task_id: [dep]}
# 2) dependency_status_by_id -> [(dep, "inbox")]
session = _FakeSession(exec_results=[[(task_id, dep)], [(dep, "inbox")]])
blocked = await task_dependencies.blocked_by_for_task(
session,
board_id=board_id,
task_id=task_id,
dependency_ids=None,
)
assert blocked == [dep]
@pytest.mark.asyncio
async def test_blocked_by_for_task_returns_empty_when_no_deps():
board_id = uuid4()
task_id = uuid4()
# dependency_ids_by_task_id -> empty rows => no deps
session = _FakeSession(exec_results=[[]])
blocked = await task_dependencies.blocked_by_for_task(
session,
board_id=board_id,
task_id=task_id,
dependency_ids=None,
)
assert blocked == []
@pytest.mark.asyncio
async def test_validate_dependency_update_returns_empty_when_no_dependencies():
session = _FakeSession(exec_results=[])
result = await task_dependencies.validate_dependency_update(
session,
board_id=uuid4(),
task_id=uuid4(),
depends_on_task_ids=[],
)
assert result == []
@pytest.mark.asyncio
async def test_validate_dependency_update_rejects_self_dependency():
task_id = uuid4()
session = _FakeSession(exec_results=[])
with pytest.raises(task_dependencies.HTTPException) as exc:
await task_dependencies.validate_dependency_update(
session,
board_id=uuid4(),
task_id=task_id,
depends_on_task_ids=[task_id],
)
assert exc.value.status_code == 422
@pytest.mark.asyncio
async def test_validate_dependency_update_rejects_missing_dependency_tasks():
board_id = uuid4()
task_id = uuid4()
dep_id = uuid4()
# existing_ids should not include dep_id
session = _FakeSession(exec_results=[set()])
with pytest.raises(task_dependencies.HTTPException) as exc:
await task_dependencies.validate_dependency_update(
session,
board_id=board_id,
task_id=task_id,
depends_on_task_ids=[dep_id],
)
assert exc.value.status_code == 404
assert exc.value.detail["missing_task_ids"] == [str(dep_id)]
@pytest.mark.asyncio
async def test_validate_dependency_update_rejects_cycles(monkeypatch):
board_id = uuid4()
task_a = uuid4()
task_b = uuid4()
# existing_ids contains dependency
existing_ids = {task_b}
# task_ids list on board
all_task_ids = [task_a, task_b]
# existing edges: B depends on A, then set A depends on B => cycle
existing_edges = [(task_b, task_a)]
session = _FakeSession(exec_results=[existing_ids, all_task_ids, existing_edges])
with pytest.raises(task_dependencies.HTTPException) as exc:
await task_dependencies.validate_dependency_update(
session,
board_id=board_id,
task_id=task_a,
depends_on_task_ids=[task_b],
)
assert exc.value.status_code == 409
@pytest.mark.asyncio
async def test_validate_dependency_update_returns_deduped_ids_when_ok():
board_id = uuid4()
task_id = uuid4()
dep1 = uuid4()
dep2 = uuid4()
existing_ids = {dep1, dep2}
all_task_ids = [task_id, dep1, dep2]
existing_edges: list[tuple[UUID, UUID]] = []
session = _FakeSession(exec_results=[existing_ids, all_task_ids, existing_edges])
normalized = await task_dependencies.validate_dependency_update(
session,
board_id=board_id,
task_id=task_id,
depends_on_task_ids=[dep1, dep2, dep1],
)
assert normalized == [dep1, dep2]
@pytest.mark.asyncio
async def test_replace_task_dependencies_deletes_then_adds(monkeypatch):
board_id = uuid4()
task_id = uuid4()
dep1 = uuid4()
dep2 = uuid4()
async def _fake_validate(*_args, **_kwargs):
return [dep1, dep2]
monkeypatch.setattr(task_dependencies, "validate_dependency_update", _fake_validate)
session = _FakeSession(exec_results=[])
normalized = await task_dependencies.replace_task_dependencies(
session,
board_id=board_id,
task_id=task_id,
depends_on_task_ids=[dep1, dep2],
)
assert normalized == [dep1, dep2]
assert len(session.executed) == 1
assert len(session.added) == 2
@pytest.mark.asyncio
async def test_dependent_task_ids_returns_rows_as_list():
board_id = uuid4()
dep_task_id = uuid4()
dependent_id = uuid4()
session = _FakeSession(exec_results=[[dependent_id]])
result = await task_dependencies.dependent_task_ids(
session,
board_id=board_id,
dependency_task_id=dep_task_id,
)
assert result == [dependent_id]