How the Action Queue makes every moderator action auditable

April 17, 2026 · 5 min read · The Kovra team

A moderation tool is only as good as its audit trail. When a user comes back six weeks after a ban and claims the action was unfair, you need to be able to say exactly what happened: which action fired, who fired it, what the reason was, whether the user was DMed, whether the DM succeeded, and — crucially — where that action came from (was it a dashboard click? a slash command? an AutoMod rule?).

Most bot stacks fail this test because the paths diverge. The dashboard hits one API, the slash command hits another, AutoMod hits a third. Each writes cases slightly differently. Six weeks later, you're reconstructing what happened from three different audit tables with three different schemas.

Kovra's Action Queue makes this divergence impossible by design. Every action — regardless of source — flows through the same pipeline and writes the same case row.

The pipeline

  1. An action is requested. It could be a dashboard button click, a /mod ban slash command, or an AutoMod rule triggering. Each entry point produces the same canonical request payload: { action, target_id, reason, source, moderator_id, request_id } (sketched in code after this list).
  2. The request is signed. The API server signs the payload with HMAC-SHA256 using a secret that only the API and Guard (the moderation bot) share. This means Guard can trust that a request actually came from Kovra's API, not from a forged webhook.
  3. The request is pushed to a Redis Stream. The stream is partitioned per guild so one server's moderation burst doesn't back up another server's queue. Each request gets a UUIDv7 + Redis SET NX lock for idempotency — if the same request is submitted twice (retry, network flake), only the first one executes (see the enqueue sketch after this list).
  4. Guard consumes the stream. XREADGROUP with a per-shard consumer group delivers each event to exactly one consumer in the group; a redelivery after a crash is caught by the idempotency lock from step 3, so each request executes at most once. Guard verifies the HMAC signature, looks up the action executor, and runs it (the consumer loop for steps 4–6 is sketched after this list).
  5. The executor talks to Discord. Ban, kick, mute, and warn all run the same executor regardless of source. The executor writes the case to Postgres (mod_cases) with the source field set correctly (dashboard, discord, or automod), DMs the target if required, and posts to the configured log channel.
  6. The result is streamed back. The executor publishes a result event to a Redis Pub/Sub channel. The API's SSE stream picks it up and forwards to whichever dashboard session submitted the request. The user sees "done · case #42 created" within a second.
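
To make steps 1 and 2 concrete, here is a minimal TypeScript sketch. The ActionRequest type mirrors the payload above; the sign helper and the shared-secret plumbing are illustrative names, not Kovra's actual code.

```ts
import { createHmac } from "node:crypto";

// The canonical payload every entry point (dashboard, slash command,
// AutoMod) must produce before anything is enqueued.
interface ActionRequest {
  action: "ban" | "kick" | "mute" | "warn";
  target_id: string;
  reason: string;
  source: "dashboard" | "discord" | "automod";
  moderator_id: string;
  request_id: string; // UUIDv7 — doubles as the idempotency key
}

// Sign the exact JSON string that will travel through the queue, so
// the consumer verifies the same bytes it receives.
function sign(payload: string, secret: string): string {
  return createHmac("sha256", secret).update(payload).digest("hex");
}
```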
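The enqueue side (step 3) might look like the sketch below, assuming ioredis and reusing ActionRequest and sign from the previous sketch. The key names and the one-hour lock TTL are assumptions for illustration.

```ts
import Redis from "ioredis";

const redis = new Redis();

async function enqueueAction(
  guildId: string,
  req: ActionRequest,
  secret: string,
): Promise<boolean> {
  // SET NX: only the first submission of a given request_id takes the
  // lock, so a retry or network flake becomes a no-op.
  const locked = await redis.set(
    `action:lock:${req.request_id}`, "1", "EX", 3600, "NX",
  );
  if (locked !== "OK") return false; // duplicate — already enqueued

  // One stream per guild: a moderation burst in one server can't
  // back up another server's queue.
  const payload = JSON.stringify(req);
  await redis.xadd(
    `actions:${guildId}`, "*",
    "payload", payload,
    "sig", sign(payload, secret),
  );
  return true;
}
```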
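And on Guard's side, a condensed consumer loop for steps 4 through 6. The group and consumer names, the results channel, and the executeAction helper (which would stand in for the Discord call, the mod_cases insert, the DM, and the log post) are all illustrative, not Kovra's actual identifiers.

```ts
import { createHmac, timingSafeEqual } from "node:crypto";

// Stand-in for step 5; returns the created case number.
declare function executeAction(req: ActionRequest): Promise<number>;

// Assumes the consumer group was created once with XGROUP CREATE.
async function consumeOnce(guildId: string, secret: string): Promise<void> {
  const stream = `actions:${guildId}`;
  const batches = (await redis.xreadgroup(
    "GROUP", "guard", "shard-0",
    "COUNT", 10, "BLOCK", 5000,
    "STREAMS", stream, ">",
  )) as [string, [string, string[]][]][] | null;
  if (!batches) return;

  for (const [, messages] of batches) {
    for (const [id, fields] of messages) {
      const payload = fields[fields.indexOf("payload") + 1];
      const sig = fields[fields.indexOf("sig") + 1];

      // Verify the HMAC before touching the payload: anything that
      // didn't come from Kovra's API is dropped.
      const expected = createHmac("sha256", secret)
        .update(payload).digest("hex");
      const ok = sig.length === expected.length &&
        timingSafeEqual(Buffer.from(sig), Buffer.from(expected));

      if (ok) {
        const req: ActionRequest = JSON.parse(payload);
        const caseNumber = await executeAction(req); // step 5

        // Step 6: publish the result so the API's SSE stream can
        // forward it to the dashboard session that submitted it.
        await redis.publish(
          `results:${guildId}`,
          JSON.stringify({ request_id: req.request_id, case: caseNumber }),
        );
      }
      // Ack even rejected messages so forged entries aren't redelivered.
      await redis.xack(stream, "guard", id);
    }
  }
}
```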

Why it matters

One case schema means one audit trail. Open the Cases page, filter by source=automod, and you see every action AutoMod took in the last 30 days. Switch to source=dashboard and you see every action your team took manually. The case number is monotonic and guild-local: case #42 means the 42nd case ever created in your server, regardless of who or what triggered it (a sketch of the numbering follows below).
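
One way to allocate that guild-local, monotonic number at insert time is sketched below, assuming node-postgres and a unique constraint on (guild_id, case_number). The mod_cases table and source values come from the post; the column layout and helper name are assumptions.

```ts
import { Pool } from "pg";

const pool = new Pool();

async function insertCase(guildId: string, req: ActionRequest): Promise<number> {
  // COALESCE(MAX(...), 0) + 1 allocates the next guild-local number;
  // the unique constraint on (guild_id, case_number) turns the rare
  // concurrent race into a retryable conflict instead of a duplicate.
  const { rows } = await pool.query(
    `INSERT INTO mod_cases
       (guild_id, case_number, action, target_id, moderator_id, reason, source)
     SELECT $1, COALESCE(MAX(case_number), 0) + 1, $2, $3, $4, $5, $6
       FROM mod_cases WHERE guild_id = $1
     RETURNING case_number`,
    [guildId, req.action, req.target_id, req.moderator_id, req.reason, req.source],
  );
  return rows[0].case_number;
}
```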

Per-moderator rate limits run on the API before an action ever enters the queue. If one moderator tries to ban 200 users in five minutes, the API rejects every request past their hourly budget — even if they're using mass-ban from the dashboard. This is audit-visible too: rate-limit rejections show up in the audit log with the attempted action (a minimal version of the check is sketched below).
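
A fixed-window version of that budget check is only a few lines, reusing the redis client from the pipeline sketches. The key scheme and the limit parameter are illustrative, not Kovra's actual numbers.

```ts
// Count actions per moderator per clock hour; INCR + EXPIRE keeps the
// counter self-cleaning. Over-budget requests never reach the queue.
async function withinBudget(moderatorId: string, limit: number): Promise<boolean> {
  const hour = Math.floor(Date.now() / 3_600_000);
  const key = `budget:${moderatorId}:${hour}`;
  const count = await redis.incr(key);
  if (count === 1) await redis.expire(key, 7200); // outlive the window
  return count <= limit;
}
```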

What we learned

Signing every action inside our own system (not just external webhooks) is the kind of paranoia that pays off in incidents. If a staff account gets phished and someone tries to inject a forged action into the queue while bypassing the API, the HMAC check on Guard's side is the last line of defense. We've had zero forged-request incidents, but we sleep better knowing the layer is there.

Next post in this series: the per-guild SSE stream that keeps the dashboard live without polling.