
Best practices

Lifecycle events can be retried (on_install, on_uninstall, on_user_added). Inbound messages can be redelivered if a partner channel does so. Schedules don’t retry but can race across worker restarts. Write every handler so re-running it is safe.

```python
# ❌ Re-running creates duplicates
@app.on_install
async def on_install(ctx) -> None:
    await ctx.tools.call("odoo_create",
        model="res.partner", values={"name": "My App", ...})

# ✓ Use a dedup marker
@app.on_install
async def on_install(ctx) -> None:
    existing = await ctx.tools.call("odoo_search_read",
        model="res.partner",
        domain=[["x_my_app_marker", "=", True]], limit=1)
    if existing.get("records"):
        return
    await ctx.tools.call("odoo_create",
        model="res.partner",
        values={"name": "My App", "x_my_app_marker": True})
```

For inbound messages, the message_id is your dedup key. Persist a small set of recently-handled IDs (in your own DB or an Odoo custom field) and short-circuit on repeats.
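A minimal sketch of that dedup check. The in-memory set here is purely illustrative; a real app would persist the IDs (your own DB or an Odoo custom field) so they survive container restarts:

```python
from collections import deque

# Illustrative only: recently-handled message IDs, bounded so they don't grow forever.
_seen_ids: set[str] = set()
_seen_order: deque[str] = deque()
_MAX_SEEN = 1000

def already_handled(message_id: str) -> bool:
    """Return True if this message_id was seen before; otherwise record it."""
    if message_id in _seen_ids:
        return True
    _seen_ids.add(message_id)
    _seen_order.append(message_id)
    if len(_seen_order) > _MAX_SEEN:
        _seen_ids.discard(_seen_order.popleft())  # evict the oldest ID
    return False
```

An inbound-message handler would call `already_handled(message_id)` first and return early on a repeat.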

Tenants don’t always grant every scope. Catch ToolCallError(decision='scope_denied') and degrade:

```python
try:
    await ctx.tools.call("odoo_create", model=model, values=values)
except ToolCallError as err:
    if err.decision == "scope_denied":
        print("[my-app] odoo.write not granted; skipping")
        return
    raise  # re-raise everything else
```

Network errors are platform-side, not tenant-side


ToolCallError(decision='network_error') and decision='platform_error' are transient — usually a worker restart or a brief Redis hiccup. Don’t surface “the tenant did something wrong” to the user; log it and let the natural retry mechanism handle it.
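That triage can be kept explicit in code. A sketch: the decision strings are the platform's, but the mapping labels and the `classify` helper are illustrative:

```python
# Decisions that indicate a platform-side blip, not a tenant mistake.
TRANSIENT_DECISIONS = {"network_error", "platform_error"}

def classify(decision: str) -> str:
    """Map a ToolCallError decision to how the handler should react."""
    if decision in TRANSIENT_DECISIONS:
        return "log-and-retry"   # platform blip: don't blame the tenant
    if decision == "scope_denied":
        return "degrade"         # tenant chose not to grant the scope
    return "raise"               # anything else is a real bug: surface it
```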

The platform deliberately strips stack traces from the HTTP response that flows back to the tenant agent. Your stderr is the source of truth for debugging — visible to you in the Logs tab of the dev console.

Logging and the trust boundary

  • Your stderr: log generously. It is visible only to you and may include stack traces, partner-internal data, and debug strings.
  • HTTP response body: nothing partner-internal. The platform may surface it to the tenant agent or relay it across the trust boundary.
  • tool_input is logged to linkworld_skill_audit_log for every call, so don’t put secrets in tool_input. The platform never logs the value field of a secret.get call, but it logs every other argument verbatim.
  • Use ctx.secrets.get() for credentials, not function args.

Tenant isolation

  • The platform enforces per-(tenant, app) isolation at the DB layer. Two tenants of the same app run in different containers and never see each other’s data.
  • Don’t try to share state across tenants in your container’s memory. The container is per-tenant and may be torn down at any moment.

Secret rotation

  • Treat rotation as routine, not an emergency. The Secrets UI has a “Rotate” action that performs a single-shot replace.
  • Watch linkworld_app_secret_audit_failures_total in your dashboard: a non-zero value means silent gaps in your audit trail. Alert on it.
  • If the underlying API rejects the secret (for example, it was rotated by the upstream vendor), fail loudly to the tenant; don’t retry indefinitely.

Deployment

  • Pin your dependencies. The platform builds your container from your Dockerfile, and floating versions mean surprise breakage at deploy time.
  • Keep cold-start short. Containers wake on demand, and 99th-percentile cold-start latency directly hits the tenant agent’s UX.

Schedules

  • Cron precision is 1 minute. Don’t write * * * * * and expect reliable per-minute execution under load; write */5 * * * * if you need sub-hourly runs without tight precision.
  • Schedules are at-most-once. A tick missed during a worker restart doesn’t replay, so use idempotent semantics: “ensure today’s digest was sent”, not “send today’s digest now”.
  • A schedule handler that takes longer than 60 seconds may receive the next tick before it finishes. Track in-flight state if that matters.
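The “ensure, don’t send” pattern from the schedule bullets above, as a sketch. Durable storage is faked with an in-memory set, and `send_digest` is a stand-in for your real send routine:

```python
import datetime

_sent_dates: set[datetime.date] = set()  # stand-in for durable storage

def ensure_digest_sent(send_digest, today: datetime.date) -> bool:
    """Idempotent: safe whether a tick is missed, doubled, or delayed."""
    if today in _sent_dates:
        return False            # already done: an extra tick is a no-op
    send_digest(today)
    _sent_dates.add(today)      # record only after sending succeeds
    return True
```

Because the date is recorded only after a successful send, a handler that crashes mid-send will try again on the next tick rather than silently skipping the day.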

Every LLM call you make through ctx.tools.call(...) is automatically attributed to your app via Prometheus labels, and your dev console’s Usage tab shows token counts, compute seconds, and a per-tenant breakdown. Don’t pay for usage you never intended to ship: review the tab weekly, especially during early deploys, to catch runaway loops.

If your app’s compute-seconds spikes without a corresponding tool-call spike, you have a runaway in your handler. Check the Logs tab.