# Best practices

## Idempotency

Lifecycle events can be retried (on_install, on_uninstall, on_user_added). Inbound messages can be redelivered if a partner channel does so. Schedules don’t retry but can race across worker restarts. Write every handler so re-running it is safe.
```python
# ❌ Re-running creates duplicates
@app.on_install
async def on_install(ctx) -> None:
    await ctx.tools.call("odoo_create", model="res.partner", values={"name": "My App", ...})
```

```python
# ✓ Use a dedup marker
@app.on_install
async def on_install(ctx) -> None:
    existing = await ctx.tools.call(
        "odoo_search_read",
        model="res.partner",
        domain=[["x_my_app_marker", "=", True]],
        limit=1,
    )
    if existing.get("records"):
        return
    await ctx.tools.call(
        "odoo_create",
        model="res.partner",
        values={"name": "My App", "x_my_app_marker": True},
    )
```

For inbound messages, the message_id is your dedup key. Persist a small set of recently-handled IDs (in your own DB or an Odoo custom field) and short-circuit on repeats.
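The “persist a small set of recently-handled IDs” step can be sketched as a bounded in-memory set. This is illustrative only — the class name and size limit are assumptions, and in production the set should live in durable storage (your DB or an Odoo custom field) as described above:

```python
from collections import OrderedDict

class RecentIds:
    """Bounded set of recently-handled message IDs (illustrative; persist durably in production)."""

    def __init__(self, max_size: int = 1000) -> None:
        self._ids = OrderedDict()
        self._max_size = max_size

    def seen(self, message_id: str) -> bool:
        """Return True if this ID was already handled; otherwise record it and return False."""
        if message_id in self._ids:
            self._ids.move_to_end(message_id)  # refresh recency
            return True
        self._ids[message_id] = None
        if len(self._ids) > self._max_size:
            self._ids.popitem(last=False)  # evict the oldest ID
        return False
```

A redelivered message then hits `seen(...) == True` and the handler returns early, which is exactly the short-circuit behavior the dedup key exists for.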
## Error handling

### Graceful scope-deny

Tenants don’t always grant every scope. Catch ToolCallError(decision='scope_denied') and degrade gracefully:
```javascript
try {
  await ctx.tools.call('odoo_create', { model, values })
} catch (err) {
  if (err instanceof ToolCallError && err.decision === 'scope_denied') {
    console.log('[my-app] odoo.write not granted; skipping')
    return
  }
  throw err // re-raise everything else
}
```

### Network errors are platform-side, not tenant-side
ToolCallError(decision='network_error') and decision='platform_error' are transient — usually a worker restart or a brief Redis hiccup. Don’t surface “the tenant did something wrong” to the user. Log it and let the natural retry mechanism handle it.
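That routing rule can be sketched as follows. The ToolCallError class here is a hypothetical stand-in that mirrors the decision attribute used by this section's examples, not the SDK's real definition:

```python
# Hypothetical stand-in for the SDK's ToolCallError, mirroring the
# decision attribute used by the examples in this section.
class ToolCallError(Exception):
    def __init__(self, decision: str) -> None:
        super().__init__(decision)
        self.decision = decision

TRANSIENT_DECISIONS = {"network_error", "platform_error"}

def route_error(err: Exception) -> str:
    """Decide what to do with a failed tool call.

    Transient platform-side decisions: log and let the retry mechanism run.
    Everything else: re-raise, since it may be a real bug or tenant-facing.
    """
    if isinstance(err, ToolCallError) and err.decision in TRANSIENT_DECISIONS:
        return "log_and_retry"  # platform blip, not the tenant's fault
    return "reraise"
```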
### Partner exceptions stay in your container

The platform deliberately strips stack traces from the HTTP response that flows back to the tenant agent. Your stderr is the source of truth for debugging — visible to you in the Logs tab of the dev console.
## Trust boundaries

### Your container ↔ the platform

- Your stderr: log generously. It is visible only to you and may include stack traces, partner-internal data, and debug strings.
- HTTP response body: include nothing partner-internal. The platform may surface it to the tenant agent or relay it across the trust boundary.
### Your tool input ↔ the audit log

- `tool_input` is logged to `linkworld_skill_audit_log` for every call.
- Don’t put secrets in `tool_input`. The platform never logs the `value` field of a `secret.get` call, but it does log other args verbatim.
- Use `ctx.secrets.get()` for credentials, not function args.
### Tenant data ↔ other tenants

- The platform enforces per-(tenant, app) isolation at the DB layer. Two tenants of the same app run in different containers and never see each other’s data.
- Don’t try to share state across tenants in your container’s memory. The container is per-tenant and may be torn down at any moment.
## Secret rotation

- Treat rotation as routine, not an emergency. The Secrets UI has a “Rotate” action that’s a single-shot replace.
- Watch `linkworld_app_secret_audit_failures_total` in your dashboard — a non-zero value means silent gaps in your audit trail. Alert on it.
- If the underlying API rejects the secret (e.g. it was rotated by the upstream vendor), fail loudly to the tenant — don’t retry indefinitely.
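“Fail loudly, don’t retry indefinitely” can be sketched as a bounded retry. The function name is an assumption, and PermissionError stands in here for whatever exception your upstream client raises on an auth rejection:

```python
def call_with_bounded_retry(do_call, max_attempts: int = 2):
    """Retry a credentialed call a fixed number of times, then fail loudly."""
    last_err = None
    for _ in range(max_attempts):
        try:
            return do_call()
        except PermissionError as err:  # stand-in for an upstream auth rejection
            last_err = err
    # Surface the failure to the tenant instead of retrying forever.
    raise RuntimeError(
        f"upstream rejected the credential after {max_attempts} attempts; "
        "the secret likely needs rotation"
    ) from last_err
```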
## Schedule design

- Cron precision is 1 minute. Don’t write `* * * * *` and expect reliable per-minute execution under load — write `*/5 * * * *` if you need sub-hour granularity but don’t need precision.
- Schedules are at-most-once. If a tick gets missed (worker restart), it doesn’t replay. Use idempotent semantics: “ensure today’s digest was sent”, not “send today’s digest now”.
- A schedule handler that takes more than 60 seconds may receive the next tick before it finishes. Track in-flight state if that matters.
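The “ensure today’s digest was sent” semantics can be sketched like this. The store dict and key format are assumptions standing in for durable storage (your DB or an Odoo custom field):

```python
from datetime import date

def ensure_daily_digest(store: dict, today: date, send) -> bool:
    """Idempotent tick: ensure today's digest was sent, rather than sending blindly."""
    key = f"digest_sent:{today.isoformat()}"
    if store.get(key):
        return False  # a duplicate or late tick finds the marker and does nothing
    send()
    store[key] = True  # in a real app, persist this marker durably
    return True
```

Because the handler checks state before acting, a missed tick costs one digest at most, and a duplicate tick costs nothing.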
## Dependencies

- Pin your dependencies. The platform builds your container from your `Dockerfile`; floating versions mean surprise breakage at deploy time.
- Keep cold-start short. Containers wake on-demand, and 99th-percentile cold-start latency directly hits the tenant agent’s UX.
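For instance, pinning both the base image and package versions in the build — file names and version numbers here are illustrative, not part of the platform:

```dockerfile
# Dockerfile — pin the base image tag exactly, not python:3-slim
FROM python:3.12.4-slim

# requirements.txt should pin exact versions, e.g.
#   httpx==0.27.0   (not httpx>=0.27, which floats and breaks at deploy)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . /app
CMD ["python", "/app/main.py"]
```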
## Observability

Every LLM call you make through ctx.tools.call(...) is automatically attributed to your app via Prometheus labels. Your dev console’s Usage tab shows token counts, compute seconds, and a per-tenant breakdown. Don’t pay for something you didn’t ship — review usage weekly, especially during early deploys, to catch runaway loops.

If your app’s compute-seconds spike without a corresponding tool-call spike, you have a runaway loop in your handler. Check the Logs tab.