AI agents that interact with the real world through tool calls pose fundamental safety challenges: agents might leak private information, cause unintended side effects, or be manipulated through prompt injection. We present tacit (Tracked Agent Capabilities In Types), an open-source MCP (Model Context Protocol) server written in Scala that addresses these challenges with a programming-language-based “safety harness”: instead of calling tools directly, agents generate typed programs in a capability-safe subset of Scala 3. Capabilities are program variables that regulate access to effects and resources; Scala’s type system tracks them statically, providing fine-grained control over what an agent can do. In particular, it enforces local purity, preventing information leakage when agents process classified data. Our experiments show that agents generate capability-safe code with no significant loss in task performance, while the type system reliably prevents unsafe behaviors such as information leakage and adversarial side effects.
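To give a flavor of the capability-passing style described above, here is a minimal, hypothetical sketch in Scala 3. The names (`FileSystem`, `readFile`, `summarize`) are illustrative assumptions, not tacit's actual API: a capability is an ordinary value, effectful functions must receive it explicitly, and a function whose signature carries no capability parameters is locally pure by construction.

```scala
// Hypothetical sketch of capability-passing style in Scala 3.
// All names here are illustrative, not part of tacit's real API.

// A capability is an ordinary value; holding a reference grants access.
trait FileSystem:
  def read(path: String): String

// Effectful code must take the capability explicitly via a `using` clause...
def readFile(path: String)(using fs: FileSystem): String =
  fs.read(path)

// ...while a function with no capability parameters is locally pure:
// its type signature alone guarantees it cannot touch the file system
// (or any other tracked effect), so classified input cannot leak out.
def summarize(classified: String): Int =
  classified.linesIterator.size

@main def demo(): Unit =
  // A stub capability instance, provided as a `given`.
  given FileSystem with
    def read(path: String): String = s"contents of $path"
  val doc = readFile("/tmp/report.txt")
  println(summarize(doc))
```

Because `summarize` receives no `FileSystem` (or any other capability), the type checker statically rules out effects inside it; an agent-generated program that tries to smuggle classified data into an effectful call simply fails to compile.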