AI Literacy for Platform Engineers
From Hype to Understanding to Better Decisions
(Available in single and double session formats; in-person and remote)
AI is no longer a novelty on your platform — it’s already inside it.
From code copilots to incident bots to half-finished automations someone wired up at 2 a.m., AI is creeping into every corner of the Internal Developer Platform. The danger isn’t misuse — it’s unconscious use: agents with no domain model, assistants with undefined permissions, and automations you can’t audit, observe or roll back. This workshop gives platform engineers the literacy to take back control.
Because AI on a platform isn’t magic: it’s architecture, and it’s opportunity
Real AI literacy for platform teams isn’t about prompting tricks or model trivia. It’s about DICE: Domain clarity, Inputs, Constraints and Environment. It’s about designing agents with explicit goals, tuned toolboxes, precise specifications and crisp boundaries so they can’t wander off into misaligned behaviour. It’s about understanding GOAP (Goal-Oriented Action Planning), the way agents plan, so you can constrain what is thinkable in the first place. And it’s about engineering habitats where humans, agents and automations can collaborate without chaos.
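To make DICE concrete, here is a minimal sketch of what an agent spec might look like as a typed artifact, in TypeScript. Every type and field name here is illustrative, not part of any standard or of the workshop materials; the point is that each DICE dimension becomes an explicit, reviewable field rather than an implicit assumption.

```typescript
// Hypothetical agent spec capturing the DICE dimensions.
// All names are illustrative, not a standard schema.

/** The domain the agent is allowed to reason about. */
interface Domain {
  name: string;              // e.g. "incident-response"
  entities: string[];        // concepts the agent may act on
}

/** A single tool in the agent's toolbox, with an explicit contract. */
interface Tool {
  name: string;
  description: string;
  maxCallsPerRun: number;    // a hard budget, not a suggestion
}

/** One agent spec: goal, plus the D, I, C and E dimensions. */
interface AgentSpec {
  goal: string;                        // explicit, singular goal
  domain: Domain;                      // D: domain clarity
  inputs: string[];                    // I: what the agent may read
  constraints: {                       // C: what it must never do
    forbiddenActions: string[];
    requiresHumanApproval: string[];
  };
  environment: {                       // E: where it runs
    tenant: string;
    tools: Tool[];                     // tuned toolbox, nothing more
  };
}

// Example: a narrowly scoped incident-triage agent.
const triageAgent: AgentSpec = {
  goal: "Summarise and classify new incidents",
  domain: { name: "incident-response", entities: ["incident", "service"] },
  inputs: ["incident-feed"],
  constraints: {
    forbiddenActions: ["deploy", "delete"],
    requiresHumanApproval: ["page-oncall"],
  },
  environment: {
    tenant: "platform-team",
    tools: [
      {
        name: "search-runbooks",
        description: "Read-only runbook search",
        maxCallsPerRun: 5,
      },
    ],
  },
};
```

A spec like this is what makes "constraining the thinkable" practical: anything not named in the toolbox or the domain simply isn’t available to the planner.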
And as soon as you have more than one agent, you have a system.
AI-to-AI collaboration demands structure: typed contracts, domain APIs, and safe delegation patterns instead of free-text improvisation. Your platform needs provenance, observability and debuggability baked in from the start. And that means adopting the modern backbone of safe agentic systems: the Model Context Protocol (MCP), with an MCP Gateway enforcing identity, tenancy, permissions, auditability and guardrails. Without that layer, your platform is an unregulated wilderness.
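The sketch below illustrates the kind of enforcement a gateway layer adds in front of agent tool calls. It is not the MCP wire protocol itself; the policy shape, function names and audit structure are all assumptions made for illustration.

```typescript
// Illustrative gateway-style policy check in front of agent tool calls.
// This is NOT the MCP protocol; it sketches the identity, tenancy,
// permission and audit enforcement an MCP Gateway layer would provide.

interface ToolCallRequest {
  agentId: string;
  tenant: string;
  tool: string;
  arguments: Record<string, unknown>;
}

interface GatewayPolicy {
  tenant: string;
  allowedTools: Set<string>;
}

interface AuditEvent {
  timestamp: string;
  agentId: string;
  tool: string;
  decision: "allow" | "deny";
}

const auditLog: AuditEvent[] = [];

function authorize(req: ToolCallRequest, policy: GatewayPolicy): boolean {
  // Tenancy: an agent may only act within its own tenant.
  const sameTenant = req.tenant === policy.tenant;
  // Permissions: the tool must be on the agent's allow-list.
  const toolAllowed = policy.allowedTools.has(req.tool);
  const decision = sameTenant && toolAllowed ? "allow" : "deny";

  // Auditability: every decision is recorded with provenance.
  auditLog.push({
    timestamp: new Date().toISOString(),
    agentId: req.agentId,
    tool: req.tool,
    decision,
  });
  return decision === "allow";
}

// Example: a triage agent asking to page the on-call engineer.
const policy: GatewayPolicy = {
  tenant: "platform-team",
  allowedTools: new Set(["search-runbooks"]),
};
const ok = authorize(
  { agentId: "triage-1", tenant: "platform-team", tool: "page-oncall", arguments: {} },
  policy,
);
console.log(ok); // false: "page-oncall" is not on the allow-list
```

Because every decision lands in an audit trail, the gateway is also where provenance and debuggability come from: you can always answer "which agent called what, when, and why was it allowed?"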
This workshop gives you the habits to build the opposite: an intelligent, governable platform.
In a single session you’ll learn the mental models, safety patterns and design disciplines that keep agents focused, predictable and accountable. You’ll draft an AI Agent Spec, learn how to spot unsafe behaviour, and understand exactly where to place guardrails, observability hooks and integration boundaries. If your platform is going to host AI (and it will), this is the literacy your engineers need to make that future safe, explainable and production-ready.
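As a final taste of "exactly where to place guardrails": one natural placement is between planning and execution, validating an agent’s plan against its spec before anything runs. The sketch below assumes hypothetical "PlannedStep" and "checkPlan" shapes; neither comes from any framework or from the workshop itself.

```typescript
// Minimal sketch of a pre-execution guardrail: validate an agent's
// planned steps against its spec before anything runs.

interface PlannedStep {
  action: string;
  tool: string;
}

interface Guardrail {
  forbiddenActions: Set<string>;
  allowedTools: Set<string>;
}

function checkPlan(plan: PlannedStep[], guard: Guardrail): string[] {
  const violations: string[] = [];
  for (const step of plan) {
    if (guard.forbiddenActions.has(step.action)) {
      violations.push(`forbidden action: ${step.action}`);
    }
    if (!guard.allowedTools.has(step.tool)) {
      violations.push(`tool not in toolbox: ${step.tool}`);
    }
  }
  return violations; // empty means the plan stays inside its boundaries
}

// Example: a drifting plan is caught before execution, not after the incident.
const violations = checkPlan(
  [
    { action: "summarise", tool: "search-runbooks" },
    { action: "deploy", tool: "ci-pipeline" },
  ],
  {
    forbiddenActions: new Set(["deploy", "delete"]),
    allowedTools: new Set(["search-runbooks"]),
  },
);
console.log(violations); // ["forbidden action: deploy", "tool not in toolbox: ci-pipeline"]
```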