Context Model for Physical AI Agents
Physical AI Agents generate code based on the context available in the conversation and the Telekinesis Skill Library. This context determines what the agent understands about the task and directly influences the quality and correctness of the generated output.
The agent operates using a combination of:
- Your natural language instruction (prompt)
- Recent conversation history
- The Telekinesis Skill Library (available Skills)
The agent does not access file systems, external workspaces, or runtime environments. All reasoning and generation are constrained to the conversation and the Skill Library.
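As a rough sketch, the agent's full reasoning context can be modeled as a single structure assembled from exactly these three sources. The names below are illustrative, not the Telekinesis API:

```python
from dataclasses import dataclass, field


@dataclass
class AgentContext:
    """Illustrative model of everything the agent reasons over."""
    prompt: str                                        # current natural language instruction
    history: list[str] = field(default_factory=list)   # recent conversation turns
    skills: list[str] = field(default_factory=list)    # names of available Skills


def assemble_context(prompt: str, history: list[str], skills: list[str]) -> AgentContext:
    # Everything the agent reasons over comes from these three inputs;
    # there is no file system, runtime state, or hidden memory to draw on.
    return AgentContext(prompt=prompt, history=list(history), skills=list(skills))


ctx = assemble_context(
    "Pick up the red block",
    ["Robot is at home pose"],
    ["move_to", "grasp"],
)
```

The point of the model is the closed-world property: if a fact is not in one of these three fields, it cannot influence the generated code.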
Skill Context
Physical AI Agents operate with access to the Telekinesis Skill Library. Each Skill provides structured information including:
- Function name
- Input parameters and types
- Output parameters and types
- Description of behavior
- Usage examples
Skills define the action space of the agent and serve as the interface between natural language reasoning and executable robotics code.
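A minimal sketch of what one Skill entry might look like, with the five kinds of information listed above. Field names and the example Skill are hypothetical, not the actual Telekinesis schema:

```python
from dataclasses import dataclass, field


@dataclass
class Skill:
    """One entry in a Skill Library (illustrative field names)."""
    name: str                        # function name
    inputs: dict[str, str]           # input parameter name -> type
    outputs: dict[str, str]          # output parameter name -> type
    description: str                 # description of behavior
    examples: list[str] = field(default_factory=list)  # usage examples


# Hypothetical Skill for illustration only.
grasp = Skill(
    name="grasp_object",
    inputs={"object_id": "str", "force_limit_n": "float"},
    outputs={"success": "bool"},
    description="Close the gripper on the named object without exceeding the force limit.",
    examples=['grasp_object(object_id="cup_1", force_limit_n=15.0)'],
)
```

Structured entries like this are what lets the agent map a natural language request onto a concrete, type-checked function call.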
Context Limitations
The agent operates within a bounded context window (typically tens of thousands of tokens depending on configuration). When this limit is reached, older or less relevant information may be truncated.
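One common truncation strategy is to drop the oldest conversation turns first, keeping as much recent history as fits the budget. This is a sketch of that policy, using a crude whitespace-based token estimate rather than any real tokenizer:

```python
def truncate_history(turns: list[str], budget: int) -> list[str]:
    """Keep the most recent turns whose combined (rough) token count
    fits within the budget; older turns are dropped first."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):      # walk newest-first
        cost = len(turn.split())      # crude token estimate
        if used + cost > budget:
            break                     # everything older is dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))       # restore chronological order


turns = ["old setup details " * 10, "move arm left", "now grasp the cup"]
recent = truncate_history(turns, budget=10)  # the long old turn is dropped
```

Production systems may use relevance scoring rather than pure recency, but the effect is the same: information outside the window cannot shape the output.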
The following are not available:
- File system or workspace access
- External runtime state
- Persistent memory across sessions
- Hidden environment variables
Why Context Window Matters
The agent’s outputs are determined entirely by explicit context: the prompt, the conversation history, and the Skills. This makes behavior predictable and inspectable within a defined reasoning space; conversely, only information that fits inside the context window can influence the generated code.