
Telekinesis Agentic Skill Library
The Intelligence Layer for Physical AI

Telekinesis Agentic Skill Library for Computer Vision, Robotics and Physical AI applications


What is Telekinesis?

Robotics development is fragmented across incompatible tools, APIs, and frameworks. Teams must constantly integrate perception stacks, motion libraries, hardware interfaces, and AI models—resulting in brittle systems and slow iteration.

Telekinesis replaces this fragmentation with a unified library where Physical AI Agents, perception, planning, control, and robot learning work seamlessly together.

Physical AI Agent: Prompt To Robot Code

Prompt:

I have a UR10e and an RG6 gripper. I want to do a repackaging task where the parts are placed in a rectangular grid and need to be moved into
another grid that has a fixed offset on the x axis and the y axis, and where every other row is offset from the previous row. The first row has
n slots, the second m, the third n, and so on; every other row is identical. When picking up the parts, do not open the gripper all the way, as the parts are close together.
Start and end the program at a home position.

Physical AI Agents take natural language instructions and generate executable code using the Telekinesis Skill Library to perform complex robotics tasks.
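
As a sketch of the kind of logic such generated code contains, the alternating-grid placement from the prompt above can be expressed in plain Python. Everything here is illustrative: `place_positions` is a hypothetical helper, the pitches are made-up values, and the actual motion and gripper commands would go through Telekinesis Skills (indicated only as comments).

```python
def place_positions(n, m, rows, dx, dy, origin=(0.0, 0.0)):
    """Slot centers for a grid where odd-numbered rows (first, third, ...)
    have n slots and even-numbered rows have m, with a fixed x/y pitch
    and every other row shifted by half a column pitch."""
    x0, y0 = origin
    positions = []
    for row in range(rows):
        slots = n if row % 2 == 0 else m          # rows alternate n, m, n, ...
        x_shift = dx / 2 if row % 2 else 0.0      # every other row is offset
        for col in range(slots):
            positions.append((x0 + x_shift + col * dx, y0 + row * dy))
    return positions

# Example: 3 rows alternating 4 and 3 slots, 50 mm x-pitch, 80 mm y-pitch
slots = place_positions(n=4, m=3, rows=3, dx=0.05, dy=0.08)

# For each slot, the generated program would then call Telekinesis Skills:
# move to home, partially open the gripper (the parts are close together),
# pick, move to the slot, place, and return to home at the end.
```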

Telekinesis Physical AI Agents
Learn how to build agents that turn natural-language prompts into executable robot code.
Explore →

Getting Started

New to Telekinesis? This is the recommended setup path. Install the core Telekinesis Skill Library, run the Quickstart to verify your setup, then add Physical AI Agents or BabyROS when you are ready to build larger workflows.

  1. Install Telekinesis Skill Library
    Create your API key, set up Python environment, and install the telekinesis-ai SDK.

  2. Run the Quickstart
    Run your first vision, hardware, or robotics Skill in minutes — no clone required.

  3. Install Physical AI Agents
    Once telekinesis-ai is set up, install the agent tooling to generate executable robot code from natural-language prompts.

  4. Install BabyROS
    Additionally, install BabyROS to connect robots, cameras, sensors, and distributed services.

Start with the core SDK
Install telekinesis-ai SDK first, then run the Quickstart before setting up Agents or BabyROS.
Start →
Join our Discord community
Get help, share what you build, and connect with other Physical AI developers.
Join Discord →

Understanding the Telekinesis Ecosystem

Telekinesis is a unified robotics intelligence stack designed around five tightly integrated layers: Skills, Agents, Data Engine, BabyROS Middleware, and Industrial Applications.

Telekinesis Skill modules
Architecture of the Telekinesis ecosystem, showing Skills, Physical AI Agents, Data Engine and Applications. The user prompts the Agents, which generate the robot code by orchestrating Skills. The Data Engine captures all Skill executions and sensor data for Continuous Learning.

Together, they form a closed-loop system for building, executing, and improving Physical AI:

  1. Skills — production-grade atomic actions for perception, planning, control, and hardware. The executable building blocks of every robot behavior.
  2. Physical AI Agents — VLM/LLM-powered systems that decompose high-level goals into structured sequences of Skills.
  3. Data Engine — the memory layer: captures every Skill execution (inputs, outputs, trajectories, sensor data) and turns it into usable datasets.
  4. BabyROS Middleware — ultra-low-latency pub/sub and client/server messaging that runs Skills as distributed nodes across sensors, compute, and robots.
  5. Industrial Applications — real-world deployments across manufacturing, logistics, and agriculture, with the same Skills and Agents validated in simulation first.
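
The closed loop above can be sketched abstractly in a few lines of plain Python. All names here are illustrative stand-ins, not the Telekinesis API: a Skill is modeled as a callable, an Agent's plan as an ordered list of Skills, and the Data Engine as a log of every execution.

```python
def make_data_engine():
    """Return a shared log plus a recorder, standing in for the Data Engine."""
    log = []
    def record(skill_name, inputs, output):
        log.append({"skill": skill_name, "inputs": inputs, "output": output})
    return log, record

def run_plan(plan, state, record):
    """Execute an Agent's plan: run each Skill, record it, fold results into state."""
    for name, skill in plan:
        output = skill(state)
        record(name, dict(state), output)
        state.update(output)
    return state

# Hypothetical stand-in Skills (a real plan would call perception and motion Skills)
detect = lambda state: {"bbox": [400, 150, 1200, 450]}
grasp = lambda state: {"grasped": True, "at": state["bbox"][:2]}

log, record = make_data_engine()
final = run_plan([("detect", detect), ("grasp", grasp)], {}, record)
# `log` now holds every execution, ready to become a training dataset
```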

Skills

Skills are reusable modular operations for perception, robotics, and decision-making that can be chained into workflows for Physical AI applications in manufacturing, logistics and more.

Skills Overview
Browse the full catalog of Telekinesis Skill Library across perception, planning, control, and hardware.
Explore →

Skill Example 1 - Vision - segment_image_using_sam: Segment an image using the SAM model.

python
from telekinesis import cornea                                # Import Cornea - Image segmentation module

# Executing a 2D image segmentation Skill
result = cornea.segment_image_using_sam(                      # Executing Skill - `segment_image_using_sam`
    image=image,
    bboxes=[[400, 150, 1200, 450]]
)
# Access results
annotations = result.to_list()

Skill Example 2 - Robot - set_joint_positions: Set robot joint positions.

python
from telekinesis.synapse.robots.manipulators.universal_robots import UniversalRobotsUR10E # Importing the UR10E robot interface from Synapse - Robotics module

# Create and connect the robot
robot = UniversalRobotsUR10E()
robot.connect(ip="192.168.1.2")
# Execute a motion control Skill to set joint positions
robot.set_joint_positions(
    joint_positions=[0, 90, 0, -90, 0, 90],
    speed=60,
    acceleration=80,
    asynchronous=False
)
# Disconnect the robot after execution
robot.disconnect()

Robotics Skills

Control a wide set of different industrial and mobile robots such as Universal Robots, Anybotics and others through one consistent Python interface.

Synapse Overview
Explore the full Synapse robotics stack — manipulators, humanoids, quadrupeds, mobile robots, and grippers.
Explore →
python
from telekinesis import synapse # robotics skills

Manipulators

Mobile Robots & Quadrupeds

Humanoids

Computer Vision Skills

Use production-grade computer vision Skill Groups for obstacle detection, pose estimation, point-cloud processing, AI model training, and much more.

Cornea Overview
Production-grade 2D image segmentation Skills, including SAM-based segmentation for parts, obstacles, and scenes.
Explore →
python
from telekinesis import cornea            # image segmentation skills
from telekinesis import retina            # object detection skills
from telekinesis import pupil             # image processing skills
from telekinesis import vitreous          # point cloud processing skills
from telekinesis import iris              # AI model training skills
from telekinesis.medulla import cameras   # Medulla hardware communication skills

6D Pose Estimation from Point Clouds

Parts inspection using Hough circle detection for manufacturing quality control

Object Detection

Depalletizing boxes with computer vision segmentation for warehouse and logistics robotics

Object Segmentation

Reinforcement Learning, Imitation Learning and Vision-Language-Action Model Skills

Train and deploy learned robot policies and behaviors for locomotion, manipulation, and control.

RLBotics Overview
Learn all the ways to train and deploy reinforcement learning policies, imitation learning, and Vision-Language-Action models.
Explore →
python
from telekinesis import rlbotics   # reinforcement learning skills

Training Jump

Sim-to-Sim Deployment

Training Walk

Synthetic Dataset Generation Skills

Generate photo-realistic synthetic datasets to train and validate computer vision models.

Illusion Overview
Find out more about generating photo-realistic synthetic datasets to train and validate computer vision models.
Explore →
python
from telekinesis import illusion   # synthetic data generation skills
Synthetic Image 1 (photorealistic industrial scene)

Synthetic Image 2 (photorealistic industrial scene)

Synthetic Image 3 (photorealistic industrial scene)

Physical AI Agents

Physical AI Agents turn natural-language prompts into executable robot code by orchestrating the Telekinesis Skill Library. Use Tzara — our VS Code extension — to write a prompt and get a runnable Python program back.

Telekinesis Physical AI Agents Overview
Install Physical AI Agent, write your first prompt, and generate Skill-based robot code.
Explore →

Prompt:

I have a UR10e and an RG6 gripper. I have parts vertically placed in a grid that need to be picked and placed horizontally
in a new grid (requires a -90-degree flip between pick and place around the y axis). Add an optional intermediate joint
pose before the flip. Add logging at each step.

Physical AI Agent takes a natural-language instruction and generates executable Telekinesis Skill code that performs the full pick-flip-place task on a real UR10e.
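
The geometric core of that task, the -90-degree flip about the y axis, can be checked with a few lines of plain Python. This is only a sketch of the math; `rot_y` and the vectors are illustrative, and the real program would command the flip through Telekinesis robot Skills.

```python
import math

def rot_y(deg):
    """3x3 rotation matrix for an angle (in degrees) about the y axis."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def mat_vec(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# A part picked with its axis vertical (+z) ...
up = [0.0, 0.0, 1.0]
# ... ends up horizontal after the -90 degree flip about y
flipped = mat_vec(rot_y(-90), up)
```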

Building something with Agents?
Share your prompts and results on Discord. We love seeing what the community builds with our Physical AI Agents.
Join Discord →

Data Engine

The memory layer of Telekinesis — automatically captures sensor streams, robot states, transforms, and full Skill I/O from every run. Replay failures against successful runs to debug, then turn it all into production-grade datasets for training, evaluation, and continual learning.
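
As a rough illustration of that fusion step (not the Data Engine API; `fuse` and the field names are hypothetical), event-driven records from different sources can be aligned on a shared timestamp and flattened into tabular rows:

```python
def fuse(events):
    """Group event records by timestamp into one tabular row per tick."""
    rows = {}
    for e in events:
        row = rows.setdefault(e["t"], {"t": e["t"]})
        row[e["source"]] = e["value"]
    return [rows[t] for t in sorted(rows)]

# Illustrative events from two sources; a real run would stream many more
events = [
    {"t": 0, "source": "joint_state", "value": [0, 90, 0]},
    {"t": 0, "source": "camera", "value": "frame_000"},
    {"t": 1, "source": "joint_state", "value": [5, 85, 0]},
]
table = fuse(events)
# table[0] holds both sources at t=0; table[1] has only the joint state
```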

Telekinesis Data Engine
The Data Engine ingests unstructured, event-driven data from multiple sources and fuses it into structured tabular datasets optimized for training Physical AI models.
Data Engine Overview
Capture sensor data, robot states, transforms, and Skill outputs for debugging, evaluation, and continual learning.
Explore →

BabyROS Middleware

Ultra-low-latency pub/sub and client/server messaging built on Zenoh that connects sensors, actuators, AI modules, and control loops across microcontrollers, edge devices, and the cloud. ROS-style ergonomics with a single pip install babyros — no system-wide dependencies, workspace overlays, or custom .msg files required.

Telekinesis Skill modules
BabyROS uses ROS-style publish/subscribe and client/server architectures for inter-device communication in distributed robotics systems, but with a lightweight, ultra-low latency middleware built on Zenoh.
BabyROS Overview
Lightweight, ultra-low-latency pub/sub and client/server messaging built on Zenoh — ROS-style ergonomics, no full ROS install required.
Explore →
bash
pip install babyros

Example Publisher:

python
import babyros

publisher = babyros.node.Publisher(topic="data/topic")
publisher.publish({"data": 123})

Example Subscriber:

python
import babyros

def callback(message):
    print("Received:", message)

subscriber = babyros.node.Subscriber(topic="data/topic", callback=callback)

Industrial Applications

Build real-world robotics and Physical AI applications for industries such as manufacturing, automotive, aerospace, and others.

Relay Soldering

Laser Engraving

Assembly

Carton Palletizing

Quality Control (Panda)

Gear Assembly


Join our Discord Community to Add your Own Skills

The Telekinesis Agentic Skill Library is the beginning of a vibrant ecosystem. Whether you are a researcher, a hobbyist, or an industrial engineer, your work belongs here. Release your Skill, let others improve it, and see it deployed in real-world systems.

Be part of the Physical AI revolution!
Ask questions, contribute Skills, share your projects, and connect with researchers, hobbyists, and industrial engineers building the next generation of robotics.
Join Discord →