Get Link Transforms
SUMMARY
Get Link Transforms returns a 4×4 homogeneous matrix per frame in the kinematic model. Useful for visualization, attaching coordinate axes, or computing relative transforms between arbitrary frames without calling Forward Kinematics per frame.
SUPPORTED ROBOTS
Available on all supported manipulators.
UNITS
Returns a dict of link name → 4×4 SE(3) homogeneous transform.
The Skill
transforms = robot.get_link_transforms()
The Code
"""
Per-link forward kinematics example for the Synapse SDK.
``get_link_transforms`` returns ``{frame_name: world_T_link}`` — a 4x4
homogeneous matrix per frame in the kinematic chain at the current joint
configuration.
Universal Robots (UR10e) is used here purely for illustration; the skill works
with all supported robots.
Usage:
python get_link_transforms.py
"""
from loguru import logger
from telekinesis.synapse.robots.manipulators import universal_robots
from telekinesis.synapse import utils
def main():
    """Compute every frame's world pose at a chosen joint configuration."""
    # Create the robot (no connect required — runs on the kinematic model)
    robot = universal_robots.UniversalRobotsUR10E()
    # Set a non-trivial joint configuration [deg]
    q = [0, -90, 90, 0, 90, 0]
    robot.set_default_joint_configuration(q=q)
    # Read world transforms for every frame in the kinematic model
    transforms = robot.get_link_transforms()
    logger.info(f"Number of frames at q={q}: {len(transforms)}")
    # Print each frame's pose [x, y, z, rx, ry, rz] (m, deg)
    for name, T in transforms.items():
        pose = utils.transformation_matrix_to_pose(T, rot_type="deg")
        logger.success(f"{name}: pose [m, deg] = {pose}")
if __name__ == "__main__":
    main()
The Explanation of the Code
from telekinesis.synapse.robots.manipulators import universal_robots
from telekinesis.synapse import utils
robot = universal_robots.UniversalRobotsUR10E()
Create the robot. UR10e is used here for illustration; the same pattern works for every supported brand. No connect() is required - the call runs against the kinematic model.
q = [0, -90, 90, 0, 90, 0]
robot.set_default_joint_configuration(q=q)
Set the joint configuration the transforms are evaluated at. get_link_transforms reads from the current commanded state, so any set_* skill that updates state.joint_positions (or a live hardware read) drives the values.
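A minimal sketch of that dependency, reusing only the calls from the example above - two configurations, two snapshots:
robot.set_default_joint_configuration(q=[0, -90, 90, 0, 90, 0])
before = robot.get_link_transforms()
robot.set_default_joint_configuration(q=[0, -45, 45, 0, 90, 0])
after = robot.get_link_transforms()
# Compare one frame's world translation between the two reads. The last
# dict entry is used here on the assumption that insertion order follows
# the kinematic chain, so it is a distal frame.
name = list(before)[-1]
print(name, before[name][:3, 3], after[name][:3, 3])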
transforms = robot.get_link_transforms()
Return the world transform of every frame in the URDF as a {frame_name: 4×4 ndarray} dict in one call - cheaper than calling FK per frame. The "universe" root frame is excluded.
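For a single frame, index the dict directly. A minimal sketch, assuming a UR-style URDF where a frame named "tool0" exists (substitute a name from your own model):
T = transforms["tool0"]  # "tool0" is an assumed frame name
position = T[:3, 3]      # world translation [m]
rotation = T[:3, :3]     # 3x3 rotation matrix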
for name, T in transforms.items():
    pose = utils.transformation_matrix_to_pose(T, rot_type="deg")
Each matrix is full SE(3) - convert to the Synapse 6-vector [x, y, z, rx, ry, rz] (meters, Euler XYZ in degrees) via transformation_matrix_to_pose when you need a pose rather than the raw matrix.
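For reference, a hand-rolled sketch of the same conversion, assuming SciPy is available; transformation_matrix_to_pose remains the supported path, and the SciPy Euler convention used here should be checked against the SDK's output:
import numpy as np
from scipy.spatial.transform import Rotation

def matrix_to_pose_deg(T: np.ndarray) -> list:
    # [x, y, z, rx, ry, rz] in meters and Euler XYZ degrees
    xyz = T[:3, 3]
    # "xyz" is extrinsic in SciPy; verify the convention matches the SDK
    rxyz = Rotation.from_matrix(T[:3, :3]).as_euler("xyz", degrees=True)
    return [*xyz, *rxyz]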
Return Value
| Type | Description |
|---|---|
| dict[str, np.ndarray] | {frame_name: world_T_link} - each value is a 4×4 homogeneous transform at the current joint configuration. Excludes "universe". |
Where to Use the Skill
- Visualization - Stream every link's pose to a 3D visualizer without running FK per frame.
- Sensor attachment - Look up the world pose of any named frame for sensor calibration.
- Relative transforms - Compose world_T_a and world_T_b from the same call to get a_T_b without two extra FK calls (see the sketch below).
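A minimal sketch of the relative-transform case, assuming frames named "base_link" and "wrist_3_link" exist in the model (substitute names from your own URDF):
import numpy as np

transforms = robot.get_link_transforms()
# Both poses come from the same snapshot, so a_T_b is self-consistent
world_T_a = transforms["base_link"]     # assumed frame name
world_T_b = transforms["wrist_3_link"]  # assumed frame name
a_T_b = np.linalg.inv(world_T_a) @ world_T_b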
When Not to Use the Skill
- You only need the TCP pose - call Forward Kinematics or Get Cartesian Pose; both avoid the per-frame overhead.
- You need rendered mesh poses - use Get Visual Mesh Transforms, which composes each link's URDF <visual><origin> offset.

