
Get Visual Mesh Transforms

SUMMARY

Get Visual Mesh Transforms is the rendering-ready version of Get Link Transforms: link frames composed with the URDF <visual><origin> mesh offset, filtered to only the links that actually have a visual mesh.

SUPPORTED ROBOTS

Available on all supported manipulators.

UNITS

Returns a dict of link name → 4×4 SE(3) homogeneous transform; the translation column is in meters.

The Skill

python
transforms = robot.get_visual_mesh_transforms()
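
Each value is a 4×4 matrix whose last column holds the mesh origin's world position in meters. For example (the link name "shoulder_link" is illustrative, not guaranteed to be in the dict):

python
position = transforms["shoulder_link"][:3, 3]  # world x, y, z of the mesh origin, in meters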

The Code

python
"""
Read per-link visual mesh world transforms for the Synapse SDK.

``get_visual_mesh_transforms`` returns ``{link_name: world_T_visual_mesh}``
— a 4x4 homogeneous matrix per link, composed from the link frame and the
URDF ``<visual><origin>`` offset. Links without a usable visual mesh are
omitted.

Universal Robots (UR10e) is used here purely for illustration; the skill supports all manipulators.

Usage:
    python get_visual_mesh_transforms.py
"""

from loguru import logger

from telekinesis.synapse.robots.manipulators import universal_robots
from telekinesis.synapse import utils


def main():
    """Read every link's visual-mesh world transform at the current joint configuration."""

    # Create the robot (no connect required — runs on the kinematic model)
    robot = universal_robots.UniversalRobotsUR10E()

    # Read world transforms for every link's visual mesh
    transforms = robot.get_visual_mesh_transforms()
    logger.info(f"Number of links with visual meshes: {len(transforms)}")

    # Convert each 4x4 transformation matrix to a pose [x, y, z, rx, ry, rz] (m, deg)
    for name, T in transforms.items():
        pose = utils.transformation_matrix_to_pose(T, rot_type="deg")
        logger.success(f"{name}: pose [m, deg] = {pose}")


if __name__ == "__main__":
    main()

The Explanation of the Code

python
from telekinesis.synapse.robots.manipulators import universal_robots
from telekinesis.synapse import utils

robot = universal_robots.UniversalRobotsUR10E()

Create the robot. UR10e is used here for illustration; the same pattern works for every supported manipulator. No connect() is required - the call runs against the kinematic model.

python
transforms = robot.get_visual_mesh_transforms()

Returns {link_name: world_T_visual_mesh} as a dict of 4×4 SE(3) homogeneous matrices, each composed from the link frame and the URDF <visual><origin> mesh offset. Links without a usable mesh are omitted.
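
For intuition, the composition is the link's world frame multiplied by its static visual-origin offset. A minimal sketch, assuming Get Link Transforms is exposed as robot.get_link_transforms() (method name inferred from the skill title) and stubbing the per-link URDF offsets with identity matrices:

python
import numpy as np

world_T_link = robot.get_link_transforms()  # assumed API of the Get Link Transforms skill

# Hypothetical per-link <visual><origin> offsets; identity stands in for the parsed URDF values.
link_T_visual = {name: np.eye(4) for name in world_T_link}

# world_T_visual_mesh = world_T_link @ link_T_visual
composed = {name: T @ link_T_visual[name] for name, T in world_T_link.items()}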

python
for name, T in transforms.items():
    pose = utils.transformation_matrix_to_pose(T, rot_type="deg")

Each matrix is full SE(3) - convert to the Synapse 6-vector [x, y, z, rx, ry, rz] (meters, Euler XYZ in degrees) via transformation_matrix_to_pose when you need a pose rather than the raw matrix.
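
As a sanity check you can redo the conversion by hand with SciPy, assuming the documented Euler XYZ convention matches SciPy's extrinsic "xyz" order (an assumption worth verifying against your utils version):

python
import numpy as np
from scipy.spatial.transform import Rotation

x, y, z = T[:3, 3]  # translation, meters
rx, ry, rz = Rotation.from_matrix(T[:3, :3]).as_euler("xyz", degrees=True)
pose = np.array([x, y, z, rx, ry, rz])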

Return Value

Type: dict[str, np.ndarray]
Description: {link_name: world_T_visual_mesh} - each value is the 4×4 homogeneous transform at which the corresponding visual mesh should be rendered. Links without a usable mesh are omitted.
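
A quick structural check of the returned dict, following the contract above:

python
import numpy as np

for name, T in transforms.items():
    assert isinstance(T, np.ndarray) and T.shape == (4, 4)
    assert np.allclose(T[3], [0.0, 0.0, 0.0, 1.0])     # homogeneous bottom row
    R = T[:3, :3]
    assert np.allclose(R @ R.T, np.eye(3), atol=1e-6)  # valid SE(3): orthonormal rotation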

Where to Use the Skill

  • Visualization - Render the robot's URDF visual meshes at the correct world poses in any 3D visualizer. Pair with Get Visual Meshes Data for the raw vertex / triangle arrays (see the sketch after this list).
  • Digital twin - Stream live mesh poses to a remote viewer for monitoring or teleoperation.
  • Synthetic data generation - Drive an offline renderer at known robot configurations.
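
A minimal sketch of the visualization pairing, assuming Get Visual Meshes Data is exposed as robot.get_visual_meshes_data() returning per-link (N, 3) vertex arrays (the method name and return layout are inferred, not confirmed by this page):

python
import numpy as np

transforms = robot.get_visual_mesh_transforms()
meshes = robot.get_visual_meshes_data()  # assumed: {link_name: (N, 3) vertex array}

for name, T in transforms.items():
    vertices = meshes[name]                    # local mesh vertices
    world = vertices @ T[:3, :3].T + T[:3, 3]  # rotate, then translate into the world frame
    # hand `world` (plus the mesh triangles) to your 3D viewer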

When Not to Use the Skill

  • You only need link frames - call Get Link Transforms to skip the mesh-origin composition and keep every frame (including ones without a visual).
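
For contrast, a minimal sketch of the two calls side by side, again assuming Get Link Transforms is exposed as robot.get_link_transforms() (name inferred from the skill title):

python
link_frames = robot.get_link_transforms()         # every link frame, no mesh offset (assumed API)
mesh_frames = robot.get_visual_mesh_transforms()  # only links with a visual mesh, offset applied

# Frames that exist kinematically but have no renderable visual
frames_without_visuals = sorted(set(link_frames) - set(mesh_frames))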