
Apply Transform to Point Cloud

SUMMARY

Apply Transform to Point Cloud applies a rigid or affine transformation to a point cloud, updating the position (and optionally orientation) of all points using a transformation matrix.

This Skill is commonly used after registration, pose estimation, or motion execution, where a point cloud needs to be aligned to a reference frame, another scan, or a robot coordinate system. It is also useful when chaining multiple perception steps that operate in different coordinate frames.

Use this Skill when you need to move, rotate, or align a point cloud consistently within a pipeline. Applying transforms explicitly helps keep coordinate frames clear and makes downstream processing, such as merging, clustering, or planning, more reliable and predictable.

The Skill

python
from telekinesis import vitreous
import numpy as np

transformed_point_cloud = vitreous.apply_transform_to_point_cloud(
    point_cloud=point_cloud,
    transformation_matrix=np.eye(4),
    modify_inplace=False,
)

API Reference

Performance Note

Current Data Limits: The system supports up to 1 million points per request (approximately 16 MB of data). Data transfer performance is being actively optimized as part of the beta program, with improvements rolling out regularly.
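If a scan exceeds the per-request limit, one option is to split it into batches and transform each batch separately. The sketch below is a hypothetical NumPy helper (the `iter_chunks` name and chunking approach are assumptions, not part of the Telekinesis API):

```python
import numpy as np

MAX_POINTS = 1_000_000  # current per-request limit noted above

def iter_chunks(positions: np.ndarray, chunk_size: int = MAX_POINTS):
    """Yield slices of at most chunk_size points each."""
    for start in range(0, len(positions), chunk_size):
        yield positions[start:start + chunk_size]

# A 2.5-million-point cloud splits into three requests.
positions = np.zeros((2_500_000, 3))
sizes = [len(chunk) for chunk in iter_chunks(positions)]
print(sizes)  # [1000000, 1000000, 500000]
```

Since a rigid or affine transform acts on each point independently, transforming chunks separately and concatenating the results is equivalent to transforming the whole cloud at once.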

Example

Raw Pointcloud

Unprocessed point cloud. The origin of the point cloud corresponds with the origin of the scene.

Transformed Pointcloud

The origin of the point cloud is transformed according to the transformation matrix.

The Code

python
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib

# Optional for logging
from loguru import logger

DATA_DIR = pathlib.Path("path/to/telekinesis-data")

# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "plastic_centered.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")

# Execute operation
transformed_point_cloud = vitreous.apply_transform_to_point_cloud(
    point_cloud=point_cloud,
    transformation_matrix=[[1, 0, 0, 15], [0, 1, 0, 15], [0, 0, 1, 5], [0, 0, 0, 1]],
    modify_inplace=False,
)
logger.success(f"Applied transform to {len(transformed_point_cloud.positions)} points")

The Explanation of the Code

This example demonstrates how to use the apply_transform_to_point_cloud Skill to move, rotate, or scale a 3D point cloud in space. It starts by importing the necessary modules from Telekinesis and Python standard libraries, along with the loguru logger for optional runtime feedback.

python
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib

# Optional for logging
from loguru import logger

The code then sets the data directory and loads a single point cloud from a .ply file using the io.load_point_cloud function. The logger provides immediate confirmation of how many points were loaded, helping you verify the input before applying any transformations.

python
DATA_DIR = pathlib.Path("path/to/telekinesis-data")

# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "plastic_centered.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")

The main operation uses the apply_transform_to_point_cloud Skill. By passing a 4x4 transformation matrix, this Skill applies translation, rotation, or scaling to the point cloud. Setting modify_inplace=False ensures that the original point cloud remains unchanged and a new transformed point cloud is returned. The logger confirms the number of points in the transformed cloud, giving visual feedback that the operation completed successfully.

python
# Execute operation
transformed_point_cloud = vitreous.apply_transform_to_point_cloud(
    point_cloud=point_cloud,
    transformation_matrix=[[1, 0, 0, 15], [0, 1, 0, 15], [0, 0, 1, 5], [0, 0, 0, 1]],
    modify_inplace=False,
)
logger.success(f"Applied transform to {len(transformed_point_cloud.positions)} points")
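To make the effect of this particular matrix concrete: its rotation block is the identity and its last column is (15, 15, 5), so it translates every point by that offset. A minimal NumPy sketch of what such a matrix does to a single point in homogeneous coordinates (this illustrates the math, not the internal implementation of the Skill):

```python
import numpy as np

# The translation matrix used in the example above.
T = np.array([
    [1, 0, 0, 15],
    [0, 1, 0, 15],
    [0, 0, 1, 5],
    [0, 0, 0, 1],
], dtype=float)

# A sample point expressed in homogeneous coordinates (x, y, z, 1).
p = np.array([2.0, 3.0, 4.0, 1.0])

p_transformed = T @ p
print(p_transformed[:3])  # [17. 18.  9.]
```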

This workflow focuses on the Skill itself: it enables precise spatial manipulation of point clouds, which is essential for tasks like aligning objects in a scene, preparing data for registration, or creating synthetic configurations for testing and simulation.

Running the Example

Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:

bash
cd telekinesis-examples
python examples/vitreous_examples.py --example apply_transform_to_point_cloud

How to Tune the Parameters

The apply_transform_to_point_cloud Skill takes two parameters that control the operation behavior:

transformation_matrix (required):

  • The 4x4 transformation matrix to apply to the point cloud
  • Can be a NumPy array, list of lists, or Mat4X4 type
  • The matrix should be in the format:
    [[r11, r12, r13, tx],
     [r21, r22, r23, ty],
     [r31, r32, r33, tz],
     [0,   0,   0,   1 ]]
    where r* are rotation/scaling components and t* are translation components
  • Use np.eye(4) for identity (no transformation)
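In practice you often build the matrix from a rotation and a translation rather than writing it out by hand. A sketch with a hypothetical `make_transform` helper (the helper name is an assumption; only the 4x4 layout comes from the format above):

```python
import numpy as np

def make_transform(theta_z: float, translation) -> np.ndarray:
    """Build a 4x4 matrix from a rotation about Z and a translation vector."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0],
                 [s,  c, 0],
                 [0,  0, 1]]
    T[:3, 3] = translation
    return T

# A 90-degree Z rotation sends (1, 0, 0) to (0, 1, 0);
# the translation then shifts the result by (1, 0, 0).
T = make_transform(np.pi / 2, [1.0, 0.0, 0.0])
p = np.array([1.0, 0.0, 0.0, 1.0])
print((T @ p)[:3])  # approximately [1. 1. 0.]
```

The resulting array can be passed directly as transformation_matrix, since the parameter accepts NumPy arrays.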

modify_inplace (default: False):

  • Whether to modify the input point cloud in place or return a new point cloud
  • False: Returns a new point cloud, original remains unchanged (default, recommended)
  • True: Modifies the input point cloud directly (faster but destructive)

TIP

Best practice: Keep modify_inplace=False (default) to preserve the original point cloud. This allows you to compare before/after transformations and avoids unintended side effects.
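The copy semantics can be pictured with plain NumPy: the transform is computed into a new array while the original positions stay untouched. This is an analogy for what modify_inplace=False implies, not the Skill's internal implementation:

```python
import numpy as np

positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 2.0, 3.0]])

# A pure translation by (10, 0, 0).
T = np.eye(4)
T[:3, 3] = [10, 0, 0]

# Copy semantics: write the transformed points into a new array.
homogeneous = np.hstack([positions, np.ones((len(positions), 1))])
new_positions = (homogeneous @ T.T)[:, :3]

print(positions[0])      # [0. 0. 0.] -- original untouched
print(new_positions[0])  # [10.  0.  0.]
```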

Where to Use the Skill in a Pipeline

Transform application is commonly used in the following pipelines:

  • Coordinate frame alignment
  • Multi-view registration
  • Pose estimation and object alignment
  • Robot coordinate system conversion

A typical pipeline for coordinate frame alignment looks as follows:

python
# Example pipeline using transform application (parameters omitted).

from telekinesis import vitreous
import numpy as np

# 1. Load point cloud
point_cloud = vitreous.load_point_cloud(...)

# 2. Preprocess point cloud
filtered_cloud = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)

# 3. Compute or define transformation matrix
# (e.g., from pose estimation, registration, or robot calibration)
transform_matrix = np.array([
    [1, 0, 0, 0.1],
    [0, 1, 0, 0.2],
    [0, 0, 1, 0.3],
    [0, 0, 0, 1]
])

# 4. Apply transform to align point cloud
aligned_cloud = vitreous.apply_transform_to_point_cloud(
    point_cloud=filtered_cloud,
    transformation_matrix=transform_matrix,
    modify_inplace=False,
)

# 5. Optional: Combine with other aligned point clouds
combined_cloud = vitreous.add_point_clouds(
    point_cloud1=aligned_cloud,
    point_cloud2=other_aligned_cloud,
)

# 6. Process the aligned point cloud
clusters = vitreous.cluster_point_cloud_using_dbscan(...)

Related skills to build such a pipeline:

  • add_point_clouds: combine transformed point clouds
  • filter_point_cloud_using_statistical_outlier_removal: clean input before transformation
  • cluster_point_cloud_using_dbscan: process aligned point clouds
  • calculate_oriented_bounding_box: compute bounding box after alignment

Alternative Skills

There are no direct alternative skills for applying transforms. This Skill is the primary method for transforming point clouds in 3D space.

When Not to Use the Skill

Do not use apply_transform_to_point_cloud when:

  • You need to transform only a subset of points (filter or segment first, then transform the subset)
  • You need non-rigid transformations (this Skill applies rigid/affine transforms only)

Also keep the following in mind:

  • An invalid transformation matrix (anything other than a well-formed 4x4) may cause errors or unexpected results
  • To preserve the original point cloud, keep modify_inplace=False (the default)
  • An empty point cloud is accepted: the operation succeeds but returns an empty point cloud
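The "subset first" case above can be sketched in plain NumPy: select the points of interest with a mask, transform only those, and write them back. This illustrates the pattern, not a Telekinesis API (the distance-based mask is an arbitrary example):

```python
import numpy as np

positions = np.array([
    [0.0, 0.0, 0.0],
    [5.0, 0.0, 0.0],
    [0.0, 5.0, 0.0],
])

# Segment first: select only the points near the origin.
mask = np.linalg.norm(positions, axis=1) < 1.0

# Then translate just the selected subset by (0, 0, 2).
T = np.eye(4)
T[:3, 3] = [0, 0, 2]
subset = np.hstack([positions[mask], np.ones((mask.sum(), 1))])
positions[mask] = (subset @ T.T)[:, :3]

print(positions[0])  # [0. 0. 2.]
print(positions[1])  # [5. 0. 0.] -- untouched
```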

WARNING

Important: Ensure your transformation matrix is valid (4x4, with bottom row [0, 0, 0, 1]). Invalid matrices may cause errors or unexpected results.