Register Point Clouds Using Cuboid Translation Sampler ICP

SUMMARY

Register Point Clouds Using Cuboid Translation Sampler ICP aligns two point clouds by sampling initial translations on a regular 3D grid (cuboid) and running ICP from each sample to find the best match.

This Skill is useful in industrial, mobile, and humanoid robotics pipelines for 6D pose estimation and object alignment when the initial position of the object is uncertain. For example, it can align scanned parts to CAD models on a factory floor, register consecutive LIDAR frames for mobile robot localization, or align objects detected by a humanoid robot for manipulation tasks. The cuboid sampling ensures robustness against unknown starting positions.

Use this Skill when you want to perform ICP registration with multiple initial translations to improve convergence and accuracy in complex alignment scenarios.

The Skill

python
from telekinesis import vitreous
import numpy as np

registered_point_cloud = vitreous.register_point_clouds_using_cuboid_translation_sampler_icp(
    source_point_cloud=source_point_cloud,
    target_point_cloud=target_point_cloud,
    initial_transformation_matrix=np.eye(4),
    step_size=0.001,
    x_min=-0.01,
    x_max=0.01,
    y_min=-0.01,
    y_max=0.01,
    z_min=-0.01,
    z_max=0.01,
    early_stop_fitness_score=0.5,
    min_fitness_score=0.9,
    max_iterations=50,
    max_correspondence_distance=0.02,
    estimate_scaling=False,
)

API Reference

Performance Note

Current Data Limits: The system currently supports up to 1 million points per request (approximately 16MB of data). We're actively optimizing data transfer performance as part of our beta program, with improvements rolling out regularly to enhance processing speed.
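
If you are unsure whether a cloud fits within this limit, a quick pre-flight check helps. The sketch below assumes a loaded point cloud with an N x 3 positions array, as in the example code further down this page, and names filter_point_cloud_using_voxel_downsampling only as the natural follow-up for oversized clouds.

python
MAX_POINTS = 1_000_000  # current per-request limit

num_points = len(source_point_cloud.positions)
if num_points > MAX_POINTS:
    raise ValueError(
        f"Point cloud has {num_points} points; reduce it first, "
        "e.g. with filter_point_cloud_using_voxel_downsampling."
    )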

Example

Source and Target Point Clouds

Raw sensor input, i.e. the target point cloud (green) and the source object model (red)

Registered Point Clouds

Registered source point cloud (red) aligned to target point cloud (green) using cuboid translation sampler ICP with 3D grid translation search

The Code

python
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
import numpy as np

# Optional for logging
from loguru import logger

DATA_DIR = pathlib.Path("path/to/telekinesis-data")

# Load point clouds
source_filepath = str(DATA_DIR / "point_clouds" / "weld_clamp_model_shifted.ply")
target_filepath = str(DATA_DIR / "point_clouds" / "weld_clamp_cluster_0_centroid_registered.ply")
source_point_cloud = io.load_point_cloud(filepath=source_filepath)
target_point_cloud = io.load_point_cloud(filepath=target_filepath)
logger.success(f"Loaded source point cloud with {len(source_point_cloud.positions)} points")
logger.success(f"Loaded target point cloud with {len(target_point_cloud.positions)} points")

# Execute operation
registered_point_cloud = vitreous.register_point_clouds_using_cuboid_translation_sampler_icp(
  source_point_cloud=source_point_cloud,
  target_point_cloud=target_point_cloud,
  initial_transformation_matrix=np.eye(4),
  step_size=0.001,
  x_min=-0.01,
  x_max=0.01,
  y_min=-0.01,
  y_max=0.01,
  z_min=-0.01,
  z_max=0.01,
  early_stop_fitness_score=0.5,
  min_fitness_score=0.9,
  max_iterations=50,
  max_correspondence_distance=0.02,
  estimate_scaling=False,
)
logger.success(f"Registered point clouds using cuboid translation sampler ICP")

Running the Example

Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:

bash
cd telekinesis-examples
python examples/vitreous_examples.py --example register_point_clouds_using_cuboid_translation_sampler_icp

The Explanation of the Code

The script starts by importing the required modules: vitreous for 3D operations, datatypes and io for handling point cloud data, pathlib for managing file paths, numpy for numerical operations, and loguru for optional logging.

python
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
import numpy as np

# Optional for logging
from loguru import logger

Next, two point clouds are loaded: a source and a target. Logging confirms the number of points in each, ensuring the data is correctly loaded for registration.

python

DATA_DIR = pathlib.Path("path/to/telekinesis-data")

# Load point clouds
source_filepath = str(DATA_DIR / "point_clouds" / "weld_clamp_model_shifted.ply")
target_filepath = str(DATA_DIR / "point_clouds" / "weld_clamp_cluster_0_centroid_registered.ply")
source_point_cloud = io.load_point_cloud(filepath=source_filepath)
target_point_cloud = io.load_point_cloud(filepath=target_filepath)
logger.success(f"Loaded source point cloud with {len(source_point_cloud.positions)} points")
logger.success(f"Loaded target point cloud with {len(target_point_cloud.positions)} points")

The main operation applies the register_point_clouds_using_cuboid_translation_sampler_icp Skill. This method performs ICP (Iterative Closest Point) registration while sampling initial translations uniformly in a small cuboid region. The algorithm iteratively searches for the best alignment between the source and target point clouds. Parameters like step_size, max_iterations, and max_correspondence_distance control the precision and convergence. This Skill is particularly useful in industrial 3D perception pipelines for precise part alignment, quality inspection, or object pose estimation.

python

# Execute operation
registered_point_cloud = vitreous.register_point_clouds_using_cuboid_translation_sampler_icp(
  source_point_cloud=source_point_cloud,
  target_point_cloud=target_point_cloud,
  initial_transformation_matrix=np.eye(4),
  step_size=0.001,
  x_min=-0.01,
  x_max=0.01,
  y_min=-0.01,
  y_max=0.01,
  z_min=-0.01,
  z_max=0.01,
  early_stop_fitness_score=0.5,
  min_fitness_score=0.9,
  max_iterations=50,
  max_correspondence_distance=0.02,
  estimate_scaling=False,
)
logger.success(f"Registered point clouds using cuboid translation sampler ICP")

How to Tune the Parameters

The register_point_clouds_using_cuboid_translation_sampler_icp Skill has many parameters that control the translation search and ICP refinement:

step_size (default: 0.001):

  • The step size between sampled translations
  • Units: Uses the same units as your point cloud (e.g., if point cloud is in meters, step_size is in meters; if in millimeters, step_size is in millimeters)
  • Increase (0.005-0.01) to create a coarser search grid (fewer samples, faster) but may miss the optimal alignment
  • Decrease (0.0005-0.001) to create a finer grid (more samples, slower) for more thorough search
  • Should be set to 0.5-2x the expected alignment accuracy
  • Typical range: 0.0005-0.01 in point cloud units
  • Use 0.0005-0.001 for precise search, 0.001-0.005 for balanced, 0.005-0.01 for coarse

x_min, x_max, y_min, y_max, z_min, z_max (defaults: -0.01 to 0.01):

  • Define the bounds of the search cuboid along each axis
  • Units: Uses the same units as your point cloud
  • Increase range to search over a larger translation space
  • Decrease range to focus search in a smaller region (faster)
  • Typical range: -0.1 to 0.1 in point cloud units
  • Should encompass the expected translation uncertainty
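
Together, step_size and the cuboid bounds determine how many translation samples (and therefore ICP runs) the search performs, so it is worth estimating the count before widening the bounds or shrinking the step. A small sketch, assuming the grid includes both bounds on each axis:

python
def count_translation_samples(step_size, x_min, x_max, y_min, y_max, z_min, z_max):
    # Grid points per axis, inclusive of both bounds.
    per_axis = [int(round((hi - lo) / step_size)) + 1
                for lo, hi in ((x_min, x_max), (y_min, y_max), (z_min, z_max))]
    return per_axis[0] * per_axis[1] * per_axis[2]

# With the default parameters: 21 samples per axis, 9,261 candidate translations.
print(count_translation_samples(0.001, -0.01, 0.01, -0.01, 0.01, -0.01, 0.01))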

early_stop_fitness_score (default: 0.5):

  • Fitness score threshold (0-1) for early stopping during search
  • Increase (0.5-0.7) to require better alignments before stopping (slower but potentially better results)
  • Decrease (0.3-0.5) to stop earlier (faster) but may accept suboptimal alignments
  • Should be set higher than min_fitness_score to ensure early-stopped results are acceptable
  • Typical range: 0.3-0.7

min_fitness_score (default: 0.9):

  • Minimum fitness score (0-1) required to accept the final result
  • Increase (0.95-0.99) to require better alignment quality
  • Decrease (0.7-0.85) to accept lower quality alignments
  • Typical range: 0.7-0.99
  • Use 0.7-0.85 for lenient, 0.85-0.95 for balanced, 0.95-0.99 for strict

max_iterations (default: 50):

  • The maximum number of ICP iterations per translation sample
  • Increase (50-200) to allow more refinement at the cost of speed
  • Decrease (10-30) for faster computation
  • Typical range: 10-200
  • Use 10-30 for fast, 30-50 for balanced, 50-200 for high precision

max_correspondence_distance (default: 0.02):

  • The maximum distance to consider points as correspondences
  • Units: Uses the same units as your point cloud
  • Increase to allow matching more distant points but may include incorrect matches
  • Decrease to require closer matches
  • Should be 2-5x point spacing
  • Typical range: 0.01-0.1 in point cloud units
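
Since max_correspondence_distance should be roughly 2-5x the point spacing, the median nearest-neighbor distance in the target cloud gives a quick estimate. A sketch using SciPy, assuming positions is an N x 3 NumPy array as in the loading example above:

python
import numpy as np
from scipy.spatial import cKDTree

positions = np.asarray(target_point_cloud.positions)
tree = cKDTree(positions)
# k=2: the closest point to each point other than itself.
distances, _ = tree.query(positions, k=2)
median_spacing = float(np.median(distances[:, 1]))
print(f"Median point spacing: {median_spacing:.5f} (point cloud units)")
print(f"Suggested max_correspondence_distance: {2 * median_spacing:.5f} to {5 * median_spacing:.5f}")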

estimate_scaling (default: False):

  • Whether to estimate and apply uniform scaling between point clouds
  • True: Allows the algorithm to find scale differences
  • False: Assumes same scale
  • Set to True if point clouds may have different scales, False for same-scale alignment
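
If you are unsure whether the clouds share a scale, comparing bounding-box diagonals gives a rough indication for clouds that cover roughly the same object. A minimal sketch using the clouds loaded in the example above, assuming positions is an N x 3 array:

python
import numpy as np

def bounding_box_diagonal(positions):
    p = np.asarray(positions)
    return float(np.linalg.norm(p.max(axis=0) - p.min(axis=0)))

ratio = (bounding_box_diagonal(source_point_cloud.positions)
         / bounding_box_diagonal(target_point_cloud.positions))
# A ratio far from 1.0 hints at a scale difference; consider estimate_scaling=True.
print(f"Bounding-box diagonal ratio (source/target): {ratio:.3f}")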

TIP

Best practice: Start with default values and adjust the cuboid bounds (x_min/max, etc.) based on your expected translation uncertainty. Adjust step_size based on the precision you need. Use register_point_clouds_using_centroid_translation first for coarse alignment, then use this for fine refinement.

Where to Use the Skill in a Pipeline

Cuboid translation sampler ICP is commonly used in the following pipelines:

  • 6D pose estimation with uncertain translation
  • Object alignment and registration
  • Part inspection and quality control
  • Multi-view registration

A typical pipeline for pose estimation looks as follows:

python
# Example pipeline using cuboid translation sampler ICP (parameters omitted).

from telekinesis import vitreous
import numpy as np

# 1. Load point clouds
source_point_cloud = vitreous.load_point_cloud(...)  # Object model
target_point_cloud = vitreous.load_point_cloud(...)  # Scene scan

# 2. Preprocess both point clouds
filtered_source = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
filtered_target = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
downsampled_source = vitreous.filter_point_cloud_using_voxel_downsampling(...)
downsampled_target = vitreous.filter_point_cloud_using_voxel_downsampling(...)

# 3. Optional: Coarse alignment using centroid translation
coarse_aligned = vitreous.register_point_clouds_using_centroid_translation(
    source_point_cloud=downsampled_source,
    target_point_cloud=downsampled_target,
)

# 4. Fine alignment using cuboid translation sampler ICP
registered_cloud = vitreous.register_point_clouds_using_cuboid_translation_sampler_icp(
    source_point_cloud=downsampled_source,
    target_point_cloud=downsampled_target,
    initial_transformation_matrix=np.eye(4),
    step_size=0.001,
    x_min=-0.01,
    x_max=0.01,
    y_min=-0.01,
    y_max=0.01,
    z_min=-0.01,
    z_max=0.01,
    early_stop_fitness_score=0.5,
    min_fitness_score=0.9,
    max_iterations=50,
    max_correspondence_distance=0.02,
    estimate_scaling=False,
)

# 5. Use registered point cloud for pose estimation or analysis

Related skills to build such a pipeline:

  • register_point_clouds_using_centroid_translation: coarse alignment before fine registration
  • register_point_clouds_using_fast_global_registration: alternative for initial alignment
  • register_point_clouds_using_point_to_point_icp: simpler ICP without translation sampling
  • filter_point_cloud_using_statistical_outlier_removal: clean input before registration
  • filter_point_cloud_using_voxel_downsampling: reduce point cloud density for faster processing

Alternative Skills

Skill | vs. Cuboid Translation Sampler ICP
register_point_clouds_using_point_to_point_icp | Use point-to-point ICP when initial alignment is good. Use cuboid translation sampler ICP when translation is uncertain.
register_point_clouds_using_rotation_sampler_icp | Use rotation sampler when rotation is uncertain. Use cuboid translation sampler when translation is uncertain.

When Not to Use the Skill

Do not use cuboid translation sampler ICP when:

  • Initial alignment is already good (use register_point_clouds_using_point_to_point_icp instead)
  • Only rotation is uncertain (use register_point_clouds_using_rotation_sampler_icp instead)
  • You need fast registration (this method is slower due to multiple ICP runs)
  • Translation uncertainty is very large (the search space may be too large to sample effectively)
  • Point clouds are very different in scale (set estimate_scaling=True or pre-scale the clouds)