Register Point Clouds Using Point to Point ICP
SUMMARY
Register Point Clouds Using Point to Point ICP aligns two point clouds by iteratively minimizing the distances between corresponding points.
This Skill is useful in industrial, mobile, and humanoid robotics pipelines for fine-grained 6D pose estimation and object alignment. For example, it can refine part positioning in manufacturing, improve frame-to-frame registration in mobile robot mapping, or accurately align objects for humanoid robot manipulation. Point-to-point ICP is straightforward and effective when initial alignment is reasonably close.
Use this Skill when you want to perform precise point cloud registration where high accuracy in point correspondence is required.
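For intuition, each point-to-point ICP iteration matches every source point to its nearest target point and then solves for the rigid transform that minimizes the summed squared distances between those pairs. The sketch below illustrates that single step using only NumPy and SciPy; it is not the Skill's implementation, just the underlying idea, and the helper name icp_step is made up for this illustration.
# Minimal sketch of one point-to-point ICP iteration (illustration only):
# nearest-neighbor correspondences followed by the closed-form (Kabsch/SVD)
# rigid transform that minimizes the summed squared point-to-point distances.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source_xyz: np.ndarray, target_xyz: np.ndarray) -> np.ndarray:
    # 1. Find nearest-neighbor correspondences in the target cloud
    _, indices = cKDTree(target_xyz).query(source_xyz)
    matched_target = target_xyz[indices]
    # 2. Closed-form rigid transform between the matched point sets
    source_centroid = source_xyz.mean(axis=0)
    target_centroid = matched_target.mean(axis=0)
    H = (source_xyz - source_centroid).T @ (matched_target - target_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = target_centroid - R @ source_centroid
    # 3. Return the update as a 4x4 homogeneous transform
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T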
The Skill
from telekinesis import vitreous
import numpy as np
registered_point_cloud = vitreous.register_point_clouds_using_point_to_point_icp(
source_point_cloud=source_point_cloud,
target_point_cloud=target_point_cloud,
initial_transformation_matrix=np.eye(4),
max_iterations=50,
max_correspondence_distance=0.05,
estimate_scaling=False,
)
Performance Note
Current Data Limits: The system currently supports up to 1 million points per request (approximately 16MB of data). We're actively optimizing data transfer performance as part of our beta program, with improvements rolling out regularly to enhance processing speed.
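If a cloud risks exceeding this limit, it is worth checking the point count before calling the Skill. A minimal sketch, reusing the .positions attribute and the variable names from the example further down this page; the downsampling Skill is referenced only by name because its exact parameters are not shown here.
# Sketch: warn if either cloud exceeds the current 1M-point limit.
MAX_POINTS = 1_000_000
for name, cloud in [("source", source_point_cloud), ("target", target_point_cloud)]:
    if len(cloud.positions) > MAX_POINTS:
        logger.warning(
            f"{name} cloud has {len(cloud.positions)} points; consider "
            "filter_point_cloud_using_voxel_downsampling before registration"
        )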
Example
Source and Target Point Clouds
Raw sensor input, i.e., the target point cloud in green and the object model in red
Registered Point Clouds
Registered source point cloud (red) aligned to target point cloud (green) using point-to-point ICP
The Code
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
import numpy as np
# Optional for logging
from loguru import logger
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point clouds
source_filepath = str(DATA_DIR / "point_clouds" / "gusset_0_preprocessed.ply")
target_filepath = str(DATA_DIR / "point_clouds" / "gusset_0_icp_alignment.ply")
source_point_cloud = io.load_point_cloud(filepath=source_filepath)
target_point_cloud = io.load_point_cloud(filepath=target_filepath)
logger.success(f"Loaded source point cloud with {len(source_point_cloud.positions)} points")
logger.success(f"Loaded target point cloud with {len(target_point_cloud.positions)} points")
# Execute operation
registered_point_cloud = vitreous.register_point_clouds_using_point_to_point_icp(
source_point_cloud=source_point_cloud,
target_point_cloud=target_point_cloud,
initial_transformation_matrix=np.eye(4),
max_iterations=50,
max_correspondence_distance=0.05,
estimate_scaling=False,
)
logger.success("Registered point clouds using point-to-point ICP")Running the Example
Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:
cd telekinesis-examples
python examples/vitreous_examples.py --example register_point_clouds_using_point_to_point_icp
The Explanation of the Code
The script begins by importing essential modules for point cloud processing (vitreous), data handling (datatypes, io), file system management (pathlib), numerical computations (numpy), and optional logging (loguru) to track progress.
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
import numpy as np
# Optional for logging
from loguru import logger
Next, the source and target point clouds are loaded from disk. Logging confirms successful loading and shows the number of points in each cloud, helping users understand the scale and density of the data before performing registration.
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point clouds
source_filepath = str(DATA_DIR / "point_clouds" / "gusset_0_preprocessed.ply")
target_filepath = str(DATA_DIR / "point_clouds" / "gusset_0_icp_alignment.ply")
source_point_cloud = io.load_point_cloud(filepath=source_filepath)
target_point_cloud = io.load_point_cloud(filepath=target_filepath)
logger.success(f"Loaded source point cloud with {len(source_point_cloud.positions)} points")
logger.success(f"Loaded target point cloud with {len(target_point_cloud.positions)} points")The main operation applies the register_point_clouds_using_point_to_point_icp Skill. This method performs point-to-point ICP (Iterative Closest Point) registration, refining the alignment between source and target point clouds by minimizing the Euclidean distances between corresponding points. Key parameters include:
- max_iterations: controls the maximum number of ICP iterations; higher values can improve alignment at the cost of increased computation time.
- max_correspondence_distance: sets the maximum distance between point pairs considered for matching; smaller values improve precision but may reject valid correspondences if the initial misalignment is large.
- estimate_scaling: toggles whether the algorithm estimates scaling between point clouds; disabling it assumes uniform scale.
- initial_transformation_matrix: provides an initial guess for the alignment, which can accelerate convergence if the point clouds are roughly pre-aligned.
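If the clouds are known to be roughly offset rather than already aligned, the initial guess can encode that offset instead of the identity used below. A minimal sketch with placeholder rotation and translation values (not taken from this example):
# Sketch: build a non-identity initial guess from a rough yaw rotation and
# translation (placeholder values, in point-cloud units).
theta = np.deg2rad(15.0)                   # rough yaw estimate
initial_guess = np.eye(4)
initial_guess[:3, :3] = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
initial_guess[:3, 3] = [0.10, -0.02, 0.0]  # rough translation estimate
# Pass initial_guess as initial_transformation_matrix instead of np.eye(4).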
# Execute operation
registered_point_cloud = vitreous.register_point_clouds_using_point_to_point_icp(
source_point_cloud=source_point_cloud,
target_point_cloud=target_point_cloud,
initial_transformation_matrix=np.eye(4),
max_iterations=50,
max_correspondence_distance=0.05,
estimate_scaling=False,
)
logger.success("Registered point clouds using point-to-point ICP")This Skill is particularly useful in industrial 3D perception and robotic pipelines, such as part inspection, object pose estimation, and robotic manipulation. Users can tune the parameters based on point cloud density, initial misalignment, and desired registration accuracy.
How to Tune the Parameters
The register_point_clouds_using_point_to_point_icp Skill has three main parameters that control the ICP refinement:
max_iterations (default: 50):
- The maximum number of ICP iterations
- Increase (50-200) to allow more refinement, at the cost of longer run times
- Decrease (10-30) for faster computation
- Typical range: 10-200
- Use 10-30 for fast, 30-50 for balanced, 50-200 for high precision
max_correspondence_distance (default: 0.05):
- The maximum distance to consider points as correspondences
- Units: Uses the same units as your point cloud
- Increase to allow matching more distant points but may include incorrect matches
- Decrease to require closer matches
- Should be 2-5x the typical point spacing
- Typical range: 0.01-0.1 in point cloud units
- Use 0.01-0.02 for dense clouds, 0.02-0.05 for medium, 0.05-0.1 for sparse
estimate_scaling (default: False):
- Whether to estimate and apply uniform scaling between point clouds
- True: allows the algorithm to find scale differences
- False: assumes the same scale
- Set to True if the point clouds may have different scales, False for same-scale alignment
TIP
Best practice: Start with default values. Adjust max_correspondence_distance based on your point cloud density and initial alignment quality. Use register_point_clouds_using_fast_global_registration first if initial alignment is poor.
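Because the recommended max_correspondence_distance is 2-5x the typical point spacing, that spacing can be measured directly from the data rather than guessed. A minimal sketch, assuming .positions converts to an (N, 3) NumPy array:
# Sketch: estimate typical point spacing as the median nearest-neighbor
# distance, then derive max_correspondence_distance from it (2-5x spacing).
import numpy as np
from scipy.spatial import cKDTree

target_xyz = np.asarray(target_point_cloud.positions)
# k=2 because each point's nearest neighbor in its own cloud is itself
nn_distances, _ = cKDTree(target_xyz).query(target_xyz, k=2)
typical_spacing = np.median(nn_distances[:, 1])
max_correspondence_distance = 3.0 * typical_spacing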
Where to Use the Skill in a Pipeline
Point-to-point ICP is commonly used in the following pipelines:
- Fine alignment refinement
- Precise pose estimation
- Object registration
- Quality inspection
A typical pipeline for fine alignment looks as follows:
# Example pipeline using point-to-point ICP (parameters omitted).
from telekinesis import vitreous
import numpy as np
# 1. Load point clouds
source_point_cloud = vitreous.load_point_cloud(...) # Object model
target_point_cloud = vitreous.load_point_cloud(...) # Scene scan
# 2. Preprocess both point clouds
filtered_source = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
filtered_target = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
downsampled_source = vitreous.filter_point_cloud_using_voxel_downsampling(...)
downsampled_target = vitreous.filter_point_cloud_using_voxel_downsampling(...)
# 3. Optional: Initial alignment using Fast Global Registration
initial_aligned = vitreous.register_point_clouds_using_fast_global_registration(...)
# 4. Fine alignment using point-to-point ICP
registered_cloud = vitreous.register_point_clouds_using_point_to_point_icp(
source_point_cloud=initial_aligned,  # or downsampled_source if step 3 is skipped
target_point_cloud=downsampled_target,
initial_transformation_matrix=np.eye(4),
max_iterations=50,
max_correspondence_distance=0.05,
estimate_scaling=False,
)
# 5. Use registered point cloud for pose estimation or analysis
Related skills to build such a pipeline:
- register_point_clouds_using_fast_global_registration: initial alignment before fine ICP
- register_point_clouds_using_point_to_plane_icp: alternative ICP method with normals
- register_point_clouds_using_centroid_translation: very fast coarse alignment
- filter_point_cloud_using_statistical_outlier_removal: clean input before registration
- filter_point_cloud_using_voxel_downsampling: reduce point cloud density for faster processing
Alternative Skills
| Skill | vs. Point-to-Point ICP |
|---|---|
| register_point_clouds_using_point_to_plane_icp | Use point-to-plane ICP when normals are available for more accurate registration. Use point-to-point ICP when normals are not available or unreliable. |
| register_point_clouds_using_fast_global_registration | Use FGR for initial coarse alignment. Use point-to-point ICP for fine refinement or when initial alignment is good. |
When Not to Use the Skill
Do not use point-to-point ICP when:
- Initial alignment is very poor (use register_point_clouds_using_fast_global_registration first)
- Point clouds have well-defined normals (use register_point_clouds_using_point_to_plane_icp for better accuracy)
- Point clouds are very different in scale (set estimate_scaling=True or pre-scale the clouds; a quick scale check is sketched after this list)
- Point clouds are very sparse (ICP may not converge well)
- You need very fast registration (use register_point_clouds_using_centroid_translation for coarse alignment)
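As a quick check for the scale caveat above, the relative scale of two clouds can be estimated from their bounding-box diagonals before deciding on estimate_scaling. A minimal sketch, assuming .positions converts to an (N, 3) NumPy array and using a placeholder threshold:
# Sketch: rough scale check via bounding-box diagonals. A ratio far from 1
# suggests enabling estimate_scaling=True or pre-scaling one of the clouds.
import numpy as np

def bbox_diagonal(cloud) -> float:
    xyz = np.asarray(cloud.positions)
    return float(np.linalg.norm(xyz.max(axis=0) - xyz.min(axis=0)))

scale_ratio = bbox_diagonal(source_point_cloud) / bbox_diagonal(target_point_cloud)
if not 0.9 <= scale_ratio <= 1.1:  # placeholder threshold
    print(f"Scale ratio {scale_ratio:.2f}; consider estimate_scaling=True")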

