Register Point Clouds Using Fast Global Registration
SUMMARY
Register Point Clouds Using Fast Global Registration aligns two point clouds using FPFH (Fast Point Feature Histogram) features without relying on RANSAC, providing fast and robust initial alignment.
This Skill is useful in industrial, mobile, and humanoid robotics pipelines for initial 6D pose estimation and object alignment. For instance, it can rapidly align parts to CAD models in manufacturing, register large-scale LIDAR scans for mobile robot mapping, or provide initial alignment of objects for humanoid robot manipulation. Fast Global Registration is particularly helpful when speed is critical and rough alignment is sufficient before refinement.
Use this Skill when you want to quickly compute an initial registration for point clouds with feature-based correspondences, prior to fine ICP refinement.
The Skill
from telekinesis import vitreous
import numpy as np
registered_point_cloud = vitreous.register_point_clouds_using_fast_global_registration(
source_point_cloud=source_point_cloud,
target_point_cloud=target_point_cloud,
initial_transformation_matrix=np.eye(4),
normal_radius=0.02,
normal_max_neighbors=20,
feature_radius=0.05,
feature_max_neighbors=30,
max_correspondence_distance=0.015,
)
Performance Note
Current Data Limits: The system currently supports up to 1 million points per request (approximately 16MB of data). We're actively optimizing data transfer performance as part of our beta program, with improvements rolling out regularly to enhance processing speed.
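If a cloud may exceed the limit, it can be checked and reduced client-side before the request. Below is a minimal NumPy-only sketch (the point count limit is taken from the note above; random subsampling is a crude stand-in, and the voxel downsampling Skill mentioned later in this page preserves surface structure more evenly):

```python
import numpy as np

MAX_POINTS = 1_000_000  # per-request limit from the performance note above

def fit_to_limit(positions: np.ndarray, max_points: int = MAX_POINTS) -> np.ndarray:
    """Return positions unchanged if within the limit, else a random subsample.

    Random subsampling is a crude fallback; prefer
    filter_point_cloud_using_voxel_downsampling for real pipelines.
    """
    n = positions.shape[0]
    if n <= max_points:
        return positions
    rng = np.random.default_rng(0)
    idx = rng.choice(n, size=max_points, replace=False)
    return positions[idx]

cloud = np.random.rand(1_200_000, 3).astype(np.float32)
reduced = fit_to_limit(cloud)
```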
Example
Source and Target Point Clouds
Raw sensor input, i.e., the target point cloud (green) and the object model (red)
Registered Point Clouds
Registered source point cloud (red) aligned to target point cloud (green) using Fast Global Registration (FGR) with FPFH features
The Code
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
import numpy as np
# Optional for logging
from loguru import logger
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point clouds
source_filepath = str(DATA_DIR / "point_clouds" / "gusset_model_voxelized.ply")
target_filepath = str(DATA_DIR / "point_clouds" / "gusset_0_preprocessed_voxelized.ply")
source_point_cloud = io.load_point_cloud(filepath=source_filepath)
target_point_cloud = io.load_point_cloud(filepath=target_filepath)
logger.success(f"Loaded source point cloud with {len(source_point_cloud.positions)} points")
logger.success(f"Loaded target point cloud with {len(target_point_cloud.positions)} points")
# Execute operation
registered_point_cloud = vitreous.register_point_clouds_using_fast_global_registration(
source_point_cloud=source_point_cloud,
target_point_cloud=target_point_cloud,
initial_transformation_matrix=np.eye(4),
normal_radius=0.02,
normal_max_neighbors=20,
feature_radius=0.05,
feature_max_neighbors=30,
max_correspondence_distance=0.015,
)
logger.success("Registered point clouds using fast global registration")
Running the Example
Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:
cd telekinesis-examples
python examples/vitreous_examples.py --example register_point_clouds_using_fast_global_registration
The Explanation of the Code
The script begins by importing essential modules for point cloud processing (vitreous), data handling (datatypes, io), file management (pathlib), numerical operations (numpy), and optional logging (loguru).
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
import numpy as np
# Optional for logging
from loguru import logger
Next, the source and target point clouds are loaded from disk. Logging statements confirm that the data has been successfully loaded and indicate the number of points in each cloud, which is important for understanding scale and complexity before registration.
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point clouds
source_filepath = str(DATA_DIR / "point_clouds" / "gusset_model_voxelized.ply")
target_filepath = str(DATA_DIR / "point_clouds" / "gusset_0_preprocessed_voxelized.ply")
source_point_cloud = io.load_point_cloud(filepath=source_filepath)
target_point_cloud = io.load_point_cloud(filepath=target_filepath)
logger.success(f"Loaded source point cloud with {len(source_point_cloud.positions)} points")
logger.success(f"Loaded target point cloud with {len(target_point_cloud.positions)} points")
The main operation applies the register_point_clouds_using_fast_global_registration Skill. This method performs a fast global registration using FPFH (Fast Point Feature Histograms) features without relying on RANSAC. It estimates the best alignment between source and target point clouds by computing normals (normal_radius and normal_max_neighbors) and FPFH features (feature_radius and feature_max_neighbors) within specified local neighborhoods. The max_correspondence_distance parameter limits which point pairs are considered for matching. This Skill is particularly useful in industrial 3D perception pipelines for quickly achieving a coarse initial alignment for tasks such as part positioning, quality inspection, or object pose estimation, before applying finer registration techniques like ICP.
# Execute operation
registered_point_cloud = vitreous.register_point_clouds_using_fast_global_registration(
source_point_cloud=source_point_cloud,
target_point_cloud=target_point_cloud,
initial_transformation_matrix=np.eye(4),
normal_radius=0.02,
normal_max_neighbors=20,
feature_radius=0.05,
feature_max_neighbors=30,
max_correspondence_distance=0.015,
)
logger.success("Registered point clouds using fast global registration")
How to Tune the Parameters
The register_point_clouds_using_fast_global_registration Skill has several parameters that control feature computation and matching:
normal_radius (default: 0.02):
- The search radius for normal estimation
- Units: Uses the same units as your point cloud
- Increase (0.05-0.1) to consider more neighbors, giving smoother normals at the cost of speed
- Decrease (0.01-0.02) to use fewer neighbors; faster but more sensitive to noise
- Should be 2-5x point spacing
- Typical range: 0.01-0.1 in point cloud units
- Use 0.01-0.02 for dense clouds, 0.02-0.05 for medium, 0.05-0.1 for sparse
normal_max_neighbors (default: 20):
- Maximum number of neighbors to use for normal estimation
- Increase (30-50) for more stable normals at the cost of speed
- Decrease (10-20) for faster computation at the risk of noisier normals
- Typical range: 10-50
- Use 10-20 for fast, 20-30 for balanced, 30-50 for quality
feature_radius (default: 0.05):
- The radius for computing FPFH features
- Units: Uses the same units as your point cloud
- Increase (0.1-0.2) to capture larger-scale features at the cost of speed
- Decrease (0.02-0.05) to capture finer features
- Should be 5-10x point spacing
- Typical range: 0.02-0.2 in point cloud units
- Use 0.02-0.05 for fine features, 0.05-0.1 for balanced, 0.1-0.2 for coarse
feature_max_neighbors (default: 30):
- Maximum neighbors for FPFH feature computation
- Increase (50-100) to capture more context at the cost of speed
- Decrease (20-30) for faster computation
- Typical range: 20-100
- Use 20-30 for fast, 30-50 for balanced, 50-100 for detailed
max_correspondence_distance (default: 0.015):
- The maximum distance for feature matching
- Units: Uses the same units as your point cloud
- Increase to allow matching of more distant features, at the risk of incorrect matches
- Decrease to require closer matches
- Should be 2-5x point spacing (note that the default of 0.015 is well below the default feature_radius)
- Typical range: 0.01-0.1 in point cloud units
TIP
Best practice: Start with default values. Adjust normal_radius and feature_radius based on your point cloud density. Use this for initial alignment, then refine with ICP-based methods for higher accuracy.
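The spacing-based rules of thumb above can be automated. The sketch below uses a hypothetical helper (assuming NumPy and SciPy are available; the 3x and 7x multipliers are one choice within the recommended 2-5x and 5-10x ranges, not library defaults) that estimates the median nearest-neighbor spacing and derives starting values:

```python
import numpy as np
from scipy.spatial import cKDTree

def suggest_fgr_parameters(points: np.ndarray) -> dict:
    """Derive starting FGR parameters from the median nearest-neighbor spacing."""
    tree = cKDTree(points)
    # k=2 because the nearest neighbor of every point is the point itself.
    distances, _ = tree.query(points, k=2)
    spacing = float(np.median(distances[:, 1]))
    return {
        "normal_radius": 3.0 * spacing,                # 2-5x point spacing
        "feature_radius": 7.0 * spacing,               # 5-10x point spacing
        "max_correspondence_distance": 3.0 * spacing,  # 2-5x point spacing
    }

# Example: a flat grid with 0.01 spacing suggests normal_radius of about 0.03.
xs, ys = np.meshgrid(np.arange(50) * 0.01, np.arange(50) * 0.01)
grid = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
params = suggest_fgr_parameters(grid)
```

The suggested values can then be passed directly as keyword arguments to the Skill, and adjusted from there if registration quality is poor.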
Where to Use the Skill in a Pipeline
Fast Global Registration is commonly used in the following pipelines:
- Initial coarse alignment
- Feature-based registration
- Multi-view registration
- Object pose estimation
A typical pipeline for initial alignment looks as follows:
# Example pipeline using fast global registration (parameters omitted).
from telekinesis import vitreous
import numpy as np
# 1. Load point clouds
source_point_cloud = vitreous.load_point_cloud(...) # Object model
target_point_cloud = vitreous.load_point_cloud(...) # Scene scan
# 2. Preprocess both point clouds
filtered_source = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
filtered_target = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
downsampled_source = vitreous.filter_point_cloud_using_voxel_downsampling(...)
downsampled_target = vitreous.filter_point_cloud_using_voxel_downsampling(...)
# 3. Initial alignment using Fast Global Registration
registered_cloud = vitreous.register_point_clouds_using_fast_global_registration(
source_point_cloud=downsampled_source,
target_point_cloud=downsampled_target,
initial_transformation_matrix=np.eye(4),
normal_radius=0.02,
normal_max_neighbors=20,
feature_radius=0.05,
feature_max_neighbors=30,
max_correspondence_distance=0.015,
)
# 4. Optional: Refine with ICP for higher accuracy
refined_cloud = vitreous.register_point_clouds_using_point_to_point_icp(
source_point_cloud=downsampled_source,
target_point_cloud=downsampled_target,
initial_transformation_matrix=initial_transform,  # transformation recovered from the FGR step above
max_iterations=50,
max_correspondence_distance=0.02,
)
# 5. Use registered point cloud for pose estimation or analysis

Related skills to build such a pipeline:
- register_point_clouds_using_point_to_point_icp: refine alignment after FGR
- register_point_clouds_using_point_to_plane_icp: alternative refinement method
- filter_point_cloud_using_statistical_outlier_removal: clean input before registration
- filter_point_cloud_using_voxel_downsampling: reduce point cloud density for faster processing
Alternative Skills
| Skill | vs. Fast Global Registration |
|---|---|
| register_point_clouds_using_point_to_point_icp | Use FGR for initial coarse alignment. Use point-to-point ICP for fine refinement or when initial alignment is good. |
When Not to Use the Skill
Do not use fast global registration when:
- Point clouds are already well-aligned (use ICP-based methods for refinement instead)
- You need very high accuracy (use ICP-based methods for fine alignment)
- Point clouds have very different geometries (feature matching may fail)
- Point clouds are very sparse (features may not be reliable)
- You need hard real-time performance (FGR is faster than RANSAC-based registration but still slower than a single ICP refinement)
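One of the failure cases above, clouds too sparse for reliable features, can be screened for before calling the Skill. A hypothetical check (assuming NumPy and SciPy; the threshold of roughly 10 neighbors is an illustrative assumption, not a library constant):

```python
import numpy as np
from scipy.spatial import cKDTree

def fpfh_density_ok(points: np.ndarray, feature_radius: float,
                    min_neighbors: int = 10) -> bool:
    """True if the median point has enough neighbors within feature_radius
    for FPFH histograms to be meaningful (threshold is an assumption)."""
    tree = cKDTree(points)
    neighbor_lists = tree.query_ball_point(points, feature_radius)
    counts = np.array([len(nbrs) - 1 for nbrs in neighbor_lists])  # exclude self
    return float(np.median(counts)) >= min_neighbors

# A dense flat grid (0.01 spacing) easily passes with feature_radius=0.05.
xs, ys = np.meshgrid(np.arange(50) * 0.01, np.arange(50) * 0.01)
grid = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
dense_ok = fpfh_density_ok(grid, feature_radius=0.05)
```

If the check fails, consider a larger feature_radius or fall back to an ICP-based method with a good initial guess.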

