
Filter Point Cloud Using Voxel Downsampling

SUMMARY

Filter Point Cloud Using Voxel Downsampling reduces the density of a point cloud by grouping nearby points into a regular 3D grid (voxels) and replacing each group with a single representative point.

This Skill is commonly used as an early preprocessing step that reduces the size of a point cloud so it is faster to work with. By enforcing a more uniform spatial distribution, voxel downsampling also speeds up many downstream algorithms such as registration, clustering, and normal estimation.

Use this Skill when you want to reduce data size while preserving overall geometry. The voxel size controls the trade-off: smaller voxels keep more detail, while larger voxels favor speed and simplicity.

The Skill

python
from telekinesis import vitreous

filtered_point_cloud = vitreous.filter_point_cloud_using_voxel_downsampling(
    point_cloud=point_cloud,
    voxel_size=0.01,
)

API Reference

Performance Note

Current Data Limits: The system currently supports up to 1 million points per request (approximately 16MB of data). We're actively optimizing data transfer performance as part of our beta program, with improvements rolling out regularly to enhance processing speed.
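
If you are unsure whether a cloud fits this limit, a quick size check before calling the Skill can help. The sketch below assumes a `point_cloud` loaded as in The Code section further down, with the `positions` attribute shown there:

python
# Check the point count against the current limit before sending the request.
MAX_POINTS = 1_000_000  # limit from the note above

num_points = len(point_cloud.positions)
if num_points > MAX_POINTS:
    print(f"{num_points} points exceeds the {MAX_POINTS} point limit; "
          "crop the cloud or start with a coarser voxel_size.")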

Example

Raw Sensor Input

Unprocessed point cloud captured directly from the sensor. Shows full resolution, natural noise, and uneven sampling density.

Mild Downsampling

Light voxel-based reduction that removes redundant samples while preserving nearly all fine geometric detail.
Parameters: voxel_size = 0.005 (scene units).

Moderate Downsampling

Balanced simplification that reduces noise and point density while maintaining overall shape and structure.
Parameters: voxel_size = 0.01 (scene units).

Aggressive Downsampling

Heavy simplification that merges fine structures and retains only coarse geometry, ideal for performance-oriented processing.
Parameters: voxel_size = 0.025 (scene units).
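
To reproduce a comparison like the one above on your own data, the Skill can simply be run at each voxel size. This sketch assumes a `point_cloud` loaded as in The Code section below and only prints the resulting point counts:

python
from telekinesis import vitreous

# Run the Skill at the three voxel sizes shown above and report the point counts.
for voxel_size in (0.005, 0.01, 0.025):
    result = vitreous.filter_point_cloud_using_voxel_downsampling(
        point_cloud=point_cloud,
        voxel_size=voxel_size,
    )
    print(f"voxel_size={voxel_size}: {len(result.positions)} points remain")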

The Code

python
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib

# Optional for logging
from loguru import logger

DATA_DIR = pathlib.Path("path/to/telekinesis-data")

# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "can_vertical_1_subtracted.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")

# Execute operation
filtered_point_cloud = vitreous.filter_point_cloud_using_voxel_downsampling(
    point_cloud=point_cloud,
    voxel_size=0.01,
)
logger.success("Filtered points using voxel downsampling")

Running the Example

Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:

bash
cd telekinesis-examples
python examples/vitreous_examples.py --example filter_point_cloud_using_voxel_downsampling

The Explanation of the Code

The example first imports the required modules:

python
from telekinesis import vitreous, datatypes

The next block loads a point cloud from a .ply file and removes common data issues such as NaN values, infinite values, and duplicated points, ensuring the input point cloud is clean and safe for downstream processing:

python
# Load point cloud
point_cloud_loader = vitreous.io.PointCloudFileLoader(
    remove_nan_points=True,
    remove_infinite_points=True,
    remove_duplicated_points=True
)
input_point_cloud = point_cloud_loader.execute("example.ply")

Next, the voxel downsampling Skill is applied using a specified voxel_size. This operation groups nearby points into a regular 3D grid and replaces each group with a single representative point. The result is a new point cloud that is smaller, more uniformly distributed, and easier to process, while preserving the overall shape of the scene:

python
# Apply voxel downsampling
output_point_cloud = vitreous.filter_point_cloud_using_voxel_downsampling(
    point_cloud=input_point_cloud,
    voxel_size=datatypes.Float(0.01),
)
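
A quick sanity check is to compare point counts before and after downsampling. The lines below continue the snippet above and assume the `positions` attribute shown in The Code section:

python
# Report how much the cloud was reduced.
before = len(input_point_cloud.positions)
after = len(output_point_cloud.positions)
print(f"Downsampled from {before} to {after} points ({after / before:.1%} of the original)")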

How to Tune the Parameters

The filter_point_cloud_using_voxel_downsampling Skill has one parameter that controls the downsampling:

voxel_size (required):

  • The edge length of each cubic voxel
  • Units: the same units as your point cloud (e.g., if the point cloud is in meters, voxel_size is in meters; if in millimeters, voxel_size is in millimeters)
  • All points within a voxel are replaced by their centroid
  • Decrease it (0.001-0.01) for smaller voxels that preserve fine detail at the cost of more output points
  • Increase it (0.05-0.1) for larger voxels, more aggressive downsampling, and fewer output points
  • Set it to roughly 2-5x the typical point spacing for balanced downsampling (see the spacing-estimation sketch after the tip below)
  • Typical range (in point cloud units): 0.001-0.1 for small objects, 0.01-0.5 for medium scenes, 0.1-1.0 for large scenes

Common values:

Use Case                  voxel_size (in point cloud units)
Robotics / manipulation   0.002 - 0.005
Indoor mapping            0.005 - 0.01
Large outdoor scenes      0.010 - 0.050
Coarse previews           0.050 - 0.100

TIP

Rule of thumb: Choose a voxel slightly larger than the sensor noise, but smaller than the smallest feature you need to preserve. Visualization helps quickly identify the right balance.
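
One way to apply the 2-5x point-spacing guideline from the list above is to estimate the typical spacing directly from the data. The sketch below is one possible approach, assuming SciPy is available and that the point positions convert to an (N, 3) NumPy array:

python
import numpy as np
from scipy.spatial import cKDTree

# Estimate the typical point spacing as the median nearest-neighbor distance,
# then pick a voxel size 2-5x that spacing (guideline from the tuning notes above).
positions = np.asarray(point_cloud.positions, dtype=float)
tree = cKDTree(positions)
# k=2 because the nearest neighbor of each point is the point itself at distance 0.
distances, _ = tree.query(positions, k=2)
spacing = float(np.median(distances[:, 1]))
suggested_voxel_size = 3.0 * spacing  # middle of the 2-5x range
print(f"median spacing ~ {spacing:.4f}, suggested voxel_size ~ {suggested_voxel_size:.4f}")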

Where to Use the Skill in a Pipeline

Voxel downsampling is often used in the following pipelines:

  • Registration
  • Clustering
  • Segmentation

A general pipeline for registration, clustering, or segmentation looks like the following:

python
# Example pipeline using voxel downsampling (parameters omitted).

from telekinesis import vitreous

# 1. Load raw point cloud
... = vitreous.load_point_cloud(...)

# 2. Crop or extract region of interest
... = vitreous.crop_point_cloud(...)  

# 3. Downsample the point cloud
... = vitreous.filter_point_cloud_using_voxel_downsampling(...)

# 4. Remove outliers
... = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)

# 5. Estimate normals
... = vitreous.estimate_point_cloud_normals(...)

# 6. Apply registration, clustering, or segmentation
... = vitreous.cluster_point_cloud_using_dbscan(...)

# or
# registered_pc = vitreous.register_point_clouds_using_point_to_point_icp(...)

# or
# planes = vitreous.segment_point_cloud_using_plane(...)

Related skills to build such a pipeline:

  • filter_point_cloud_using_statistical_outlier_removal: remove noise after downsampling
  • filter_point_cloud_using_bounding_box: crop region of interest before downsampling
  • cluster_point_cloud_using_dbscan: clustering on uniform density
  • register_point_clouds_using_point_to_point_icp: registration on downsampled clouds
  • segment_point_cloud_using_plane: plane segmentation on downsampled clouds

Alternative Skills

  • filter_point_cloud_using_uniform_downsampling: Use voxel downsampling when you want a uniform spatial distribution (points replaced by voxel centroids). Use uniform downsampling when you simply want to select every Nth point.
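
For intuition, the two strategies differ in how points are selected: uniform downsampling keeps every Nth point by index, while voxel downsampling replaces each occupied voxel with its centroid. A minimal NumPy illustration of the index-based rule on stand-in data (not the Skills' actual implementations; see the reference sketch after the math section for the centroid rule):

python
import numpy as np

points = np.random.rand(10_000, 3)  # stand-in point cloud positions

# Uniform downsampling: keep every Nth point by index. Original coordinates are
# preserved, but dense regions stay dense and sparse regions stay sparse.
every_nth = points[::10]
print(points.shape, "->", every_nth.shape)

# Voxel downsampling instead returns one centroid per occupied voxel, so the
# output density is roughly uniform regardless of the input sampling pattern.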

When Not to Use the Skill

Do not use voxel downsampling when:

  • Small details matter (edges, thin parts, holes)
  • You are doing precision measurement
  • The cloud is already sparse

WARNING

Voxel downsampling permanently removes detail below the voxel size. This is irreversible.
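
If the original resolution may be needed later, keep a reference to the input cloud and treat the downsampled result as a working copy. A minimal sketch, assuming a loaded `point_cloud` as in the examples above:

python
from telekinesis import vitreous

# Keep the full-resolution cloud; the Skill returns a new point cloud, so the
# original remains available for precise measurements later.
full_resolution = point_cloud
preview = vitreous.filter_point_cloud_using_voxel_downsampling(
    point_cloud=full_resolution,
    voxel_size=0.025,
)
# Iterate quickly on `preview`, fall back to `full_resolution` when detail matters.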

The Math (Advanced and Optional)


1. Problem Setup

A point cloud is a finite set of samples:

$$\mathcal{P} = \{\, p_i \in \mathbb{R}^3 \mid i = 1, \dots, N \,\}$$

2. Voxel Grid Definition

Choose a voxel size $h > 0$.

Define a regular grid over $\mathbb{R}^3$:

$$V_{ijk} = [ih, (i+1)h) \times [jh, (j+1)h) \times [kh, (k+1)h)$$

Each voxel is indexed by integer coordinates $(i, j, k)$.

3. Partitioning the Point Cloud

Each point $p = (x, y, z)$ is assigned to exactly one voxel:

$$(i, j, k) = \left( \left\lfloor \tfrac{x}{h} \right\rfloor,\; \left\lfloor \tfrac{y}{h} \right\rfloor,\; \left\lfloor \tfrac{z}{h} \right\rfloor \right)$$

This induces a partition of the cloud:

$$\mathcal{P} = \bigcup_{v \in V} \mathcal{P}_v$$

where $V$ is the set of occupied voxels and $\mathcal{P}_v$ are the points inside voxel $v$.

4. Representative Point

For each non-empty voxel $v$, compute a representative point (the centroid):

$$c_v = \frac{1}{|\mathcal{P}_v|} \sum_{p \in \mathcal{P}_v} p$$

The downsampled cloud is:

$$\mathcal{P}' = \{\, c_v \mid \mathcal{P}_v \neq \emptyset \,\}$$

This guarantees:

$$|\mathcal{P}'| = \text{number of occupied voxels} \;\le\; N$$
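
The formulas above map directly onto a few lines of NumPy. The following is a reference sketch of the math, not the Skill's actual implementation:

python
import numpy as np

def voxel_downsample(points: np.ndarray, h: float) -> np.ndarray:
    """Replace the points in each occupied voxel of edge length h by their centroid."""
    # Section 3: integer voxel indices (i, j, k) = floor(p / h)
    keys = np.floor(points / h).astype(np.int64)
    # Group points that fall into the same voxel
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()  # guard against NumPy versions returning a 2-D inverse
    # Section 4: centroid c_v = (1 / |P_v|) * sum of the points in the voxel
    sums = np.zeros((inverse.max() + 1, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse).astype(float).reshape(-1, 1)
    return sums / counts

# Example: random points reduced to one centroid per occupied voxel
points = np.random.rand(100_000, 3)
downsampled = voxel_downsample(points, h=0.05)
print(points.shape, "->", downsampled.shape)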