Filter Point Cloud Using Bounding Box
SUMMARY
Filter Point Cloud Using Bounding Box extracts points that lie within a specified 3D axis-aligned bounding box (AABB), removing all points outside the defined region.
This Skill is commonly used in robotics pipelines for region-of-interest extraction, such as isolating objects on a table, focusing on a workspace, or filtering sensor data to a relevant area. By cropping point clouds to a specific bounding box, robots can reduce computational load and improve the accuracy of downstream tasks like segmentation, clustering, and grasp planning.
Use this Skill when you want to focus on a specific spatial region of a point cloud to streamline processing and enable targeted perception or manipulation.
The Skill
from telekinesis import vitreous
from datatypes import datatypes
import numpy as np
# Create bounding box
center = np.array([[0.0, 0.0, 0.0]], dtype=np.float32)
half_size = np.array([[0.1, 0.1, 0.1]], dtype=np.float32)
bbox = datatypes.Boxes3D(half_size=half_size, center=center)
# Filter point cloud
filtered_point_cloud = vitreous.filter_point_cloud_using_bounding_box(
point_cloud=point_cloud,
bbox=bbox,
)

Performance Note
Current Data Limits: The system currently supports up to 1 million points per request (approximately 16MB of data). We're actively optimizing data transfer performance as part of our beta program, with improvements rolling out regularly to enhance processing speed.
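If a capture exceeds this limit, you can subsample it before calling the Skill. A minimal sketch, assuming point_cloud.positions is an (N, 3) NumPy array as used throughout this page; how the subsampled positions are wrapped back into a point cloud object depends on the datatypes API:
import numpy as np
MAX_POINTS = 1_000_000  # current per-request limit
num_points = len(point_cloud.positions)
if num_points > MAX_POINTS:
    # Randomly keep at most MAX_POINTS indices so the request stays within the limit
    rng = np.random.default_rng(seed=0)
    keep = rng.choice(num_points, size=MAX_POINTS, replace=False)
    positions_subset = point_cloud.positions[keep]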
Example
Raw Sensor Input
Unprocessed point cloud captured directly from the sensor.
Axis-Aligned Bounding Box
Axis-aligned bounding box in red overlaid on the unprocessed point cloud.
Filtered Points
Only the points that fall within the specified 3D box defined by min/max coordinates along each axis are kept.
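Conceptually, the filter keeps a point only if every one of its coordinates lies between the box's minimum and maximum corners. A minimal NumPy sketch of that test, with illustrative values standing in for point_cloud.positions and the center/half_size from The Skill snippet above (the Skill itself performs this check for you):
import numpy as np
# Illustrative stand-ins: random points and the box from the snippet above
positions = np.random.rand(1000, 3).astype(np.float32) - 0.5
center = np.array([[0.0, 0.0, 0.0]], dtype=np.float32)
half_size = np.array([[0.1, 0.1, 0.1]], dtype=np.float32)
box_min = center - half_size  # lower corner of the AABB
box_max = center + half_size  # upper corner of the AABB
# A point is kept only if it lies inside the box along all three axes
inside = np.all((positions >= box_min) & (positions <= box_max), axis=1)
filtered_positions = positions[inside]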
The Code
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
import numpy as np
# Optional for logging
from loguru import logger
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "plastic_2_raw.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")
# Execute operation
# Create Box
x_min, y_min, z_min, x_max, y_max, z_max = np.array([-163, -100, 470, 150, 100, 544])
center = np.array(
[[(x_min + x_max) / 2, (y_min + y_max) / 2, (z_min + z_max) / 2]],
dtype=np.float32,
)
half_size = np.array(
[[(x_max - x_min) / 2, (y_max - y_min) / 2, (z_max - z_min) / 2]],
dtype=np.float32,
)
colors = [(255, 0, 0)]
bbox = datatypes.Boxes3D(half_size=half_size, center=center, colors=colors)
# Filter point cloud using bounding box
filtered_point_cloud = vitreous.filter_point_cloud_using_bounding_box(
point_cloud=point_cloud, bbox=bbox
)
logger.success(
f"Filtered {len(filtered_point_cloud.positions)} points using bounding box"
)

Running the Example
Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:
cd telekinesis-examples
python examples/vitreous_examples.py --example filter_point_cloud_using_bounding_box

The Explanation of the Code
First, the required modules are imported: vitreous for point cloud operations, datatypes and io for data handling, numpy for numerical calculations, pathlib for file paths, and loguru for optional logging.
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
import numpy as np
# Optional for logging
from loguru import logger

Next, a point cloud is loaded from a .ply file. This represents the 3D scene or object to be processed. Logging confirms the number of points loaded, which is useful for verifying that the data is intact.
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "plastic_2_raw.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")

Finally, the filter_point_cloud_using_bounding_box Skill is applied. A 3D axis-aligned bounding box (AABB) is created from minimum and maximum coordinates by computing its center and half-size, and this box is then used to extract only the points inside the defined volume. This Skill is particularly useful in robotics pipelines for isolating objects of interest, focusing processing on a specific region, or removing irrelevant background points, for example in industrial pick-and-place, inspection, or mobile robot navigation tasks. Logging confirms how many points remain after filtering.
# Execute operation
# Create Box
x_min, y_min, z_min, x_max, y_max, z_max = np.array([-163, -100, 470, 150, 100, 544])
center = np.array(
[[(x_min + x_max) / 2, (y_min + y_max) / 2, (z_min + z_max) / 2]],
dtype=np.float32,
)
half_size = np.array(
[[(x_max - x_min) / 2, (y_max - y_min) / 2, (z_max - z_min) / 2]],
dtype=np.float32,
)
colors = [(255, 0, 0)]
bbox = datatypes.Boxes3D(half_size=half_size, center=center, colors=colors)
# Filter point cloud using bounding box
filtered_point_cloud = vitreous.filter_point_cloud_using_bounding_box(
point_cloud=point_cloud, bbox=bbox
)
logger.success(
f"Filtered {len(filtered_point_cloud.positions)} points using bounding box"
)

How to Tune the Parameters
The filter_point_cloud_using_bounding_box Skill has one parameter that controls the filtering:
bbox (required):
- The bounding box to use for filtering, specified as a Boxes3D object
- The bounding box is defined by:
  - center: the center point of the box (3D coordinates)
  - half_size: half the size of the box along each axis (width/2, height/2, depth/2)
- Units: center and half_size use the same units as your point cloud (e.g., if the point cloud is in meters, the bbox is in meters; if it is in millimeters, the bbox is in millimeters)
- Increase half_size to include more points (larger region)
- Decrease half_size to include fewer points (smaller region); a padding sketch follows this list
- Adjust center to position the bounding box around the region of interest
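For example, to keep a small margin around a known region, you can pad half_size by a fixed amount before constructing the box. A minimal sketch; the margin value is illustrative and uses the same units as your point cloud:
import numpy as np
from datatypes import datatypes
margin = 0.02  # illustrative padding, in the same units as the point cloud
center = np.array([[0.0, 0.0, 0.5]], dtype=np.float32)
half_size = np.array([[0.1, 0.1, 0.1]], dtype=np.float32)
# Grow the box equally on every axis while keeping the same center
padded_bbox = datatypes.Boxes3D(
    half_size=(half_size + margin).astype(np.float32),
    center=center,
)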
TIP
Best practice: Use calculate_axis_aligned_bounding_box or calculate_oriented_bounding_box to automatically compute bounding boxes from point clouds. Manually create bounding boxes when you know the exact region you want to extract.
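For instance, a box computed from an already segmented object cloud can be used to crop the full scene around that object. A hedged sketch, assuming calculate_axis_aligned_bounding_box returns a Boxes3D (as in the pipeline example below); object_cloud is a hypothetical, previously segmented point cloud:
from telekinesis import vitreous
# object_cloud: hypothetical, previously segmented point cloud of the object
object_bbox = vitreous.calculate_axis_aligned_bounding_box(point_cloud=object_cloud)
# Crop the full scene to the region occupied by the object
scene_roi = vitreous.filter_point_cloud_using_bounding_box(
    point_cloud=point_cloud,
    bbox=object_bbox,
)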
Where to Use the Skill in a Pipeline
Bounding box filtering is commonly used in the following pipelines:
- Region-of-interest extraction
- Object isolation
- Workspace focusing
- Background removal
A typical pipeline for object isolation looks as follows:
# Example pipeline using bounding box filtering (parameters omitted).
from telekinesis import vitreous
from datatypes import datatypes, io
import numpy as np
# 1. Load point cloud
point_cloud = io.load_point_cloud(...)
# 2. Preprocess: remove outliers
filtered_cloud = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
# 3. Optional: Compute bounding box from object
object_bbox = vitreous.calculate_axis_aligned_bounding_box(point_cloud=filtered_cloud)
# 4. Filter point cloud using bounding box
# Option A: Use computed bounding box
cropped_cloud = vitreous.filter_point_cloud_using_bounding_box(
point_cloud=filtered_cloud,
bbox=object_bbox,
)
# Option B: Create custom bounding box
center = np.array([[0.0, 0.0, 0.5]], dtype=np.float32)
half_size = np.array([[0.2, 0.2, 0.1]], dtype=np.float32)
custom_bbox = datatypes.Boxes3D(half_size=half_size, center=center)
cropped_cloud = vitreous.filter_point_cloud_using_bounding_box(
point_cloud=filtered_cloud,
bbox=custom_bbox,
)
# 5. Process the cropped point cloud
clusters = vitreous.cluster_point_cloud_using_dbscan(...)

Related skills to build such a pipeline:
- calculate_axis_aligned_bounding_box: compute a bounding box from a point cloud
- calculate_oriented_bounding_box: compute an oriented bounding box
- filter_point_cloud_using_statistical_outlier_removal: clean the input before filtering
- filter_point_cloud_using_passthrough_filter: alternative axis-aligned filtering method
- cluster_point_cloud_using_dbscan: process the filtered point clouds
Alternative Skills
| Skill | vs. Filter Point Cloud Using Bounding Box |
|---|---|
| filter_point_cloud_using_passthrough_filter | Use passthrough filter for simple axis-aligned min/max filtering. Use bounding box when you need to use a pre-computed Boxes3D object or oriented bounding boxes. |
| filter_point_cloud_using_oriented_bounding_box | Use oriented bounding box when you need a rotated bounding box. Use axis-aligned bounding box for simpler, faster filtering. |
| filter_point_cloud_using_plane_proximity | Use plane proximity when you want to filter based on distance from a plane. Use bounding box when you want to filter based on a 3D box region. |
When Not to Use the Skill
Do not use filter_point_cloud_using_bounding_box when:
- You need a rotated/oriented bounding box (use filter_point_cloud_using_oriented_bounding_box instead)
- You want simple axis-aligned min/max filtering (use filter_point_cloud_using_passthrough_filter instead, which is simpler)
- You need to filter based on distance from a plane (use filter_point_cloud_using_plane_proximity instead)
- The bounding box is invalid (ensure the box is properly defined with a valid center and half_size)
- You need to filter based on other criteria (use other filtering methods such as radius outlier removal, statistical outlier removal, etc.)

