Calculate Axis-Aligned Bounding Box
SUMMARY
Calculate Axis-Aligned Bounding Box (AABB) computes the smallest 3D box, aligned with the world coordinate axes, that fully contains a point cloud.
This Skill is commonly used for spatial reasoning and scene understanding, such as estimating object extents, defining regions of interest (ROIs), or performing quick collision and containment checks. Because the box is axis-aligned, it is fast to compute and easy to interpret.
Use this Skill when you need a simple, efficient representation of the size and position of a point cloud. While an AABB does not capture object orientation, it is ideal for early filtering, coarse reasoning, and as a building block for more advanced geometric analysis.
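Conceptually, an AABB is just the per-axis minimum and maximum of the point coordinates. The sketch below (plain NumPy, not the Telekinesis API) shows the underlying computation and the center/half-size representation this Skill returns:

```python
import numpy as np

def compute_aabb(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (center, half_size) of the axis-aligned bounding box.

    points: (N, 3) array of XYZ coordinates.
    """
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    center = (mins + maxs) / 2.0
    half_size = (maxs - mins) / 2.0
    return center, half_size

points = np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 6.0], [1.0, 1.0, 1.0]])
center, half_size = compute_aabb(points)
# center = [1, 2, 3], half_size = [1, 2, 3]
```

Because only two reductions over the points are needed, the computation is linear in the number of points, which is why an AABB is so cheap compared to an oriented bounding box.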
The Skill
from telekinesis import vitreous
axis_aligned_bounding_box = vitreous.calculate_axis_aligned_bounding_box(point_cloud=point_cloud)
Performance Note
Current Data Limits: The system currently supports up to 1 million points per request (approximately 16MB of data). We're actively optimizing data transfer performance as part of our beta program, with improvements rolling out regularly to enhance processing speed.
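Given the 1-million-point limit, it can be useful to validate input size before sending a request. A minimal, hypothetical guard (reusing only the `point_cloud.positions` attribute shown in this page's examples; the guard itself is not part of the library):

```python
MAX_POINTS = 1_000_000  # current beta limit per request

def check_size(point_cloud) -> None:
    """Raise if the cloud exceeds the current per-request limit."""
    n = len(point_cloud.positions)
    if n > MAX_POINTS:
        raise ValueError(
            f"Point cloud has {n} points; reduce it below {MAX_POINTS} "
            "(e.g. with filter_point_cloud_using_voxel_downsampling) first."
        )
```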
Example
Note: For a bounding box comparison, see Calculate Oriented Bounding Box.
Raw Sensor Input
Unprocessed point cloud captured directly from the sensor. Shows full resolution, natural noise, and uneven sampling density.
Calculated Axis-Aligned Bounding Box
Point cloud with axis-aligned bounding box
The Code
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
# Optional for logging
from loguru import logger
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "can_vertical_1_raw_preprocessed.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")
# Execute operation
axis_aligned_bounding_box = vitreous.calculate_axis_aligned_bounding_box(point_cloud=point_cloud)
logger.success(
    f"Calculated axis-aligned bounding box for {len(point_cloud.positions)} points with half-size {axis_aligned_bounding_box.half_size} and center {axis_aligned_bounding_box.center}"
)
The Explanation of the Code
This example shows how to use the calculate_axis_aligned_bounding_box Skill to quickly obtain a simple geometric representation of a point cloud’s spatial extent. The code begins by importing the necessary modules from Telekinesis and Python, and optionally sets up logging with loguru to provide feedback during execution.
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
# Optional for logging
from loguru import logger
The point cloud is loaded from a .ply file using io.load_point_cloud. The logger immediately reports the number of points loaded, helping confirm the input is correct and ready for processing.
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "can_vertical_1_raw_preprocessed.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")
The core operation calls the calculate_axis_aligned_bounding_box Skill, which computes the smallest 3D box aligned with the world axes that fully contains all points in the cloud. The returned bounding box includes key properties such as half_size and center, which can be used for downstream tasks like collision checking, coarse spatial reasoning, or defining regions of interest. The logger outputs these values, giving instant feedback on the bounding box dimensions and position.
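The center/half_size pair maps directly onto simple geometric tests. As an illustration (plain NumPy, independent of the Telekinesis API), a point lies inside the box exactly when its absolute offset from the center is within half_size on every axis:

```python
import numpy as np

def contains(center: np.ndarray, half_size: np.ndarray, query) -> bool:
    """Point-in-AABB test using the center/half_size representation."""
    return bool(np.all(np.abs(np.asarray(query) - center) <= half_size))

center = np.array([1.0, 2.0, 3.0])
half_size = np.array([0.5, 0.5, 0.5])
contains(center, half_size, [1.2, 2.1, 3.0])  # True: inside on all axes
contains(center, half_size, [3.0, 2.0, 3.0])  # False: 2.0 off-center in x
```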
# Execute operation
axis_aligned_bounding_box = vitreous.calculate_axis_aligned_bounding_box(point_cloud=point_cloud)
logger.success(
    f"Calculated axis-aligned bounding box for {len(point_cloud.positions)} points with half-size {axis_aligned_bounding_box.half_size} and center {axis_aligned_bounding_box.center}"
)
This workflow focuses on the Skill itself: it provides a fast, easy-to-use representation of point cloud geometry, useful in robotics pipelines for object localization, grasp planning, and spatial reasoning.
Running the Example
Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:
cd telekinesis-examples
python examples/vitreous_examples.py --example calculate_axis_aligned_bounding_box
How to Tune the Parameters
The calculate_axis_aligned_bounding_box Skill has no tunable parameters; it only requires a point cloud as input. However, the quality and characteristics of the input point cloud directly affect the resulting bounding box.
TIP
Best practice: Clean and preprocess your point cloud before computing the bounding box to get more accurate and meaningful results. Use filter_point_cloud_using_statistical_outlier_removal or filter_point_cloud_using_voxel_downsampling as preprocessing steps.
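To see why cleaning matters, here is a small NumPy sketch (independent of the Telekinesis API, with synthetic data) showing how a single stray sensor return inflates the box:

```python
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.uniform(-0.1, 0.1, size=(1000, 3))   # object roughly 0.2 units across
noisy = np.vstack([cloud, [[5.0, 0.0, 0.0]]])    # one stray sensor return

clean_extent = cloud.max(axis=0) - cloud.min(axis=0)
noisy_extent = noisy.max(axis=0) - noisy.min(axis=0)
# A single outlier stretches the x-extent from ~0.2 to ~5.1 units,
# which is exactly what statistical outlier removal prevents.
```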
Where to Use the Skill in a Pipeline
Axis-aligned bounding boxes are commonly used in the following pipelines:
- Object detection and localization
- Collision checking and spatial reasoning
- Region of interest (ROI) definition
- Scene understanding and object tracking
A typical pipeline for object detection and localization looks as follows:
# Example pipeline using axis-aligned bounding box (parameters omitted).
from telekinesis import vitreous
from datatypes import io
# 1. Load raw point cloud
... = io.load_point_cloud(...)
# 2. Preprocess: remove outliers and downsample
... = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
... = vitreous.filter_point_cloud_using_voxel_downsampling(...)
# 3. Cluster to separate objects
... = vitreous.cluster_point_cloud_using_dbscan(...)
# 4. For each cluster, compute bounding box
for cluster in clusters:
    aabb = vitreous.calculate_axis_aligned_bounding_box(point_cloud=cluster)
    # Use aabb.center and aabb.half_size for object localization
# 5. Optional: Use bounding box for collision checking or ROI definition
Related skills to build such a pipeline:
- filter_point_cloud_using_statistical_outlier_removal: clean input before bounding box computation
- filter_point_cloud_using_voxel_downsampling: reduce point cloud density for faster processing
- cluster_point_cloud_using_dbscan: separate objects before computing individual bounding boxes
- calculate_oriented_bounding_box: alternative when object orientation is important
- calculate_point_cloud_centroid: simpler alternative when only position is needed
Alternative Skills
| Skill | vs. Axis-Aligned Bounding Box |
|---|---|
| calculate_oriented_bounding_box | Use AABB when you need fast computation and don't care about object orientation. Use OBB when you need a tighter fit and orientation information is important. |
| calculate_point_cloud_centroid | Use centroid when you only need the center position. Use AABB when you also need size/extent information. |
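Note that the AABB center is not the same as the centroid: the centroid is the mean of the points and follows sampling density, while the AABB center depends only on the extremes. A quick NumPy illustration with hypothetical data (not the library API):

```python
import numpy as np

# Dense sampling near the origin plus one sparse far point:
points = np.vstack([
    np.zeros((99, 3)),   # 99 points at the origin
    [[1.0, 1.0, 1.0]],   # a single point at (1, 1, 1)
])

centroid = points.mean(axis=0)                                  # [0.01, 0.01, 0.01]
aabb_center = (points.min(axis=0) + points.max(axis=0)) / 2.0   # [0.5, 0.5, 0.5]
```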
When Not to Use the Skill
Do not use axis-aligned bounding box when:
- Object orientation matters (e.g., for grasp planning, pose estimation, or when objects are rotated)
- You need a tight fit around elongated or rotated objects (AABB can be much larger than the actual object)
- You require precise spatial representation for collision detection with oriented objects
- The object is significantly rotated relative to world axes (the box will include large empty regions)
WARNING
Axis-aligned bounding boxes can be significantly larger than the actual object when the object is rotated or elongated. For rotated objects, consider using calculate_oriented_bounding_box instead.
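A short NumPy sketch (again independent of the library) quantifies the effect: rotating a unit square by 45° about the z-axis grows each horizontal AABB extent by a factor of √2, even though the object itself has not changed.

```python
import numpy as np

# Corners of a flat 1 x 1 square in the XY plane (z = 0).
square = np.array([[x, y, 0.0] for x in (0.0, 1.0) for y in (0.0, 1.0)])

def aabb_extent(pts: np.ndarray) -> np.ndarray:
    """Full side lengths of the axis-aligned bounding box."""
    return pts.max(axis=0) - pts.min(axis=0)

# Rotate 45 degrees about the z-axis.
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
rotated = square @ rot.T

aabb_extent(square)   # [1.0, 1.0, 0.0]
aabb_extent(rotated)  # ~[1.414, 1.414, 0.0] -- the box grows by sqrt(2)
```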

