Calculate Oriented Bounding Box
SUMMARY
Calculate Oriented Bounding Box (OBB) computes a bounding box that tightly encloses a point cloud while allowing the box to rotate and align with the object’s principal axes.
This Skill is commonly used when object orientation matters, such as grasp planning, pose estimation, or fitting geometry around rotated objects. Compared to an axis-aligned bounding box, an OBB provides a tighter and more meaningful spatial representation for elongated or tilted structures.
Use this Skill when you need a compact, orientation-aware description of a point cloud’s shape. While OBB computation is slightly more expensive than AABB, it is often worth the cost when accuracy and alignment are important for downstream tasks.
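To make the "tighter fit" claim concrete, here is a small pure-Python sketch (independent of Telekinesis, with illustrative numbers) comparing the area an axis-aligned box needs versus an oriented box for a 4 × 1 rectangle rotated by 45 degrees:

```python
import math

# A 4 x 1 rectangle (half-extents 2.0 and 0.5) rotated by 45 degrees.
half_w, half_h = 2.0, 0.5
angle = math.radians(45)
c, s = math.cos(angle), math.sin(angle)

# Corners of the rotated rectangle.
corners = [
    (c * x - s * y, s * x + c * y)
    for x in (-half_w, half_w)
    for y in (-half_h, half_h)
]

# An axis-aligned box must cover the rotated corners.
xs = [p[0] for p in corners]
ys = [p[1] for p in corners]
aabb_area = (max(xs) - min(xs)) * (max(ys) - min(ys))

# The oriented box rotates with the object, so its area is just w * h.
obb_area = (2 * half_w) * (2 * half_h)

print(f"AABB area: {aabb_area:.2f}")  # 12.50
print(f"OBB area:  {obb_area:.2f}")   # 4.00
```

For this elongated, tilted shape the axis-aligned box covers roughly three times the area of the oriented box, which is exactly the situation where the extra cost of an OBB pays off.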
The Skill
from telekinesis import vitreous
oriented_bounding_box = vitreous.calculate_oriented_bounding_box(
point_cloud=point_cloud,
minimize_bbox_volume=True,
use_robust_fitting=True,
)

Performance Note
Current Data Limits: The system currently supports up to 1 million points per request (approximately 16 MB of data). We are actively optimizing data transfer performance as part of our beta program, with improvements rolling out regularly.
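A simple guard before sending a request keeps you under this limit. The stride-based subsampling below is a naive illustration only; in a real pipeline, prefer the filter_point_cloud_using_voxel_downsampling Skill. The helper name and its fallback behavior are assumptions, not part of the API:

```python
MAX_POINTS = 1_000_000  # current per-request limit

def prepare_for_request(positions):
    """Return a point list that fits within the request limit.

    Naive stride subsampling, for illustration only; a voxel
    downsampling Skill preserves geometry much better.
    """
    if len(positions) <= MAX_POINTS:
        return positions
    stride = -(-len(positions) // MAX_POINTS)  # ceiling division
    return positions[::stride]

# Example: 2.5 million synthetic points are reduced below the limit.
points = [(i * 0.001, 0.0, 0.0) for i in range(2_500_000)]
reduced = prepare_for_request(points)
print(len(reduced))  # at most 1,000,000
```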
Example
Note: For a bounding box comparison, see Calculate Axis Aligned Bounding Box.
Raw Sensor Input
Unprocessed point cloud captured directly from the sensor. Shows full resolution, natural noise, and uneven sampling density.
Calculated Oriented Bounding Box
Point cloud with oriented bounding box
The Code
# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "can_vertical_1_raw_obb_preprocessed.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")
# Execute operation
result_bbox = vitreous.calculate_oriented_bounding_box(point_cloud=point_cloud)
logger.success(
f"Calculated oriented bounding box for {len(point_cloud.positions)} points with half-size: {result_bbox.half_size}, center: {result_bbox.center}, rotation_in_euler_angles: {result_bbox.rotation_in_euler_angles}"
)

The Explanation of the Code
This example demonstrates how to use the calculate_oriented_bounding_box Skill to get a bounding box that tightly fits a point cloud while respecting its orientation. After importing the required modules and setting up optional logging, the point cloud is loaded from a .ply file. The logger confirms the number of points, ensuring that the data is loaded correctly.
# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "can_vertical_1_raw_obb_preprocessed.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")

The main operation calls calculate_oriented_bounding_box, which computes a bounding box oriented along the principal axes of the point cloud. This is more precise than an axis-aligned box, especially for objects that are rotated or not aligned with world axes. The resulting bounding box includes properties such as half_size, center, and rotation_in_euler_angles, which provide a complete geometric description of the object’s pose.
# Execute operation
result_bbox = vitreous.calculate_oriented_bounding_box(point_cloud=point_cloud)
logger.success(
f"Calculated oriented bounding box for {len(point_cloud.positions)} points with half-size: {result_bbox.half_size}, center: {result_bbox.center}, rotation_in_euler_angles: {result_bbox.rotation_in_euler_angles}"
)

This Skill is particularly useful in robotics pipelines for pose estimation, grasp planning, collision checking, and object detection, where understanding the orientation and precise extents of an object is critical.
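For intuition about how an orientation can be recovered from the points themselves, here is a minimal 2D sketch of the classic PCA approach: the eigenvectors of the point covariance define candidate box axes, and projecting the points onto those axes gives the extents. This illustrates the general idea only; it is not the algorithm the Skill uses internally, and a PCA box is in general not the minimum-volume box:

```python
import math

def pca_obb_2d(points):
    """Principal-axis bounding box of 2D points (illustrative sketch)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Terms of the 2x2 covariance matrix.
    a = sum((p[0] - mx) ** 2 for p in points) / n
    d = sum((p[1] - my) ** 2 for p in points) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Orientation of the principal axis (closed form in 2D).
    theta = 0.5 * math.atan2(2 * b, a - d)
    c, s = math.cos(theta), math.sin(theta)
    # Project points onto the principal axes to get the extents.
    u = [(p[0] - mx) * c + (p[1] - my) * s for p in points]
    v = [-(p[0] - mx) * s + (p[1] - my) * c for p in points]
    half_size = ((max(u) - min(u)) / 2, (max(v) - min(v)) / 2)
    return (mx, my), half_size, theta

# A line of points along the 45-degree diagonal.
pts = [(t, t) for t in range(11)]
center, half_size, theta = pca_obb_2d(pts)
print(round(math.degrees(theta)))  # 45
```

The recovered angle aligns with the diagonal, and the second half-size collapses to zero, mirroring how `half_size`, `center`, and a rotation together describe the pose of the box.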
Running the Example
Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:
cd telekinesis-examples
python examples/vitreous_examples.py --example calculate_oriented_bounding_box

How to Tune the Parameters
The calculate_oriented_bounding_box Skill has two parameters that control the computation method and robustness:
minimize_bbox_volume (default: True):
- True: Finds the OBB with the smallest possible volume that contains all points, providing the tightest fit. Use this for accurate measurements and when precision matters.
- False: Uses a faster but less optimal method. Use this when speed is more important than getting the absolute tightest fit.
use_robust_fitting (default: True):
- True: Uses robust fitting that is less sensitive to outliers. Outliers have less influence on the OBB calculation, resulting in a more stable box that better represents the main structure. Use this when the point cloud contains noise or outliers.
- False: All points have equal weight, which may cause the box to be skewed by outliers. Use this only when you have clean data without outliers.
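Why robustness matters is easy to demonstrate without any fitting machinery: a single stray point can dominate an extent computed from raw minima and maxima, while a trimmed estimate ignores it. This toy 1D example only illustrates the idea behind robust fitting; it is not the weighting scheme the Skill actually uses:

```python
def extent_1d(values):
    """Extent from the raw minimum and maximum of the data."""
    return max(values) - min(values)

def trimmed_extent_1d(values, trim=1):
    """Drop the `trim` smallest and largest values before measuring."""
    core = sorted(values)[trim:-trim]
    return max(core) - min(core)

# A tight cluster around 0 plus one far-away outlier.
samples = [0.0, 0.1, -0.1, 0.05, -0.05, 10.0]

print(round(extent_1d(samples), 2))          # 10.1 -> blown up by the outlier
print(round(trimmed_extent_1d(samples), 2))  # 0.15 -> matches the main structure
```

With use_robust_fitting=False, the box would behave like the first measurement; with True, it behaves more like the second.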
Common configurations:
| Use Case | minimize_bbox_volume | use_robust_fitting |
|---|---|---|
| Precision measurement | True | True |
| Real-time processing | False | True |
| Clean, preprocessed data | True | False |
| Noisy sensor data | True | True |
TIP
Best practice: For most use cases, keep both parameters at their default values (True). This provides the best balance of accuracy and robustness. Only set minimize_bbox_volume=False if you need faster computation and can accept a slightly less tight fit.
Where to Use the Skill in a Pipeline
Oriented bounding boxes are commonly used in the following pipelines:
- Pose estimation and object orientation
- Grasp planning and manipulation
- Collision checking with oriented objects
- Object detection and tracking with orientation
A typical pipeline for pose estimation and grasp planning looks as follows:
# Example pipeline using oriented bounding box (parameters omitted).
from telekinesis import vitreous
# 1. Load raw point cloud
... = io.load_point_cloud(...)
# 2. Preprocess: remove outliers and downsample
... = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
... = vitreous.filter_point_cloud_using_voxel_downsampling(...)
# 3. Cluster to separate objects
... = vitreous.cluster_point_cloud_using_dbscan(...)
# 4. For each cluster, compute oriented bounding box
for cluster in clusters:
obb = vitreous.calculate_oriented_bounding_box(point_cloud=cluster)
# Use obb.center, obb.half_size, and obb.rotation_in_euler_angles for pose estimation
# 5. Use bounding box for grasp planning or collision checking

Related skills to build such a pipeline:
- filter_point_cloud_using_statistical_outlier_removal: clean input before bounding box computation
- filter_point_cloud_using_voxel_downsampling: reduce point cloud density for faster processing
- cluster_point_cloud_using_dbscan: separate objects before computing individual bounding boxes
- calculate_axis_aligned_bounding_box: faster alternative when orientation is not important
- calculate_point_cloud_centroid: simpler alternative when only position is needed
Alternative Skills
| Skill | vs. Oriented Bounding Box |
|---|---|
| calculate_axis_aligned_bounding_box | Use AABB when you need fast computation and don't care about object orientation. Use OBB when you need a tighter fit and orientation information is important. |
| calculate_point_cloud_centroid | Use centroid when you only need the center position. Use OBB when you also need size/extent and orientation information. |
When Not to Use the Skill
Do not use oriented bounding box when:
- Speed is critical and orientation information is not needed (AABB is faster)
- You only need the center position (centroid is simpler and faster)
- The point cloud is very noisy or sparse (OBB orientation may be unreliable)
- Objects are already axis-aligned (AABB provides the same result with less computation)
WARNING
Oriented bounding box computation is more expensive than axis-aligned bounding box. If you don't need orientation information, consider using calculate_axis_aligned_bounding_box for better performance.

