Calculate Points in Point Cloud
SUMMARY
Calculate Points in Point Cloud returns the total number of points contained in a point cloud.
This Skill is commonly used for sanity checks and pipeline validation, for example to verify that filtering or downsampling steps are behaving as expected. Comparing point counts before and after a processing step provides immediate feedback on how aggressively data is being reduced.
Use this Skill when you need a lightweight metric to monitor data size, debug pipelines, or make conditional decisions (e.g., skipping downstream steps if too few points remain).
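Comparing counts before and after a step can be reduced to a one-line ratio. A minimal sketch of such a comparison, assuming a hypothetical helper `reduction_ratio` (not part of the Telekinesis API):

```python
# Hypothetical helper for comparing counts before and after a processing step.
def reduction_ratio(before: int, after: int) -> float:
    """Fraction of points removed between two pipeline stages."""
    if before == 0:
        return 0.0
    return 1.0 - after / before

# E.g. a downsampling step that keeps 30k of 120k points removed 75%.
print(f"{reduction_ratio(120_000, 30_000):.0%} of points removed")
```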
The Skill
from telekinesis import vitreous
num_points = vitreous.calculate_points_in_point_cloud(point_cloud=point_cloud)
Performance Note
Current Data Limits: The system currently supports up to 1 million points per request (approximately 16MB of data). We're actively optimizing data transfer performance as part of our beta program, with improvements rolling out regularly to enhance processing speed.
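Because of this limit, it can be useful to check the point count before sending a request. A minimal pre-flight sketch; the constant and function names here are hypothetical, not part of the Telekinesis API:

```python
# Hypothetical pre-flight check against the documented beta limit.
MAX_POINTS_PER_REQUEST = 1_000_000  # roughly 16 MB of data per request

def fits_request_limit(num_points: int, limit: int = MAX_POINTS_PER_REQUEST) -> bool:
    """True if the cloud can be sent in a single request."""
    return num_points <= limit

print(fits_request_limit(500_000))    # fits in one request
print(fits_request_limit(1_500_000))  # too large; downsample first
```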
The Code
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
# Optional for logging
from loguru import logger
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "can_vertical_1_raw.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")
# Execute operation
num_points = vitreous.calculate_points_in_point_cloud(point_cloud=point_cloud)
logger.success(f"Counted {num_points.value} points in point cloud")
The Explanation of the Code
This example shows how to use the calculate_points_in_point_cloud Skill to count the number of points in a 3D point cloud. After importing the necessary modules and optionally setting up logging, the point cloud is loaded from a .ply file.
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib
# Optional for logging
from loguru import logger
DATA_DIR = pathlib.Path("path/to/telekinesis-data")
# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "can_vertical_1_raw.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")
The Skill simply returns the total number of points in the cloud, which is logged for reference. This is useful in robotics pipelines for quality control, point cloud validation, downsampling checks, or preprocessing, where knowing the size of the point cloud helps decide which algorithms or parameters to apply next.
# Execute operation
num_points = vitreous.calculate_points_in_point_cloud(point_cloud=point_cloud)
logger.success(f"Counted {num_points.value} points in point cloud")
Running the Example
Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:
cd telekinesis-examples
python examples/vitreous_examples.py --example calculate_points_in_point_cloud
How to Tune the Parameters
The calculate_points_in_point_cloud Skill has no parameters to tune; it only requires a point cloud as input. The function simply counts and returns the total number of points in the cloud.
Where to Use the Skill in a Pipeline
Point counting is commonly used in the following pipelines:
- Quality control and validation
- Pipeline debugging and monitoring
- Conditional processing based on point count
- Downsampling verification
A typical pipeline using point counting for validation looks as follows:
# Example pipeline using point counting for validation (parameters omitted).
from telekinesis import vitreous
from datatypes import io
from loguru import logger
# 1. Load raw point cloud
point_cloud = io.load_point_cloud(...)
initial_count = vitreous.calculate_points_in_point_cloud(point_cloud=point_cloud)
logger.info(f"Initial point count: {initial_count.value}")
# 2. Preprocess: remove outliers
filtered_cloud = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)
filtered_count = vitreous.calculate_points_in_point_cloud(point_cloud=filtered_cloud)
logger.info(f"After outlier removal: {filtered_count.value}")
# 3. Downsample
downsampled_cloud = vitreous.filter_point_cloud_using_voxel_downsampling(...)
final_count = vitreous.calculate_points_in_point_cloud(point_cloud=downsampled_cloud)
logger.info(f"After downsampling: {final_count.value}")
# 4. Conditional processing based on point count
if final_count.value < 100:
    logger.warning("Too few points remaining, skipping downstream processing")
else:
    # Continue with clustering, segmentation, etc.
    clusters = vitreous.cluster_point_cloud_using_dbscan(...)
Related skills to build such a pipeline:
- filter_point_cloud_using_statistical_outlier_removal: reduce point count by removing outliers
- filter_point_cloud_using_voxel_downsampling: reduce point count by downsampling
- cluster_point_cloud_using_dbscan: process point clouds after validation
Alternative Skills
There are no direct alternative skills for counting points. However, you can access the point count directly from the point cloud object using len(point_cloud.positions) in Python, though using this Skill provides a consistent interface and returns a typed Int object.
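The distinction between the two approaches can be sketched in plain Python. The `Int` class below is a stand-in for illustration only; the real typed object returned by the Skill may differ:

```python
from dataclasses import dataclass

@dataclass
class Int:
    """Stand-in for the typed result; the real datatypes Int may differ."""
    value: int

# Toy point cloud: positions as a list of (x, y, z) tuples.
positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

plain_count = len(positions)          # what len(point_cloud.positions) gives you
typed_count = Int(value=plain_count)  # what the Skill conceptually returns

print(plain_count)        # plain int
print(typed_count.value)  # typed result, unwrapped via .value
```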
When Not to Use the Skill
Do not use calculate_points_in_point_cloud when:
- You need to access the count frequently in a tight loop (directly accessing len(point_cloud.positions) is faster)
- You only need a quick check during development (using len() is more convenient)
- The point cloud is empty or invalid (the Skill will still return 0, but you may want to validate the input first)
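For the last case, a small guard can reject missing or empty clouds before counting. A minimal sketch; the helper name is hypothetical, not part of the Telekinesis API:

```python
# Hypothetical guard: validate input rather than treating a 0 count as "fine".
def is_usable_point_cloud(positions) -> bool:
    """Reject missing or empty position arrays up front."""
    return positions is not None and len(positions) > 0

print(is_usable_point_cloud([(0.0, 0.0, 0.0)]))  # usable: has points
print(is_usable_point_cloud([]))                 # not usable: empty cloud
print(is_usable_point_cloud(None))               # not usable: missing data
```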
TIP
Note: While you can use len(point_cloud.positions) directly in Python, using this Skill provides a consistent API and returns a typed Int object that integrates better with the Telekinesis pipeline system.

