
Segment Point Cloud Using Color

SUMMARY

Segment Point Cloud Using Color extracts points whose colors match a target RGB value within a specified tolerance, effectively isolating objects or regions based on color.

This Skill is useful in industrial, mobile, and humanoid robotics pipelines for object identification and segmentation based on color cues. For example, it can isolate painted parts on a conveyor in manufacturing, identify color-coded landmarks in mobile robot mapping, or detect colored objects for humanoid robot manipulation. Color-based segmentation provides a simple yet effective way to separate regions of interest in 3D scenes.

Use this Skill when you want to segment point clouds using color information for downstream processing, detection, or manipulation.

The Skill

python
from telekinesis import vitreous

segmented_point_cloud = vitreous.segment_point_cloud_using_color(
    point_cloud=point_cloud,
    target_color=[255, 0, 0],  # Red in 8-bit RGB
    color_distance_threshold=60.0,
)


Performance Note

Current Data Limits: The system currently supports up to 1 million points per request (approximately 16 MB of data). We are actively optimizing data transfer performance as part of the beta program, with improvements rolling out regularly.
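
If your clouds can exceed this limit, a simple guard before calling the Skill avoids oversized requests. The sketch below only assumes the point_cloud.positions attribute and logger already used in the example code; the constant and the downsample-or-split suggestion are illustrative, not part of the API.

python
MAX_POINTS = 1_000_000  # current per-request limit noted above

num_points = len(point_cloud.positions)
if num_points > MAX_POINTS:
    # Downsample or split the cloud before calling the Skill;
    # the right strategy depends on your pipeline.
    logger.warning(f"{num_points} points exceeds the {MAX_POINTS} per-request limit")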

Example

Raw Point Cloud

The unprocessed input point cloud.

Segmented Point Cloud

The point cloud after color-based segmentation.

The Code

python
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib

# Optional for logging
from loguru import logger

DATA_DIR = pathlib.Path("path/to/telekinesis-data")

# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "engine_parts_0.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")

# Execute operation
segmented_point_cloud = vitreous.segment_point_cloud_using_color(
    point_cloud=point_cloud,
    target_color=[50, 75, 200],
    color_distance_threshold=60.0,
)
logger.success(f"Segmented {len(segmented_point_cloud.positions)} points using color")

The Code Explained

The script begins by importing the essential modules for point cloud processing (vitreous), data handling (datatypes, io), file management (pathlib), and optional logging (loguru).

python
from telekinesis import vitreous
from datatypes import datatypes, io
import pathlib

# Optional for logging
from loguru import logger

Next, a point cloud is loaded from disk. Logging confirms successful loading and reports the number of points, giving an overview of the dataset size before segmentation.

python

DATA_DIR = pathlib.Path("path/to/telekinesis-data")

# Load point cloud
filepath = str(DATA_DIR / "point_clouds" / "engine_parts_0.ply")
point_cloud = io.load_point_cloud(filepath=filepath)
logger.success(f"Loaded point cloud with {len(point_cloud.positions)} points")

The main operation applies the segment_point_cloud_using_color Skill. This Skill extracts points from the cloud based on color similarity. Key parameters include:

  • target_color: the RGB color value to match against points in the cloud.
  • color_distance_threshold: the maximum allowed distance in RGB space for a point to be considered a match.
  • point_cloud: the input point cloud to be segmented.

python
# Execute operation
segmented_point_cloud = vitreous.segment_point_cloud_using_color(
    point_cloud=point_cloud,
    target_color=[50, 75, 200],
    color_distance_threshold=60.0,
)
logger.success(f"Segmented {len(segmented_point_cloud.positions)} points using color")

This operation is particularly useful in robotic perception and industrial pipelines where objects or regions of interest have distinct colors. Applications include isolating specific components for inspection, manipulation, or pose estimation in cluttered scenes.

Running the Example

Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:

bash
cd telekinesis-examples
python examples/vitreous_examples.py --example segment_point_cloud_using_color

How to Tune the Parameters

The segment_point_cloud_using_color Skill has two tunable parameters that control color-based segmentation:

target_color (required):

  • The target color to match, given as an [R, G, B] list or numpy array
  • Color space: values should be in the range [0, 255] for 8-bit colors
  • The color space should match the point cloud's color space (see the sketch below)
  • Example: [255, 0, 0] for red
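
Some point clouds store colors as floats in [0, 1] rather than 8-bit integers, in which case the target should be scaled to match. The sketch below assumes a hypothetical colors attribute on the point cloud; the actual field name depends on the datatypes API.

python
import numpy as np

target_color = np.array([255, 0, 0])  # red in 8-bit RGB

# Hypothetical check: `colors` is an assumed attribute name, not confirmed
# by the example above. If the cloud stores normalized floats, rescale the
# target into the same [0, 1] range before segmenting.
if point_cloud.colors.max() <= 1.0:
    target_color = target_color / 255.0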

color_distance_threshold (required):

  • The maximum color distance (in color space) to consider a point as matching the target color
  • Increase to keep points with more color variation, including points that are less similar to the target
  • Decrease to keep only points very close to the target color
  • Typical range for 8-bit colors [0-255]: 5-100
    • Use 5-20 for strict matching
    • Use 20-60 for moderate matching
    • Use 60-100 for lenient matching

TIP

Best practice: Start with a moderate threshold and adjust based on results. If too many points are included, decrease the threshold. If too few points match, increase the threshold. Consider the lighting conditions and color accuracy of your sensor when choosing the threshold.
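
To build intuition for the threshold, the standalone sketch below mimics the matching rule with a plain Euclidean distance in 8-bit RGB space. This metric is an assumption made for illustration; the Skill's internal distance computation is not documented here.

python
import numpy as np

colors = np.array([[250, 10, 5], [120, 130, 125], [200, 60, 40]])  # per-point RGB
target = np.array([255, 0, 0])  # red
threshold = 60.0

# Euclidean distance from each point's color to the target color
distances = np.linalg.norm(colors - target, axis=1)
mask = distances <= threshold

print(mask)  # [ True False False]: only the first color lies within 60 of red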

Where to Use the Skill in a Pipeline

Color-based segmentation is commonly used in the following pipelines:

  • Object identification by color
  • Multi-object segmentation
  • Quality control and inspection
  • Color-coded landmark detection

A typical pipeline for object identification looks as follows:

python
# Example pipeline using color segmentation (parameters omitted).

from telekinesis import vitreous
from datatypes import io

# 1. Load point cloud with color information (same loader as the example above)
point_cloud = io.load_point_cloud(...)

# 2. Preprocess: remove outliers
filtered_cloud = vitreous.filter_point_cloud_using_statistical_outlier_removal(...)

# 3. Segment by color (e.g., red objects)
red_objects = vitreous.segment_point_cloud_using_color(
    point_cloud=filtered_cloud,
    target_color=[255, 0, 0],  # Red in 8-bit RGB
    color_distance_threshold=60.0,
)

# 4. Cluster segmented objects
clusters = vitreous.cluster_point_cloud_using_dbscan(
    point_cloud=red_objects,
    max_distance=0.5,
    min_points=10,
)

# 5. Process each colored object
for cluster in clusters:
    bbox = vitreous.calculate_oriented_bounding_box(point_cloud=cluster)
    centroid = vitreous.calculate_point_cloud_centroid(point_cloud=cluster)

Related skills to build such a pipeline:

  • filter_point_cloud_using_statistical_outlier_removal: clean input before color segmentation
  • cluster_point_cloud_using_dbscan: cluster segmented objects
  • calculate_oriented_bounding_box: compute bounding box for segmented objects
  • segment_point_cloud_using_plane: alternative segmentation method based on geometry

Alternative Skills

How each alternative Skill compares with Segment Point Cloud Using Color:

  • segment_point_cloud_using_plane: use plane segmentation when objects are distinguished by geometry; use color segmentation when they are distinguished by color.
  • cluster_point_cloud_using_dbscan: use DBSCAN when objects are spatially separated; use color segmentation when objects have distinct colors but may be touching.

When Not to Use the Skill

Do not use color-based segmentation when:

  • The point cloud has no color information (the operation will fail or return empty results)
  • Objects have similar colors (color segmentation may not distinguish them effectively)
  • Lighting conditions vary significantly (color values may be inconsistent)
  • You need to segment based on geometry (use segment_point_cloud_using_plane or clustering instead)

WARNING

Important: This Skill requires the point cloud to contain color information. If your point cloud has no colors, the operation will not work; ensure it includes RGB color data before calling this Skill.
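
If you are unsure whether a cloud carries color data, a defensive check can fail fast before the request is made. The colors attribute below is a hypothetical field name; adapt it to the actual datatypes point cloud type.

python
# Hypothetical pre-flight check; `colors` is an assumed attribute name.
colors = getattr(point_cloud, "colors", None)
if colors is None or len(colors) == 0:
    raise ValueError("Point cloud has no RGB data; color segmentation would fail")

segmented_point_cloud = vitreous.segment_point_cloud_using_color(
    point_cloud=point_cloud,
    target_color=[255, 0, 0],
    color_distance_threshold=60.0,
)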