Stream Point Cloud
SUMMARY
Stream Point Cloud continuously fuses depth and colour frames into colour-mapped 3D point clouds, returning each one as an object with .positions (Nx3 float XYZ) and .colors (Nx3 uint8 RGB) arrays. Configure resolution and frame rate at connect time, then log each cloud directly with rr.Points3D.
Available on: Intel RealSense.
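Given the array layout described in the summary, a returned cloud can be sanity-checked with plain NumPy. The snippet below is a sketch: the `SimpleNamespace` stand-in with random data is synthetic, not something the camera API returns, and only mirrors the documented `.positions`/`.colors` shapes and dtypes.

```python
import numpy as np
from types import SimpleNamespace

# Synthetic stand-in for a captured cloud: 5 points, XYZ floats plus RGB uint8.
cloud = SimpleNamespace(
    positions=np.random.rand(5, 3).astype(np.float32),          # Nx3 XYZ
    colors=np.random.randint(0, 256, (5, 3), dtype=np.uint8),   # Nx3 RGB
)

# The two arrays are row-aligned: point i carries colour i.
assert cloud.positions.shape == cloud.colors.shape == (5, 3)
assert cloud.positions.dtype == np.float32
assert cloud.colors.dtype == np.uint8
```

Because both arrays share the same leading dimension, they can be passed straight to `rr.Points3D` without any reshaping.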
The Skill
```python
point_cloud = camera.capture_video_pointcloud()
```

Call repeatedly inside a loop. Stream parameters are configured via connect:

```python
camera.connect(width=640, height=480, fps=30)
```

Example
The Code
"""
Stream a continuous colour-mapped point cloud from a RealSense camera.
Run from a terminal to avoid issues with Rerun's spawn mode.
"""
import time
from loguru import logger
import rerun as rr
from telekinesis.medulla.cameras import realsense
def main():
camera = realsense.RealSense(name="my_realsense")
try:
rr.init("RealSense_Pointcloud_Video_Example", spawn=True)
rr.log("/", rr.ViewCoordinates.RDF, static=True)
camera.connect(width=640, height=480, fps=30)
end_time = time.time() + 10
while time.time() < end_time:
point_cloud = camera.capture_video_pointcloud()
if point_cloud is None:
continue
rr.log(
"Continuous_Pointcloud_Capture",
rr.Points3D(positions=point_cloud.positions, colors=point_cloud.colors),
)
except Exception as e:
logger.error(f"Unable to stream point cloud. {type(e).__name__}: {e}")
finally:
camera.disconnect()
if __name__ == "__main__":
main()The Explanation of the Code
Before connecting, the Rerun viewer is opened and a world coordinate frame is set. rr.ViewCoordinates.RDF declares Right-Down-Forward axes — a convention that matches the RealSense depth sensor's native orientation and ensures Rerun renders the point cloud the right way up. Passing static=True logs this once rather than repeating it on every frame.
```python
rr.init("RealSense_Pointcloud_Video_Example", spawn=True)
rr.log("/", rr.ViewCoordinates.RDF, static=True)
```

Stream configuration is passed directly to connect rather than set via parameters after the fact. width, height, and fps control both the colour and depth streams simultaneously, so both are aligned and ready from the first frame.
```python
camera.connect(width=640, height=480, fps=30)
```

The streaming loop runs for 10 seconds using a wall-clock deadline rather than a fixed frame counter. Each call to capture_video_pointcloud returns a point-cloud object with a .positions attribute (Nx3 float XYZ) and a .colors attribute (Nx3 uint8 RGB), or None if no frame is ready yet. The None check skips logging until a valid frame arrives rather than crashing.
```python
end_time = time.time() + 10
while time.time() < end_time:
    point_cloud = camera.capture_video_pointcloud()
    if point_cloud is None:
        continue
```

Unlike the 2D video examples, no JPEG encoding step is needed — the data is already numeric arrays. rr.Points3D accepts positions and per-point colours directly, sending the full coloured cloud to Rerun in a single log call.
```python
rr.log(
    "Continuous_Pointcloud_Capture",
    rr.Points3D(positions=point_cloud.positions, colors=point_cloud.colors),
)
```

The disconnect call lives in a finally block so the RealSense pipeline and USB resources are released regardless of how the loop exits.
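At 640x480 a dense depth frame can yield up to ~300k points, which may strain viewer bandwidth at 30 fps. One optional mitigation, sketched below with synthetic arrays standing in for a real capture (the `downsample` helper is illustrative, not part of the camera API), is to randomly subsample both arrays together before the log call:

```python
import numpy as np


def downsample(positions, colors, max_points=50_000, seed=None):
    """Randomly keep at most max_points rows, keeping positions/colors row-aligned."""
    n = positions.shape[0]
    if n <= max_points:
        return positions, colors
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=max_points, replace=False)
    return positions[idx], colors[idx]


# Synthetic cloud standing in for a captured frame.
pos = np.random.rand(100_000, 3).astype(np.float32)
col = np.random.randint(0, 256, (100_000, 3), dtype=np.uint8)
small_pos, small_col = downsample(pos, col, max_points=10_000)
```

Indexing both arrays with the same index vector preserves the point-to-colour alignment that rr.Points3D relies on.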
Running the Example
```shell
python realsense_pointcloud_example.py
```

A Rerun viewer opens automatically. The 3D point cloud streams under Continuous_Pointcloud_Capture for 10 seconds, then the camera disconnects cleanly.
Where to Use the Skill
- Live 3D monitoring — observe how the scene's 3D structure changes during robot motion.
- Continuous grasp planning — feed a steady stream of point clouds into pose-estimation pipelines.
- Calibration sanity checks — verify depth-colour alignment in real time.
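For the calibration sanity check above, one lightweight approach is to summarise the forward (Z, per the RDF convention logged earlier) component of each frame's .positions array and flag values outside the sensor's expected range. The sketch below uses a synthetic cloud; `depth_stats` and the 0.5–2.0 m bounds are illustrative assumptions, not part of the API.

```python
import numpy as np


def depth_stats(positions):
    """Summarise the Z (forward) component of an Nx3 point array."""
    z = positions[:, 2]
    return {"min": float(z.min()), "max": float(z.max()), "mean": float(z.mean())}


# Synthetic cloud: 1000 points placed 0.5-2.0 m in front of the camera.
pos = np.column_stack([
    np.random.uniform(-1, 1, 1000),   # X (right)
    np.random.uniform(-1, 1, 1000),   # Y (down)
    np.random.uniform(0.5, 2.0, 1000),  # Z (forward)
])
stats = depth_stats(pos)
```

Depths far outside the sensor's rated working range in a live stream would suggest a misconfigured or misaligned depth stream worth investigating before trusting the cloud downstream.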

