Project Camera Point to Pixel
SUMMARY
Project Camera Point to Pixel projects a 3D point in the camera frame to 2D image coordinates using camera intrinsics and distortion coefficients. It is essential for overlaying 3D data on images, verifying 3D estimates, and rendering.
Use this Skill when you want to convert a 3D camera point to a 2D pixel.
The Skill
from telekinesis import pupil
import numpy as np
pixel = pupil.project_camera_point_to_pixel(
camera_intrinsics=camera_intrinsics,
distortion_coefficients=distortion_coefficients,
point=point,
)
positions = pixel.to_numpy().reshape(-1, 2)
Example
Projects the 3D point (0.0, 0.0, 1.0) in camera coordinates to pixel coordinates. The result can be visualized by drawing a point at the projected (u, v) location on the image.
The Code
from telekinesis import pupil
import numpy as np
from loguru import logger
camera_intrinsics = np.array(
[[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]],
dtype=np.float64,
)
distortion_coefficients = np.array([0.0, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)
point = np.array([0.0, 0.0, 1.0], dtype=np.float64)
pixel = pupil.project_camera_point_to_pixel(
camera_intrinsics=camera_intrinsics,
distortion_coefficients=distortion_coefficients,
point=point,
)
positions = pixel.to_numpy().reshape(-1, 2)
logger.success("Projected camera point to pixel. Pixel: {}", positions)
The Explanation of the Code
The code begins by importing the necessary modules: pupil for camera projection operations, numpy for numerical operations, and loguru for logging.
from telekinesis import pupil
import numpy as np
from loguru import logger
Next, the camera intrinsics, distortion coefficients, and the 3D point are configured. The camera_intrinsics matrix contains the focal lengths and principal point; distortion_coefficients corrects lens distortion; point is (x, y, z) in camera coordinates.
camera_intrinsics = np.array(
[[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]],
dtype=np.float64,
)
distortion_coefficients = np.array([0.0, 0.0, 0.0, 0.0, 0.0], dtype=np.float64)
point = np.array([0.0, 0.0, 1.0], dtype=np.float64)
The main operation uses the project_camera_point_to_pixel Skill from the pupil module. This Skill projects a 3D point in camera coordinates to 2D pixel coordinates using the pinhole model and distortion correction. The parameters are determined by camera calibration.
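For intuition, the pinhole-plus-distortion mapping can be sketched in plain NumPy. This is a reference sketch assuming the OpenCV-style coefficient order (k1, k2, p1, p2, k3); the actual telekinesis implementation may differ in details:

```python
import numpy as np

def project_point(K, dist, point):
    """Pinhole projection with Brown-Conrady distortion (assumed
    coefficient order k1, k2, p1, p2, k3) -- a sketch, not the library code."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    k1, k2, p1, p2, k3 = dist
    x, y, z = point
    # Normalize onto the z = 1 image plane (requires z > 0).
    xn, yn = x / z, y / z
    r2 = xn * xn + yn * yn
    # Radial and tangential distortion of the normalized coordinates.
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
    # Apply focal lengths and principal point.
    return np.array([fx * xd + cx, fy * yd + cy])

K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
uv = project_point(K, np.zeros(5), np.array([0.0, 0.0, 1.0]))
# A point on the optical axis lands on the principal point: (320.0, 240.0)
```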
pixel = pupil.project_camera_point_to_pixel(
camera_intrinsics=camera_intrinsics,
distortion_coefficients=distortion_coefficients,
point=point,
)
Finally, the projected pixel coordinates are converted to a NumPy array using to_numpy() for further processing, visualization, or downstream tasks.
positions = pixel.to_numpy().reshape(-1, 2)
logger.success("Projected camera point to pixel. Pixel: {}", positions)
This operation is particularly useful in robotics and vision pipelines for overlaying 3D data on images, verifying 3D estimates, and rendering, where converting 3D camera points to 2D pixels is required.
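As a minimal illustration of such an overlay, the projected pixel can be marked directly in an image array. This uses pure NumPy; a real pipeline would typically draw with a library such as OpenCV:

```python
import numpy as np

# Blank 640x480 RGB image (height x width x channels).
image = np.zeros((480, 640, 3), dtype=np.uint8)

# Projected pixel from the example above (u = column, v = row).
u, v = 320, 240

# Draw a 5x5 green marker at (u, v), guarding against out-of-bounds pixels.
if 2 <= u < image.shape[1] - 2 and 2 <= v < image.shape[0] - 2:
    image[v - 2 : v + 3, u - 2 : u + 3] = (0, 255, 0)
```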
Running the Example
Runnable examples are available in the Telekinesis examples repository. Follow the README in that repository to set up the environment. Once set up, you can run this specific example with:
cd telekinesis-examples
python examples/pupil_examples.py --example project_camera_point_to_pixel
How to Tune the Parameters
The project_camera_point_to_pixel Skill has no tunable parameters in the traditional sense. It requires camera calibration data and a 3D point:
- camera_intrinsics (no default; required): 3x3 matrix encoding fx, fy, cx, cy. Obtain from camera calibration.
- distortion_coefficients (default: np.array([0.0, 0.0, 0.0, 0.0, 0.0])): Lens distortion coefficients (typically 5 or 8 values). Use zeros for undistorted or pinhole models.
- point (no default; required): 3D point (x, y, z) in camera coordinates. Ensure z > 0 so the point is in front of the camera.
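The z > 0 requirement can be enforced with a small guard before calling the Skill. This is a hypothetical helper, and the pinhole stand-in below is only for illustration in place of the actual Skill:

```python
import numpy as np

def project_if_visible(project_fn, camera_intrinsics, distortion_coefficients, point):
    """Hypothetical guard: only call the projection for points with z > 0;
    points at or behind the camera plane have no valid projection."""
    if point[2] <= 0:
        raise ValueError(f"point is not in front of the camera: z={point[2]}")
    return project_fn(
        camera_intrinsics=camera_intrinsics,
        distortion_coefficients=distortion_coefficients,
        point=point,
    )

# Stand-in projection for illustration (undistorted pinhole).
def pinhole(camera_intrinsics, distortion_coefficients, point):
    uv = camera_intrinsics @ (point / point[2])
    return uv[:2]

K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
ok = project_if_visible(pinhole, K, np.zeros(5), np.array([0.0, 0.0, 1.0]))
```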
Where to Use the Skill in a Pipeline
Project Camera Point to Pixel is commonly used in the following pipelines:
- 3D visualization - Overlay 3D points on images
- Pose verification - Project 3D model to image
- Rendering - Convert 3D to 2D for display
- Annotation - Draw 3D-derived annotations
Related skills to build such a pipeline:
- project_pixel_to_camera_point: Inverse operation
- project_world_point_to_pixel: Project from world frame
- project_pixel_to_world_point: Pixel to world
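For intuition on the inverse direction, the mapping that project_pixel_to_camera_point provides can be sketched as follows (assumed semantics: pixel plus depth to camera point, ignoring distortion for simplicity):

```python
import numpy as np

def unproject(K, pixel, depth):
    """Sketch of pixel + depth -> 3D camera point (no distortion)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = pixel
    # Invert the pinhole model: back-project through the normalized plane.
    return np.array([(u - cx) / fx * depth, (v - cy) / fy * depth, depth])

K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
p = unproject(K, (320.0, 240.0), 1.0)
# Round-trips the running example: recovers the point (0.0, 0.0, 1.0)
```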
Alternative Skills
| Skill | vs. Project Camera Point to Pixel |
|---|---|
| project_world_point_to_pixel | Use when the point is in world coordinates (requires world_T_camera). |
| project_pixel_to_camera_point | Inverse: pixel + depth → 3D camera point. |
When Not to Use the Skill
Do not use Project Camera Point to Pixel when:
- Point is in world coordinates (Use project_world_point_to_pixel)
- You need 3D from pixel (Use project_pixel_to_camera_point)
- Point is behind camera (Check z > 0 before projection)

