Get Intrinsics
SUMMARY
Get Intrinsics returns the camera's intrinsic calibration parameters — focal lengths, principal point, and distortion coefficients. These are essential for projecting 3D points into pixel coordinates, undistorting images, and building accurate camera models for calibration workflows.
Available on: Intel RealSense, Zivid.
The Skill
The return signature differs between cameras:
RealSense — requires a sensor argument ("depth" or "color") and returns 8 values:
```python
f_x, f_y, height, width, ppx, ppy, model, coeffs = camera.get_intrinsics("color")
```

| Value | Type | Description |
|---|---|---|
| f_x | float | Focal length in x, as a multiple of pixel width |
| f_y | float | Focal length in y, as a multiple of pixel height |
| height | int | Image height in pixels |
| width | int | Image width in pixels |
| ppx | float | Principal point x offset from left edge (pixels) |
| ppy | float | Principal point y offset from top edge (pixels) |
| model | str | Distortion model name (e.g. "brown_conrady") |
| coeffs | list[float] | Distortion coefficients for the reported model |
Zivid — takes no arguments and returns 5 values:
```python
f_x, f_y, c_x, c_y, coeffs = camera.get_intrinsics()
```

| Value | Type | Description |
|---|---|---|
| f_x | float | Focal length in x (pixels) |
| f_y | float | Focal length in y (pixels) |
| c_x | float | Principal point x offset from top-left corner (pixels) |
| c_y | float | Principal point y offset from top-left corner (pixels) |
| coeffs | list[float] | Distortion coefficients [k1, k2, k3, p1, p2] (radial + tangential) |
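Despite the differing signatures, both cameras ultimately hand back the same four pinhole parameters plus a distortion vector, so they can be funnelled into one OpenCV-style pair. A minimal sketch of that packing step (the helper name `to_opencv` is illustrative, not part of the camera API):

```python
import numpy as np

def to_opencv(f_x, f_y, c_x, c_y, coeffs):
    """Pack focal lengths, principal point and distortion coefficients
    into OpenCV's 3x3 camera matrix and distortion vector."""
    camera_matrix = np.array([
        [f_x, 0.0, c_x],
        [0.0, f_y, c_y],
        [0.0, 0.0, 1.0],
    ], dtype=np.float64)
    dist_coeffs = np.array(coeffs, dtype=np.float64)
    return camera_matrix, dist_coeffs
```

For RealSense, pass `ppx`/`ppy` as the principal point and ignore the resolution and model name; for Zivid, pass `c_x`/`c_y` directly.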
The Code

```python
"""
Connect to a RealSense camera and retrieve the intrinsic parameters
for both the depth and colour sensors.
"""
import numpy as np
from loguru import logger

from telekinesis.medulla.cameras import realsense


def main():
    camera = realsense.RealSense(name="my_realsense_camera")
    try:
        camera.connect()

        # Query intrinsics for the colour sensor
        f_x, f_y, height, width, ppx, ppy, model, coeffs = camera.get_intrinsics("color")
        logger.info("Color sensor intrinsics:")
        logger.info(f"  Resolution: {width}x{height}")
        logger.info(f"  Focal length: fx={f_x:.2f}, fy={f_y:.2f}")
        logger.info(f"  Principal point: ppx={ppx:.2f}, ppy={ppy:.2f}")
        logger.info(f"  Distortion model: {model}")
        logger.info(f"  Distortion coeffs: {coeffs}")

        # Build OpenCV-compatible camera matrix
        camera_matrix = np.array([
            [f_x, 0, ppx],
            [0, f_y, ppy],
            [0, 0, 1],
        ], dtype=np.float64)
        dist_coeffs = np.array(coeffs, dtype=np.float64)
        logger.info(f"Camera matrix:\n{camera_matrix}")

        # Query intrinsics for the depth sensor
        f_x_d, f_y_d, h_d, w_d, ppx_d, ppy_d, model_d, coeffs_d = camera.get_intrinsics("depth")
        logger.info("Depth sensor intrinsics:")
        logger.info(f"  Resolution: {w_d}x{h_d}")
        logger.info(f"  Focal length: fx={f_x_d:.2f}, fy={f_y_d:.2f}")
        logger.info(f"  Distortion model: {model_d}")
    finally:
        camera.disconnect()


if __name__ == "__main__":
    main()
```

```python
"""
Connect to a Zivid camera and retrieve its intrinsic parameters.
"""
import numpy as np
from loguru import logger

from telekinesis.medulla.cameras import zivid


def main():
    camera = zivid.Zivid(name="my_zivid_camera")
    try:
        camera.connect()

        f_x, f_y, c_x, c_y, coeffs = camera.get_intrinsics()
        logger.info(f"Focal length: fx={f_x:.2f}, fy={f_y:.2f}")
        logger.info(f"Principal point: cx={c_x:.2f}, cy={c_y:.2f}")
        logger.info(f"Distortion: {coeffs}")

        # Build OpenCV-compatible camera matrix
        camera_matrix = np.array([
            [f_x, 0, c_x],
            [0, f_y, c_y],
            [0, 0, 1],
        ], dtype=np.float64)
        dist_coeffs = np.array(coeffs, dtype=np.float64)
        logger.info(f"Camera matrix:\n{camera_matrix}")
        logger.info(f"Distortion coefficients: {dist_coeffs}")
    finally:
        camera.disconnect()


if __name__ == "__main__":
    main()
```

The Explanation of the Code
After `connect` has initialised the camera, `get_intrinsics` queries the underlying SDK for the factory-calibrated intrinsic parameters of the connected device.
Per-camera behavior:
- RealSense — takes a `sensor` argument (`"depth"` or `"color"`) and reads the intrinsic parameters from the active librealsense stream profile for that sensor. Returns 8 values including the image resolution and the distortion model name (e.g. `"brown_conrady"`). Since the depth and colour sensors have different optics and may run at different resolutions, their intrinsics differ — query both when you need to map between them.
- Zivid — takes no arguments and queries the Zivid SDK's experimental calibration API. Returns 5 values with radial (`k1`, `k2`, `k3`) and tangential (`p1`, `p2`) distortion coefficients.
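Mapping between sensors starts from deprojection: the depth sensor's intrinsics lift a pixel plus its depth reading into a 3D point via the inverse pinhole model. A minimal numpy sketch, ignoring lens distortion (the helper is illustrative, not part of the camera API):

```python
import numpy as np

def deproject_pixel(u, v, depth_m, f_x, f_y, ppx, ppy):
    """Inverse pinhole model: pixel (u, v) with depth in metres -> 3D point
    in the sensor's own camera frame (lens distortion ignored)."""
    x = (u - ppx) * depth_m / f_x
    y = (v - ppy) * depth_m / f_y
    return np.array([x, y, depth_m])
```

Re-projecting that point into the colour image then needs the colour intrinsics plus the depth-to-colour extrinsic transform, which this skill does not return.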
The focal lengths (`f_x`, `f_y`) and principal point (`ppx`/`ppy` on RealSense, `c_x`/`c_y` on Zivid) describe the ideal pinhole projection from 3D camera-frame coordinates to 2D pixel coordinates. Both cameras return distortion coefficients compatible with OpenCV's distortion model.
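In matrix form, that projection is a multiply by the camera matrix followed by a perspective divide. A quick numpy check, using made-up example intrinsics rather than values from a real device:

```python
import numpy as np

# Assumed example intrinsics (not from a real device)
K = np.array([
    [600.0, 0.0, 320.0],
    [0.0, 600.0, 240.0],
    [0.0, 0.0, 1.0],
])

point = np.array([0.1, -0.05, 1.0])  # 3D point in the camera frame (metres)
homog = K @ point                     # homogeneous image coordinates
u, v = homog[:2] / homog[2]           # perspective divide -> pixel coordinates
```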
The examples show how to assemble these values into the standard OpenCV camera matrix and distortion vector, which can be passed directly to functions like `cv2.undistort`, `cv2.projectPoints`, or `cv2.solvePnP`.
```python
camera_matrix = np.array([
    [f_x, 0, ppx],
    [0, f_y, ppy],
    [0, 0, 1],
], dtype=np.float64)
```

Where to Use the Skill
- Hand-eye calibration — feed the intrinsic parameters into a calibration routine that computes the transform between the camera and robot flange.
- Image undistortion — remove lens distortion from captured images before running computer vision algorithms.
- 3D-to-2D projection — project point cloud coordinates onto the image plane for overlay visualisation or region-of-interest selection.
- Pose estimation — use with `cv2.solvePnP` to estimate the pose of known objects from 2D-3D correspondences.
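To make the distortion coefficients concrete, here is a hedged sketch of the forward Brown-Conrady model on normalized image coordinates, using the [k1, k2, k3, p1, p2] ordering from the Zivid table above (note that OpenCV's own functions expect (k1, k2, p1, p2, k3), so reorder before passing them there):

```python
import numpy as np

def distort_normalized(x, y, coeffs):
    """Apply Brown-Conrady radial + tangential distortion to a point in
    normalized image coordinates (x = X/Z, y = Y/Z)."""
    k1, k2, k3, p1, p2 = coeffs
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

Multiplying the distorted coordinates by the focal lengths and adding the principal point yields the observed pixel; undistortion inverts this model numerically.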

