PCB Region Segmentation From RGB Image

SUMMARY

Segment PCB or component regions from a single RGB image for inspection, defect detection, or automated handling. Uses SAM with a bounding box prompt and outputs masks and boxes, with optional Rerun visualization.

Overview

PCB inspection, defect detection, alignment, and automated assembly often require segmenting the board or specific components from a single RGB image. This example shows how to do that with a bounding box prompt: you provide the image and an ROI around the board or component, and the pipeline returns the instance mask and bounding box for downstream inspection, alignment, or handling.

Inputs

  • Single RGB image of a PCB or panel (board or component region)
  • Bounding box around the board or component of interest as [x_min, y_min, x_max, y_max]
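
Because the bounding box prompt is given in pixel coordinates, it is worth checking that the ROI actually lies inside the image before running the pipeline. A minimal sketch of such a check (the helper name and the example image shape are illustrative, not part of the pipeline):

```python
def validate_bbox(bbox, image_shape):
    """Check that an [x_min, y_min, x_max, y_max] box fits inside an image.

    image_shape is (height, width) or (height, width, channels).
    """
    x_min, y_min, x_max, y_max = bbox
    h, w = image_shape[:2]
    if not (0 <= x_min < x_max <= w):
        raise ValueError(f"x range [{x_min}, {x_max}] outside image width {w}")
    if not (0 <= y_min < y_max <= h):
        raise ValueError(f"y range [{y_min}, {y_max}] outside image height {h}")
    return bbox

# Example: the ROI used later in this pipeline, against a hypothetical 2000x3000 image
validate_bbox([1185, 1407, 1645, 1690], (2000, 3000, 3))
```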

Required Telekinesis Skills

  • Segment Image Using SAM

Optional: Rerun for visualization.

Use Cases

This pipeline segments PCB or component regions in RGB images using SAM with a bounding box prompt.

Typical applications include:

  • Inspection — Isolate the board or component for defect detection or quality checks.
  • Defect detection — Use the mask to focus analysis on the region of interest.
  • Alignment — Get a precise mask and box for fiducial or placement alignment.
  • Automated handling — Segment the board for pick-and-place or handling workflows.

Input-Output

Raw Sensor Input: raw RGB image of a PCB.
Segmentation and Boxes: segmented image with the mask and bounding box for the PCB region.

The Pipeline

The pipeline loads an RGB image, defines a bounding box around the PCB or component, runs SAM for instance segmentation, then extracts the mask and bounding box and visualizes with Rerun.

text
Load RGB Image

Define ROI (Bounding Box Prompt)

Segment Image Using SAM

Postprocess Masks

Extract Bounding Boxes

Visualize with Rerun

  • Segment Image Using SAM — Instance segmentation from a bounding box prompt; outputs the mask and bounding box for the PCB region.
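
The Postprocess Masks and Extract Bounding Boxes steps reduce each binary mask to a tight box around its foreground pixels. A minimal NumPy sketch of that reduction, independent of SAM (the function name is illustrative):

```python
import numpy as np

def mask_to_xyxy(mask):
    """Return the tight [x_min, y_min, x_max, y_max] box around a binary mask,
    or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return [int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1]

# Example: a 5x5 mask with a 2x3 foreground block
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:3, 1:4] = 1
print(mask_to_xyxy(mask))  # [1, 1, 4, 3]
```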

The Code

The script loads an image, defines a bounding box for the PCB (or component), runs SAM, extracts the mask and bounding box from the annotations, and visualizes with Rerun. Image path and ROI are set at the top; the pipeline runs in the main block with no function arguments.

python
# Third-party imports used by the script
import numpy as np
import cv2
import rerun as rr
import rerun.blueprint as rrb
from pycocotools import mask as mask_utils  # COCO RLE decoding (assumed)

# DATA_DIR, io, logger, and cornea are provided by the Telekinesis environment

# Load image
image_path = DATA_DIR / "images/pcb.jpg"
image = io.load_image(image_path)
logger.info(f"Loaded image shape: {image.to_numpy().shape}")

# Define a bounding box: (x_min, y_min, x_max, y_max)
bounding_box = [1185, 1407, 1645, 1690]

# Segment using SAM
result = cornea.segment_image_using_sam(
    image=image,
    bboxes=[bounding_box],
)
annotations = result.to_list()

# Rerun visualization
rr.init("pcb_segmentation_using_sam", spawn=False)
try:
    rr.connect()
except Exception:
    # No running viewer reachable; spawn a local one instead
    rr.spawn()

rr.send_blueprint(
    rrb.Blueprint(
        rrb.Horizontal(
            rrb.Spatial2DView(name="Input", origin="input"),
            rrb.Spatial2DView(name="Bboxes & Segments", origin="segmented"),
        ),
        rrb.SelectionPanel(),
        rrb.TimePanel(),
    ),
    make_active=True,
)

image = image.to_numpy()
rr.log("input/image", rr.Image(image=image))
rr.log("segmented/image", rr.Image(image=image))

h, w = image.shape[:2]
segmentation_img = np.zeros((h, w), dtype=np.uint16)
ann_bboxes = []
class_ids = []

for idx, ann in enumerate(annotations):
    label = idx + 1  # label 0 is reserved for background
    mask_i = np.zeros((h, w), dtype=np.uint8)
    # Dense mask array: threshold float/bool masks at 0.5, integer masks at 0
    if "mask" in ann and isinstance(ann["mask"], np.ndarray):
        m = ann["mask"]
        if m.dtype.kind in ("f", "b"):
            mask_i = (m > 0.5).astype(np.uint8)
        else:
            mask_i = (m > 0).astype(np.uint8)
    elif "segmentation" in ann and ann["segmentation"]:
        seg = ann["segmentation"]
        # COCO-style RLE dict
        if isinstance(seg, dict):
            mask_dec = mask_utils.decode(seg)
            if mask_dec.ndim == 3:
                mask_dec = mask_dec[:, :, 0]
            mask_i = (mask_dec > 0).astype(np.uint8)
        elif isinstance(seg, list) and len(seg) > 0:
            temp = np.zeros((h, w), dtype=np.uint8)
            polys = seg if isinstance(seg[0], list) else [seg]
            for poly in polys:
                pts = np.array(poly).reshape(-1, 2).astype(np.int32)
                cv2.fillPoly(temp, [pts], 1)
            mask_i = (temp > 0).astype(np.uint8)
    if mask_i.sum() == 0:
        continue  # skip annotations with no usable mask
    segmentation_img[mask_i > 0] = label
    bbox = ann.get("bbox", None)  # COCO-style [x, y, w, h]
    if bbox is None:
        continue
    ann_bboxes.append(list(bbox))
    class_ids.append(label)

rr.log("segmented/masks", rr.SegmentationImage(segmentation_img))
if ann_bboxes:
    rr.log(
        "segmented/boxes",
        rr.Boxes2D(
            array=np.asarray(ann_bboxes, dtype=np.float32),
            array_format=rr.Box2DFormat.XYWH,
            class_ids=np.asarray(class_ids, dtype=np.int32),
        ),
    )
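
To sanity-check a run, the returned box can be compared against the ROI prompt with an intersection-over-union score: a high IoU means SAM segmented the region you asked for. A minimal sketch, assuming the returned bbox is COCO-style [x, y, w, h] as the Boxes2D call above does (the helper names and the example returned box are illustrative):

```python
def xywh_to_xyxy(box):
    """Convert a COCO-style [x, y, w, h] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

def iou_xyxy(a, b):
    """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: the prompt ROI from this pipeline vs. a hypothetical returned box
prompt = [1185, 1407, 1645, 1690]
returned = xywh_to_xyxy([1190, 1410, 450, 275])  # [1190, 1410, 1640, 1685]
print(round(iou_xyxy(prompt, returned), 3))  # 0.951
```

An IoU near 1.0 suggests the mask tightly fills the prompted ROI; a low value is a hint the prompt missed the board or caught a neighboring component.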