github roboflow/supervision 0.27.0

supervision-0.27.0

🚀 Added

  • Added sv.filter_segments_by_distance to keep the largest connected component and any nearby components within an absolute or relative distance threshold. This helps you clean up predictions from segmentation models like SAM, SAM2, YOLO segmentation, and RF-DETR segmentation. (#2008)
supervision-0.27.0-filter-segments-by-distance.mp4
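As a rough illustration of the idea (not the library's implementation, and with hypothetical function and parameter names), the sketch below labels 4-connected components in a binary mask and keeps the largest one plus any component whose centroid lies within a pixel threshold of it:

```python
import numpy as np
from collections import deque

def keep_components_near_largest(mask, max_distance):
    # Label 4-connected components with a simple BFS flood fill.
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start] == 0:
            n += 1
            labels[start] = n
            queue = deque([start])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                            and mask[ny, nx] and labels[ny, nx] == 0):
                        labels[ny, nx] = n
                        queue.append((ny, nx))
    if n == 0:
        return mask
    # Keep the largest component plus components whose centroid is close.
    sizes = [(labels == k).sum() for k in range(1, n + 1)]
    centroids = [np.argwhere(labels == k).mean(axis=0) for k in range(1, n + 1)]
    largest = int(np.argmax(sizes))
    keep = [k + 1 for k in range(n)
            if k == largest
            or np.linalg.norm(centroids[k] - centroids[largest]) <= max_distance]
    return np.isin(labels, keep)
```

The supervision implementation also supports a relative threshold and operates on detection masks; this sketch only shows the absolute-distance case on a single mask.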
  • Added sv.edit_distance for computing the Levenshtein distance between two strings, supporting insert, delete, and substitute operations. (#1912)

    import supervision as sv
    
    sv.edit_distance("hello", "hello")
    # 0
    
    sv.edit_distance("hello world", "helloworld")
    # 1
    
    sv.edit_distance("YOLO", "yolo", case_sensitive=True)
    # 4
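Edit distances like this are typically computed with the Wagner–Fischer dynamic program. A minimal sketch (not supervision's implementation; it is always case-sensitive, whereas the library also takes a case_sensitive flag):

```python
def edit_distance(a, b):
    # Wagner-Fischer DP: prev[j] holds the cost of turning a[:i-1] into b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # delete from a
                            curr[j - 1] + 1,           # insert into a
                            prev[j - 1] + (ca != cb))) # substitute
        prev = curr
    return prev[-1]
```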
  • Added sv.fuzzy_match_index to find the first close match in a list using edit distance. (#1912)

    import supervision as sv
    
    sv.fuzzy_match_index(["cat", "dog", "rat"], "dat", threshold=1)
    # 0
    
    sv.fuzzy_match_index(["alpha", "beta", "gamma"], "bata", threshold=1)
    # 1
    
    sv.fuzzy_match_index(["one", "two", "three"], "ten", threshold=2)
    # None
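Conceptually, fuzzy matching is a linear scan over the candidates using edit distance; a sketch with an inlined distance helper for self-containment (the library's exact threshold semantics may differ from this simple comparison):

```python
def edit_distance(a, b):
    # Wagner-Fischer DP for Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def fuzzy_match_index(options, query, threshold):
    # Return the index of the first option within `threshold` edits.
    for i, option in enumerate(options):
        if edit_distance(option, query) <= threshold:
            return i
    return None
```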
  • Added sv.get_image_resolution_wh as a unified way to read image width and height from NumPy and PIL inputs. (#2014)
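Conceptually this helper just normalizes the two common cases: NumPy arrays store (height, width, ...) in .shape, while PIL images expose .size as (width, height). A hypothetical sketch (the name and edge-case handling are assumptions):

```python
import numpy as np

def resolution_wh(image):
    # NumPy arrays are (H, W[, C]); PIL images expose .size as (W, H).
    if isinstance(image, np.ndarray):
        h, w = image.shape[:2]
        return w, h
    return tuple(image.size)
```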

  • Added sv.tint_image to apply a solid color overlay to an image at a specified opacity. Works with both NumPy and PIL inputs. (#1943)
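Tinting at a given opacity amounts to alpha-blending a solid color over the image. A NumPy-only sketch of the idea (not the library's implementation; channel order and rounding are assumptions):

```python
import numpy as np

def tint_image(image, color, opacity):
    # Alpha-blend a solid color over an HxWx3 uint8 image.
    overlay = np.broadcast_to(np.asarray(color, dtype=float), image.shape)
    blended = (1 - opacity) * image.astype(float) + opacity * overlay
    return np.clip(blended.round(), 0, 255).astype(np.uint8)
```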

  • Added sv.grayscale_image to convert an image to 3-channel grayscale for compatibility with color-based drawing utilities. (#1943)
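The trick here is that the result stays 3-channel, so utilities expecting color images keep working. A sketch of the idea using Rec. 601 luma weights (the library's exact weights and channel-order convention are assumptions):

```python
import numpy as np

def grayscale_image(image):
    # Rec. 601 luma, assuming RGB channel order, replicated to 3
    # channels so color-based drawing utilities still get HxWx3.
    gray = image.astype(float) @ np.array([0.299, 0.587, 0.114])
    gray = np.clip(gray.round(), 0, 255).astype(np.uint8)
    return np.repeat(gray[..., None], 3, axis=2)
```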

  • Added sv.xyxy_to_mask to convert bounding boxes into 2D boolean masks. Each mask corresponds to one bounding box. (#2006)
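The conversion itself is a per-box rectangle fill. A sketch of the idea (the library's signature and its inclusive/exclusive edge handling may differ):

```python
import numpy as np

def xyxy_to_mask(xyxy, resolution_wh):
    # One boolean (H, W) mask per box; pixels inside the box are True.
    w, h = resolution_wh
    masks = np.zeros((len(xyxy), h, w), dtype=bool)
    for i, (x1, y1, x2, y2) in enumerate(np.asarray(xyxy, dtype=int)):
        masks[i, y1:y2, x1:x2] = True
    return masks
```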

🌱 Changed

  • Added Qwen3-VL support in sv.Detections.from_vlm and legacy from_lmm mapping. Use vlm=sv.VLM.QWEN_3_VL. (#2015)

    import supervision as sv
    
    response = """```json
    [
        {"bbox_2d": [220, 102, 341, 206], "label": "taxi"},
        {"bbox_2d": [30, 606, 171, 743], "label": "taxi"},
        {"bbox_2d": [192, 451, 318, 581], "label": "taxi"},
        {"bbox_2d": [358, 908, 506, 1000], "label": "taxi"},
        {"bbox_2d": [735, 359, 873, 480], "label": "taxi"},
        {"bbox_2d": [758, 508, 885, 617], "label": "taxi"},
        {"bbox_2d": [857, 263, 988, 374], "label": "taxi"},
        {"bbox_2d": [735, 243, 838, 351], "label": "taxi"},
        {"bbox_2d": [303, 291, 434, 417], "label": "taxi"},
        {"bbox_2d": [426, 273, 552, 382], "label": "taxi"}
    ]
    ```"""
    
    detections = sv.Detections.from_vlm(
        vlm=sv.VLM.QWEN_3_VL,
        result=response,
        resolution_wh=(1023, 682)
    )
    
    detections.xyxy
    # array([[ 225.06 ,   69.564,  348.843,  140.492],
    #        [  30.69 ,  413.292,  174.933,  506.726],
    #        [ 196.416,  307.582,  325.314,  396.242],
    #        [ 366.234,  619.256,  517.638,  682.   ],
    #        [ 751.905,  244.838,  893.079,  327.36 ],
    #        [ 775.434,  346.456,  905.355,  420.794],
    #        [ 876.711,  179.366, 1010.724,  255.068],
    #        [ 751.905,  165.726,  857.274,  239.382],
    #        [ 309.969,  198.462,  443.982,  284.394],
    #        [ 435.798,  186.186,  564.696,  260.524]])
supervision-0.27.0-promo-from-qwen-3-vl
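The numbers in the example above are consistent with Qwen3-VL emitting boxes on a 0-1000 grid that from_vlm rescales to the target resolution. A sketch of that rescaling (an inference from the example output, not the documented contract):

```python
import numpy as np

def scale_qwen_boxes(bbox_2d, resolution_wh):
    # Map [x1, y1, x2, y2] from a 0-1000 grid to absolute pixels.
    w, h = resolution_wh
    return np.asarray(bbox_2d, dtype=float) * np.array([w, h, w, h]) / 1000.0
```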
  • Added DeepSeek-VL2 support in sv.Detections.from_vlm and legacy from_lmm mapping. Use vlm=sv.VLM.DEEPSEEK_VL_2. (#1884)

  • Improved sv.Detections.from_vlm parsing for Qwen 2.5 VL outputs. The function now handles incomplete or truncated JSON responses. (#2015)
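One common way to tolerate truncated model output (a hypothetical sketch, not necessarily how supervision does it) is to recover every complete object from the partial JSON array and drop the cut-off tail:

```python
import json
import re

def salvage_objects(text):
    # Recover complete, flat {...} objects from a possibly truncated
    # JSON array; an incomplete trailing object is simply dropped.
    objects = []
    for match in re.finditer(r"\{[^{}]*\}", text):
        try:
            objects.append(json.loads(match.group()))
        except json.JSONDecodeError:
            pass
    return objects
```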

  • sv.InferenceSlicer now uses new offset generation logic that removes redundant tiles and ensures clean, border-aligned slicing. This reduces the number of tiles processed, lowering inference time without hurting detection quality. (#2014)

supervision-0.27.0-new-offset-generation-logic.mp4
    import supervision as sv
    from PIL import Image
    from rfdetr import RFDETRMedium

    model = RFDETRMedium()

    def callback(tile):
        # RF-DETR returns an sv.Detections object for each tile.
        return model.predict(tile)

    slicer = sv.InferenceSlicer(callback, slice_wh=(512, 512), overlap_wh=(128, 128))

    image = Image.open("example.png")
    detections = slicer(image)
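One plausible version of border-aligned offset generation (a sketch of the idea, not the shipped implementation) steps through one dimension with stride slice - overlap and clamps the final tile to the image edge instead of emitting an extra, mostly-overlapping tile:

```python
def tile_offsets(length, tile, overlap):
    # 1-D tile start offsets: fixed stride, last tile clamped to the border.
    stride = tile - overlap
    if tile >= length:
        return [0]
    offsets = list(range(0, length - tile + 1, stride))
    if offsets[-1] != length - tile:
        offsets.append(length - tile)  # border-aligned final tile
    return offsets
```

Applied per axis, every tile stays fully inside the image and no redundant full-overlap tile is generated at the border.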
  • sv.Detections now includes a box_aspect_ratio property for vectorized aspect ratio computation. Use it to filter detections by box shape. (#2016)

    import numpy as np
    import supervision as sv

    xyxy = np.array([
        [10, 10, 50, 50],
        [60, 10, 180, 50],
        [10, 60, 50, 180],
    ])

    detections = sv.Detections(xyxy=xyxy)

    ar = detections.box_aspect_ratio
    # array([1.0, 3.0, 0.33333333])

    detections[(ar < 2.0) & (ar > 0.5)].xyxy
    # array([[10., 10., 50., 50.]])
  • Improved the performance of sv.box_iou_batch. Processing runs about 2x to 5x faster. (#2001)
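For reference, the pairwise IoU computation this speeds up can be written fully vectorized with NumPy broadcasting; a minimal sketch (not the optimized implementation shipped in the release):

```python
import numpy as np

def box_iou_batch(a, b):
    # Pairwise IoU: a is (N, 4), b is (M, 4) in xyxy; result is (N, M).
    tl = np.maximum(a[:, None, :2], b[None, :, :2])   # intersection top-left
    br = np.minimum(a[:, None, 2:], b[None, :, 2:])   # intersection bottom-right
    inter = np.prod(np.clip(br - tl, 0, None), axis=2)
    area_a = np.prod(a[:, 2:] - a[:, :2], axis=1)
    area_b = np.prod(b[:, 2:] - b[:, :2], axis=1)
    return inter / (area_a[:, None] + area_b[None, :] - inter)
```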

  • sv.process_video now uses a threaded reader, processor, and writer pipeline. This removes I/O stalls and improves throughput while keeping the callback single-threaded and safe for stateful models. (#1997)
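The pipeline shape described above can be sketched with bounded queues and sentinel values (a simplified illustration, not the library's code; real frames would come from a video reader):

```python
import queue
import threading

def pipeline(frames, callback):
    # Reader -> processor -> writer over bounded queues; only the
    # processor thread runs the callback, so stateful models stay safe.
    to_process = queue.Queue(maxsize=8)
    to_write = queue.Queue(maxsize=8)
    results = []

    def reader():
        for frame in frames:
            to_process.put(frame)
        to_process.put(None)  # sentinel: no more frames

    def processor():
        while (frame := to_process.get()) is not None:
            to_write.put(callback(frame))
        to_write.put(None)

    def writer():
        while (result := to_write.get()) is not None:
            results.append(result)

    threads = [threading.Thread(target=fn) for fn in (reader, processor, writer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The bounded queues provide backpressure: a slow writer eventually blocks the reader instead of letting frames pile up in memory.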

  • sv.denormalize_boxes now supports batch conversion of bounding boxes, accepting arrays of shape (N, 4) and returning a batch of absolute pixel coordinates.
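The batched conversion amounts to a single broadcasted multiply; a sketch assuming normalized xyxy coordinates in [0, 1] (the library's actual signature may differ):

```python
import numpy as np

def denormalize_boxes(xyxy, resolution_wh):
    # (N, 4) normalized [x1, y1, x2, y2] -> absolute pixel coordinates.
    w, h = resolution_wh
    return np.asarray(xyxy, dtype=float) * np.array([w, h, w, h])
```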

  • sv.LabelAnnotator and sv.RichLabelAnnotator now accept text_offset=(x, y) to shift the label relative to text_position. Works with smart label positioning and line wrapping. (#1917)

❌ Removed

  • Removed the deprecated overlap_ratio_wh argument from sv.InferenceSlicer. Use the pixel-based overlap_wh argument to control slice overlap. (#2014)

Tip

Convert your old ratio-based overlap to pixel-based overlap by multiplying each ratio by the corresponding slice dimension.

# before

slice_wh = (640, 640)
overlap_ratio_wh = (0.25, 0.25)

slicer = sv.InferenceSlicer(
    callback=callback,
    slice_wh=slice_wh,
    overlap_ratio_wh=overlap_ratio_wh,
    overlap_filter=sv.OverlapFilter.NON_MAX_SUPPRESSION,
)

# after

overlap_wh = (
    int(overlap_ratio_wh[0] * slice_wh[0]),
    int(overlap_ratio_wh[1] * slice_wh[1]),
)

slicer = sv.InferenceSlicer(
    callback=callback,
    slice_wh=slice_wh,
    overlap_wh=overlap_wh,
    overlap_filter=sv.OverlapFilter.NON_MAX_SUPPRESSION,
)

🏆 Contributors

@SkalskiP (Piotr Skalski), @onuralpszr (Onuralp SEZER), @soumik12345 (Soumik Rakshit), @rcvsq, @AlexBodner (Alex Bodner), @Ashp116, @kshitijaucharmal (Kshitij Aucharmal), @ernestlwt, @AnonymDevOSS, @jackiehimel (Jackie Himel), @dominikWin (Dominik Winecki)
