Data Usage

CODA is split into two subsets: the CODA Base set, consisting of 309 scenes from KITTI, 134 scenes from nuScenes, and 1057 scenes from ONCE; and the CODA2022 set, consisting of 9768 camera images collected from SODA10M and ONCE (including the 1057 ONCE images from CODA Base), which was further used to host the ECCV2022 2nd SSLAD challenge.

CODA Base Val

The exact subset used in our ECCV2022 paper. Annotations are stored in "base/corner_case.json" in COCO format. Due to license issues, only the corner-case annotations and the corresponding sample indices/tokens of the original datasets are provided for this subset, in "base/kitti_indices.json" and "base/nuscenes_sample_tokens.json".

CODA2022

CODA2022 consists of 80180 annotated objects spanning 43 object categories, with the first 7 categories (pedestrian, cyclist, car, truck, tram, tricycle, bus) regarded as common categories and the rest considered novel categories. Common-category objects are fully annotated, while for novel-category objects, only those that obstruct or have the potential to obstruct the road are annotated. CODA2022 is further separated into a validation set and a test set, each containing 4884 images. The validation set covers only 29 of the 43 categories, while the test set covers all 43, simulating the real-world scenario where brand-new categories are encountered after deployment. Check the CODA2022 Evaluator for more details.
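The common/novel split above can be applied programmatically by partitioning annotations on category id. A minimal sketch, assuming the 7 common categories occupy ids 1-7 in the order listed (this ordering is an assumption; verify it against the "categories" list in the actual annotation file):

```python
# Split COCO-style annotation dicts into common vs. novel categories.
# ASSUMPTION: common categories are ids 1-7 (pedestrian, cyclist, car,
# truck, tram, tricycle, bus); check the real "categories" entries.
COMMON_IDS = set(range(1, 8))

def split_annotations(annotations):
    """Partition annotations by whether their category id is common."""
    common = [a for a in annotations if a["category_id"] in COMMON_IDS]
    novel = [a for a in annotations if a["category_id"] not in COMMON_IDS]
    return common, novel

# Toy annotations using the documented fields (category id 12 is a
# hypothetical novel category for illustration only):
anns = [
    {"id": 1, "image_id": 0, "category_id": 3, "bbox": [0, 0, 10, 10]},
    {"id": 2, "image_id": 0, "category_id": 12, "bbox": [5, 5, 4, 4]},
]
common, novel = split_annotations(anns)
```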

CODA-LM

CODA-LM is an image-text multi-modal dataset built on CODA2022 for the systematic evaluation of LVLMs on road corner cases. It comprises three distinct tasks: general perception, regional perception, and driving suggestions. Check the CODA-LM page for more details.

Data Format

The annotation file is consistent with the COCO format and contains three keys: "images", "categories", and "annotations".

"images": {
        "file_name": <str>        -- File name.
        "id": <int>               -- Unique image id.
        "height": <float>         -- Height of the image.
        "width": <float>          -- Width of the image.
        "period": <str>           -- Period tag.
        "weather": <str>          -- Weather tag.
}
"annotations": {
        "image_id": <int>         -- The image id for this annotation.
        "category_id": <int>      -- The category id.
        "bbox": <list>            -- Coordinate of boundingbox [x, y, w, h].
        "area": <float>           -- Area of this annotation (w * h).
        "id": <int>               -- Unique annotation id.
        "iscrowd": <int>          -- Whether this annotation is crowd.
        "corner_case": <bool>     -- Whether this annotation is a corner case.       
}
"categories": {
        "name": <str>             -- Unique category name.
        "id": <int>               -- Unique category id.
        "supercategory": <str>    -- The supercategory for this category.
}

Data Annotation

CODA provides image domain tags (i.e., periods and weather conditions) and class-labeled 2D bounding boxes for all images.

Semantic Labels

CODA annotations can be grouped into 7 super-categories, namely pedestrian, cyclist, vehicle, animal, traffic facility, obstruction, and misc, which are further divided into 43 fine-grained categories. Moreover, these categories can also be divided into two collections: 1) instances of novel classes and 2) novel instances of common classes. As the names suggest, common classes are object categories annotated by existing autonomous driving benchmarks, such as cars and pedestrians, whereas novel classes are the opposite, such as dogs and strollers.
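The super-category grouping maps directly onto the "supercategory" field of the "categories" entries. A small sketch (the specific category names and ids below are illustrative, not taken from the dataset):

```python
from collections import defaultdict

def group_by_supercategory(categories):
    """Group COCO-style category entries by their supercategory."""
    groups = defaultdict(list)
    for cat in categories:
        groups[cat["supercategory"]].append(cat["name"])
    return dict(groups)

# Hypothetical category entries for illustration; real ids and the full
# 43-category list live in the annotation file.
cats = [
    {"name": "car", "id": 3, "supercategory": "vehicle"},
    {"name": "truck", "id": 4, "supercategory": "vehicle"},
    {"name": "dog", "id": 30, "supercategory": "animal"},
]
groups = group_by_supercategory(cats)
```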

Domain Tags

CODA also provides domain tags for all images, covering periods and weather conditions. Specifically, each period tag is annotated as either day or night, and each weather-condition tag is selected from sunny, cloudy, and rainy. We hope the image domain tags help researchers investigate the underlying causes of corner cases toward reliable object detection.
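Because the tags are stored per image, slicing the dataset by domain is a simple filter over the "images" list. A minimal sketch, assuming the "period" and "weather" fields documented above:

```python
def filter_images(images, period=None, weather=None):
    """Select image entries whose domain tags match the given values.

    A None argument means "any value" for that tag.
    """
    return [img for img in images
            if (period is None or img["period"] == period)
            and (weather is None or img["weather"] == weather)]

# Toy image entries with the documented tag values:
imgs = [
    {"file_name": "a.jpg", "period": "day", "weather": "sunny"},
    {"file_name": "b.jpg", "period": "night", "weather": "rainy"},
]
night_rainy = filter_images(imgs, period="night", weather="rainy")
```

Such per-domain subsets make it easy to compare, e.g., day versus night detection performance on corner cases.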