Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adaptation
CVPR 2024

1South China University of Technology    2Institute for Infocomm Research, A*STAR
3School of Data Science, The Chinese University of Hong Kong, Shenzhen
Paper
Code
Abstract

The success of large language models has inspired the computer vision community to explore image segmentation foundation models that are able to zero/few-shot generalize through prompt engineering. Segment Anything (SAM), among others, is the state-of-the-art image segmentation foundation model demonstrating strong zero/few-shot generalization. Despite this success, recent studies reveal the weakness of SAM under strong distribution shift. In particular, SAM performs poorly on corrupted natural images, camouflaged images, medical images, etc. Motivated by these observations, we aim to develop a self-training based strategy to adapt SAM to the target distribution. Given the unique challenges of a large source dataset, high computation cost, and incorrect pseudo labels, we propose a weakly supervised self-training architecture with anchor regularization and low-rank finetuning to improve the robustness and computational efficiency of adaptation. We validate the effectiveness on 5 types of downstream segmentation tasks, including natural clean/corrupted images, medical images, camouflaged images, and robotic images. Our proposed method is task-agnostic in nature and outperforms pre-trained SAM and state-of-the-art domain adaptation methods on almost all downstream tasks with the same testing prompt inputs.
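The low-rank finetuning mentioned above can be illustrated with a minimal numpy sketch. This is an illustrative simplification, not the paper's implementation: a frozen weight matrix `W` is augmented with a trainable rank-`r` update `B @ A`, so only a small fraction of parameters is touched during adaptation.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Linear layer with a frozen weight W plus a trainable low-rank update B @ A."""
    return x @ (W + alpha * (B @ A)).T

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4
W = rng.standard_normal((d_out, d_in))   # frozen pre-trained weight
A = rng.standard_normal((r, d_in))       # trainable factor
B = np.zeros((d_out, r))                 # zero-initialized, so adaptation starts from W
x = rng.standard_normal((2, d_in))

# At initialization the adapted layer reproduces the frozen model exactly,
# and the trainable update has far fewer parameters than W itself.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
assert A.size + B.size < W.size
```

Because only `A` and `B` receive gradients, the memory and compute cost of adapting a large image encoder stays small.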
Contributions & Method

We summarize the contributions of this work as follows:
  • We are motivated by the generalization issue of the Segment Anything (SAM) model on diverse downstream segmentation tasks and propose a task-agnostic solution to adapt SAM through self-training with no access to the source dataset.
  • We exploit weak supervisions, including bounding boxes, point-wise annotations, and coarse segmentation masks, to improve the effectiveness of adaptation. These weak supervisions are fully compatible with the prompt encoder of SAM.
  • Extensive experiments on 5 types of downstream instance segmentation tasks demonstrate the effectiveness of the proposed weakly supervised adaptation approach.
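As a concrete illustration of how weak supervision maps onto SAM's prompt encoder, the sketch below derives a box prompt and a point prompt from a coarse binary mask (the helper name `mask_to_prompts` is ours for illustration, not from the paper's code):

```python
import numpy as np

def mask_to_prompts(mask):
    """Turn a coarse binary mask into SAM-style prompts:
    an XYXY bounding box and a single foreground point (the centroid)."""
    ys, xs = np.nonzero(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    point = (int(xs.mean()), int(ys.mean()))
    return box, point

coarse = np.zeros((10, 10), dtype=bool)
coarse[2:6, 3:8] = True                  # a rough object region
box, point = mask_to_prompts(coarse)
print(box, point)                        # (3, 2, 7, 5) (5, 3)
```

A ground-truth box or click annotation can be fed to the prompt encoder directly in the same formats, which is why no architectural change to SAM is needed.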

The proposed self-training architecture with anchor network regularization and contrastive loss regularization. Red arrows indicate the backpropagation flow.
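A minimal numpy sketch of the adaptation objective depicted above (our simplification, not the released code): the student is trained on hard pseudo labels from a teacher, while an anchor term penalizes drift from the frozen source model's predictions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adaptation_loss(student_logits, teacher_logits, anchor_logits, lam=0.5):
    """Self-training with anchor regularization (illustrative):
    binary cross-entropy against the teacher's hard pseudo mask, plus an
    L2 term tying the student's probabilities to the frozen anchor network."""
    pseudo = (sigmoid(teacher_logits) > 0.5).astype(float)  # hard pseudo label
    p = sigmoid(student_logits)
    eps = 1e-7
    bce = -np.mean(pseudo * np.log(p + eps) + (1.0 - pseudo) * np.log(1.0 - p + eps))
    anchor = np.mean((p - sigmoid(anchor_logits)) ** 2)     # anchor regularization
    return bce + lam * anchor

teacher = np.array([3.0, -2.0, 1.5, -0.5])   # per-pixel mask logits (toy example)
anchor = np.array([2.5, -1.5, 1.0, -1.0])    # frozen source-model logits
# A student that agrees with the teacher incurs a lower loss than one that disagrees.
assert adaptation_loss(teacher, teacher, anchor) < adaptation_loss(-teacher, teacher, anchor)
```

The anchor term keeps the student from collapsing onto its own noisy pseudo labels, which is the main failure mode of plain self-training under distribution shift.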

Quantitative Evaluations

Datasets used in this work.

| Category | Datasets |
|---|---|
| Natural Images | COCO, Pascal VOC |
| Corrupted Images | COCO-C |
| Medical Images | kvasir-SEG, ISIC |
| Camouflaged Objects | CHAMELEON, CAMO, COD10K |
| Robotic Images | OCID, OSD |

Table 1: Adaptation results on COCO-C dataset using bounding box prompt.

| Method | Brit | Contr | Defoc | Elast | Fog | Frost | Gauss | Glass | Impul | Jpeg | Motn | Pixel | Shot | Snow | Zoom | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Direct | 72.83 | 57.34 | 64.47 | 69.36 | 72.39 | 70.50 | 67.20 | 64.43 | 67.65 | 68.23 | 62.72 | 68.60 | 67.44 | 69.02 | 58.80 | 66.73 |
| TENT | 76.02 | 61.51 | 67.48 | 70.88 | 74.89 | 73.88 | 69.01 | 67.10 | 69.28 | 70.25 | 65.45 | 70.81 | 69.96 | 72.37 | 62.59 | 69.43 |
| SHOT | 73.84 | 59.09 | 65.91 | 69.57 | 73.98 | 72.51 | 68.30 | 66.09 | 68.61 | 69.45 | 64.56 | 70.48 | 68.77 | 71.03 | 60.17 | 68.16 |
| Soft Teacher | 73.90 | 62.12 | 65.41 | 71.32 | 72.16 | 73.27 | 68.84 | 67.49 | 68.73 | 70.18 | 66.88 | 69.79 | 70.08 | 73.33 | 64.88 | 69.23 |
| TRIBE | 76.40 | 60.86 | 66.19 | 72.72 | 75.08 | 75.14 | 70.34 | 66.66 | 70.83 | 72.42 | 65.94 | 70.24 | 70.66 | 74.22 | 64.56 | 70.15 |
| DePT | 69.15 | 57.26 | 59.08 | 66.80 | 58.73 | 66.75 | 66.78 | 62.74 | 65.65 | 66.39 | 61.66 | 66.65 | 67.57 | 66.62 | 58.21 | 64.42 |
| WDASS | 76.21 | 60.57 | 67.07 | 72.34 | 75.97 | 74.63 | 69.84 | 67.88 | 69.92 | 71.36 | 66.25 | 71.99 | 70.32 | 72.25 | 63.61 | 70.01 |
| OURS | 78.50 | 61.05 | 66.99 | 73.93 | 77.09 | 76.10 | 72.02 | 68.21 | 71.29 | 72.77 | 66.33 | 70.90 | 70.28 | 75.07 | 65.33 | 71.05 |
| Supervised | 78.86 | 74.81 | 72.04 | 74.32 | 78.01 | 77.14 | 73.43 | 72.12 | 74.08 | 75.30 | 71.39 | 75.15 | 74.25 | 76.34 | 68.04 | 74.35 |

Table 2: Adaptation results on natural clean image datasets.

| Method | COCO 2017 box | COCO 2017 point | COCO 2017 poly | Pascal VOC box | Pascal VOC point | Pascal VOC poly |
|---|---|---|---|---|---|---|
| Direct | 74.29 | 55.06 | 65.64 | 69.21 | 69.21 | 60.79 |
| TENT | 78.21 | 52.99 | 71.51 | 80.24 | 74.97 | 65.03 |
| SHOT | 75.18 | 58.46 | 69.26 | 79.80 | 74.26 | 63.38 |
| Soft Teacher | 75.94 | 43.36 | 68.27 | 72.93 | 56.09 | 62.20 |
| TRIBE | 77.56 | 49.56 | 70.99 | 78.87 | 69.21 | 65.39 |
| DePT | 71.00 | 37.35 | 63.27 | 74.09 | 42.99 | 59.94 |
| WDASS | 77.29 | 60.55 | 70.19 | 80.12 | 76.15 | 66.98 |
| OURS | 80.12 | 62.09 | 72.33 | 80.27 | 74.15 | 66.72 |
| Supervised | 81.50 | 69.77 | 73.39 | 81.23 | 76.98 | 71.32 |

Table 3: Adaptation results on medical image segmentation datasets.

| Method | kvasir-SEG box | kvasir-SEG point | kvasir-SEG poly | ISIC box | ISIC point | ISIC poly |
|---|---|---|---|---|---|---|
| Direct | 81.59 | 62.30 | 54.03 | 66.74 | 53.42 | 62.82 |
| TENT | 82.47 | 61.84 | 62.97 | 71.76 | 53.46 | 67.12 |
| SHOT | 82.30 | 63.76 | 61.34 | 71.99 | 55.99 | 66.86 |
| Soft Teacher | 84.12 | 73.53 | 58.15 | 75.74 | 54.95 | 72.29 |
| TRIBE | 85.05 | 73.03 | 64.61 | 72.61 | 50.36 | 67.99 |
| DePT | 81.91 | 52.06 | 61.55 | 78.43 | 46.79 | 72.75 |
| WDASS | 84.01 | 63.78 | 64.78 | 74.23 | 55.63 | 67.84 |
| OURS | 85.47 | 75.23 | 67.40 | 80.01 | 62.12 | 75.36 |
| Supervised | 85.89 | 77.54 | 81.64 | 81.62 | 79.81 | 80.26 |

Table 4: Adaptation results on camouflaged object datasets.

| Method | CHAMELEON box | CHAMELEON point | CHAMELEON poly | CAMO box | CAMO point | CAMO poly | COD10K box | COD10K point | COD10K poly |
|---|---|---|---|---|---|---|---|---|---|
| Direct | 51.32 | 39.37 | 45.78 | 62.72 | 57.43 | 50.85 | 66.32 | 63.61 | 40.04 |
| TENT | 65.48 | 54.53 | 53.06 | 71.24 | 59.59 | 60.29 | 69.36 | 61.94 | 43.36 |
| SHOT | 68.60 | 62.47 | 54.36 | 71.61 | 62.78 | 58.72 | 69.09 | 65.25 | 42.38 |
| Soft Teacher | 65.92 | 44.17 | 46.72 | 62.30 | 48.64 | 51.26 | 66.32 | 50.04 | 32.27 |
| TRIBE | 71.00 | 52.80 | 54.99 | 66.00 | 61.97 | 60.54 | 67.84 | 63.62 | 42.75 |
| DePT | 54.48 | 33.46 | 42.47 | 55.44 | 33.07 | 48.63 | 59.32 | 34.06 | 35.51 |
| WDASS | 71.91 | 62.40 | 56.80 | 71.25 | 63.39 | 62.29 | 71.42 | 65.61 | 43.93 |
| OURS | 75.94 | 74.00 | 66.83 | 73.42 | 65.55 | 62.90 | 71.93 | 70.55 | 45.87 |
| Supervised | 78.05 | 85.86 | 68.38 | 79.17 | 77.01 | 67.12 | 78.06 | 78.44 | 64.90 |

Table 5: Adaptation results on robotic image datasets.

| Method | OCID box | OCID point | OCID poly | OSD box | OSD point | OSD poly |
|---|---|---|---|---|---|---|
| Direct | 86.35 | 71.41 | 72.81 | 87.62 | 78.86 | 80.77 |
| TENT | 87.77 | 66.61 | 77.53 | 88.10 | 80.53 | 87.85 |
| SHOT | 88.06 | 74.39 | 76.25 | 88.09 | 80.52 | 87.86 |
| Soft Teacher | 84.98 | 68.46 | 73.75 | 90.41 | 80.49 | 87.00 |
| TRIBE | 86.77 | 67.86 | 76.50 | 90.42 | 80.54 | 87.84 |
| DePT | 82.00 | 56.52 | 70.92 | 81.84 | 69.06 | 82.50 |
| WDASS | 87.68 | 77.13 | 76.70 | 88.07 | 80.52 | 88.19 |
| OURS | 88.09 | 80.14 | 77.41 | 92.11 | 80.51 | 89.72 |
| Supervised | 91.24 | 89.22 | 79.23 | 92.14 | 82.41 | 90.83 |

Ablation studies of the proposed weakly supervised adaptation method on the COCO dataset.

| Weak Sup. | Self-Train. | Anchor | Cont. Loss | box | point | poly |
|---|---|---|---|---|---|---|
| original SAM | | | | 74.29 | 56.36 | 65.42 |
| | | | | 58.88 | 32.51 | 55.03 |
| | | | | 79.65 | 57.25 | 70.49 |
| | | | | 62.95 | 22.87 | 56.91 |
| | | | | 80.12 | 62.09 | 72.33 |
| | | | | 76.18 | 47.63 | 70.44 |
Qualitative Evaluations






Citation

  
    @article{zhang2023improving,
        title={Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adaptation},
        author={Zhang, Haojie and Su, Yongyi and Xu, Xun and Jia, Kui},
        journal={arXiv preprint arXiv:2312.03502},
        year={2023}
    }