Verified Models

  • The models listed below have been confirmed to run inference with iinfer.

  • Pre-trained models obtained from the sites listed below must be used in accordance with each distributor's license.

Segmentation

| base | frameWork | input | config | model | predict_type |
|------|-----------|-------|--------|-------|--------------|
| PSPNet_r18(mmseg) | mmsegmentation | 512x1024 | pspnet_r18-d8_4xb2-80k_cityscapes-512x1024.py | pspnet_r18-d8 | mmseg_seg_PSPNet |
| PSPNet_r50(mmseg) | mmsegmentation | 512x1024 | pspnet_r50-d8_4xb2-80k_cityscapes-512x1024.py | pspnet_r50-d8 | mmseg_seg_PSPNet |
| PSPNet_r101(mmseg) | mmsegmentation | 512x1024 | pspnet_r101-d8_4xb2-80k_cityscapes-512x1024.py | pspnet_r101-d8 | mmseg_seg_PSPNet |
| Swin-T(mmseg) | mmsegmentation | 512x512 | swin-tiny-patch4-window7-in1k-pre_upernet_8xb2-160k_ade20k-512x512.py | upernet_swin_tiny | mmseg_seg_SwinUpernet |
| Swin-S(mmseg) | mmsegmentation | 512x512 | swin-small-patch4-window7-in1k-pre_upernet_8xb2-160k_ade20k-512x512.py | upernet_swin_small | mmseg_seg_SwinUpernet |
| Swin-B(mmseg) | mmsegmentation | 512x512 | swin-base-patch4-window12-in22k-384x384-pre_upernet_8xb2-160k_ade20k-512x512.py | upernet_swin_base | mmseg_seg_SwinUpernet |
| SAN(mmseg) | mmsegmentation | 640x640 | san-vit-b16_coco-stuff164k-640x640.py | san-vit-b16 | mmseg_seg_San |
| SAN(mmseg) | mmsegmentation | 640x640 | san-vit-l14_coco-stuff164k-640x640.py | san-vit-l14 | mmseg_seg_San |

Object Detection

| base | frameWork | input | config | model | predict_type |
|------|-----------|-------|--------|-------|--------------|
| YOLOX(mmdet) | mmdetection | 416x416 | yolox_tiny_8xb8-300e_coco.py | YOLOX-tiny | mmdet_det_YoloX_Lite |
| YOLOX(mmdet) | mmdetection | 640x640 | yolox_s_8xb8-300e_coco.py | YOLOX-s | mmdet_det_YoloX |
| YOLOX(mmdet) | mmdetection | 640x640 | yolox_l_8xb8-300e_coco.py | YOLOX-l | mmdet_det_YoloX |
| YOLOX(mmdet) | mmdetection | 640x640 | yolox_x_8xb8-300e_coco.py | YOLOX-x | mmdet_det_YoloX |
| YOLOX | onnx | 416x416 | yolox_nano.py | ONNX-YOLOX-Nano※1 | onnx_det_YoloX_Lite |
| YOLOX | onnx | 416x416 | yolox_tiny.py | ONNX-YOLOX-Tiny※1 | onnx_det_YoloX_Lite |
| YOLOX | onnx | 640x640 | yolox_s.py | ONNX-YOLOX-s※1 | onnx_det_YoloX |
| YOLOX | onnx | 640x640 | yolox_m.py | ONNX-YOLOX-m※1 | onnx_det_YoloX |
| YOLOX | onnx | 640x640 | yolox_l.py | ONNX-YOLOX-l※1 | onnx_det_YoloX |
| YOLOX | onnx | 640x640 | yolox_x.py | ONNX-YOLOX-x※1 | onnx_det_YoloX |
| YOLOv3 | onnx | 416x416 | | ONNX-YOLOv3-10 | onnx_det_YoloV3 |
| YOLOv3 | onnx | 416x416 | | ONNX-YOLOv3-12 | onnx_det_YoloV3 |
| YOLOv3 | onnx | 416x416 | | ONNX-YOLOv3-12-int8 | onnx_det_YoloV3 |
| TinyYOLOv3 | onnx | 416x416 | | ONNX-TinyYOLOv3 | onnx_det_TinyYoloV3 |

  • ※1 : Converted to ONNX format with pth2onnx before use.
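The ONNX detection models above each expect a fixed input size (416x416 or 640x640), so images must be resized before inference. As a minimal sketch of the usual YOLOX-style letterbox preprocessing (not part of iinfer's own API; the pad value 114 is YOLOX's convention, and the pure-NumPy nearest-neighbor resize here stands in for a real image library):

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 416, pad_value: int = 114) -> np.ndarray:
    """Resize keeping aspect ratio, then pad to a size x size canvas.

    Uses a nearest-neighbor resize written in pure NumPy so the sketch
    has no dependencies beyond NumPy itself.
    """
    h, w = img.shape[:2]
    scale = min(size / h, size / w)
    nh, nw = int(h * scale), int(w * scale)
    # Nearest-neighbor index maps back into the source image.
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # Paste the resized image into the top-left of a padded canvas.
    canvas = np.full((size, size, img.shape[2]), pad_value, dtype=img.dtype)
    canvas[:nh, :nw] = resized
    return canvas

# Example: a 480x640 frame letterboxed for a 416x416 model.
out = letterbox(np.zeros((480, 640, 3), dtype=np.uint8), size=416)
print(out.shape)  # (416, 416, 3)
```

The same function serves the 640x640 models by passing `size=640`.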

Image Classification

| base | frameWork | input | config | model | predict_type |
|------|-----------|-------|--------|-------|--------------|
| Swin Transformer | mmpretrain | 224x224 | swin-tiny_16xb64_in1k.py | swin-tiny_16xb64_in1k | mmpretrain_cls_swin_Lite |
| Swin Transformer | mmpretrain | 224x224 | swin-small_16xb64_in1k.py | swin-small_16xb64_in1k | mmpretrain_cls_swin_Lite |
| Swin Transformer | mmpretrain | 384x384 | swin-base_16xb64_in1k-384px.py | swin-base_16xb64_in1k-384px | mmpretrain_cls_swin |
| Swin Transformer | mmpretrain | 384x384 | swin-large_16xb64_in1k-384px.py | swin-large_16xb64_in1k-384px | mmpretrain_cls_swin |
| EfficientNet-Lite4 | onnx | 224x224 | | EfficientNet-Lite4-11 | onnx_cls_EfficientNet_Lite4 |
| EfficientNet-Lite4 | onnx | 224x224 | | EfficientNet-Lite4-11-int8 | onnx_cls_EfficientNet_Lite4 |
| EfficientNet-Lite4 | onnx | 224x224 | | EfficientNet-Lite4-11-qdq | onnx_cls_EfficientNet_Lite4 |

Face Detection and Recognition

| base | frameWork | input | config | model | predict_type |
|------|-----------|-------|--------|-------|--------------|
| insightface | insightface | 640x640 | | antelopev2 | insightface_det |
| insightface | insightface | 640x640 | | buffalo_l | insightface_det |
| insightface | insightface | 640x640 | | buffalo_m | insightface_det |
| insightface | insightface | 640x640 | | buffalo_s | insightface_det |
| insightface | insightface | 640x640 | | buffalo_sc | insightface_det |
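The insightface model packs above bundle detection and recognition models; recognition typically outputs a 512-dimensional embedding per face, and two faces are compared by the cosine similarity of their embeddings. A minimal NumPy sketch of that comparison step (this is the standard metric, not iinfer's own API):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings.

    Normalizes both vectors, then takes the dot product, giving a
    value in [-1, 1]; higher means the faces are more alike.
    """
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

# Example with dummy 512-d embeddings (the usual insightface length).
rng = np.random.default_rng(0)
emb1 = rng.standard_normal(512)
emb2 = rng.standard_normal(512)
score = cosine_similarity(emb1, emb2)
print(f"similarity: {score:.3f}")
```

In practice a threshold on this score (tuned per model pack) decides whether two faces belong to the same person.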