Paper: https://arxiv.org/abs/2206.02424

Contents

Background

I. Overview

II. Detailed Tutorial: Improving YOLOv8

III. Verifying the Modification

Full Code

Launch Command


Background

Object detection is an important downstream task in computer vision. On vehicle-mounted edge computing platforms, large models struggle to meet real-time detection requirements, while lightweight models built from many depth-wise separable convolution layers cannot reach sufficient accuracy. This post introduces GSConv, a new lightweight convolution technique that reduces model weight while preserving accuracy, achieving an excellent trade-off between accuracy and speed. The authors also propose a design paradigm, the slim neck, to give detectors higher computational cost-effectiveness. The effectiveness of the method is demonstrated in more than twenty sets of comparative experiments. In particular, detectors improved with this method obtain state-of-the-art results compared with the original detectors (e.g., 70.9% mAP@0.5 on SODA10M at about 100 FPS on a Tesla T4 GPU).

I. Overview

Slim-Neck structure diagram

GSConv structure diagram

The main contributions can be summarized in three points:

1. A new lightweight convolution method, GSConv, is introduced. It brings the output of the convolution as close as possible to that of standard convolution (SC) while lowering the computational cost;

2. A design paradigm for autonomous-vehicle detector architectures is provided: a slim neck combined with a standard backbone;

3. The effectiveness of the different methods is validated extensively.

Experimental results:

Compared with the original networks, the improved detectors obtain the best detection results (see the comparison tables in the paper).

II. Detailed Tutorial: Improving YOLOv8

1. Modify the yaml file (ultralytics\cfg\models\v8\yolov8.yaml)

# Ultralytics YOLO, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, GSConv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, VoVGSCSP, [512]]  # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, VoVGSCSP, [256]]  # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, VoVGSCSP, [512]]  # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, VoVGSCSP, [1024]]  # 21 (P5/32-large)
  - [[15, 18, 21], 1, Detect, [nc]]  # Detect(P3, P4, P5)

Multiple GSConv modules can be added in the backbone section, depending on what your experiments require. Note that inserting extra modules shifts the index of every subsequent layer, so the layer numbers referenced later in the file (the Concat entries such as [[-1, 6], ...] and the Detect entry [[15, 18, 21], ...]) must be updated to match. The same applies to the head section.

2. Create a new file SlimNeck.py (placed under ultralytics\nn\ so it matches the import in step 3)

import math

import torch
import torch.nn as nn

from ultralytics.nn.modules import Conv


class GSConv(nn.Module):
    # GSConv https://github.com/AlanLi1997/slim-neck-by-gsconv
    def __init__(self, c1, c2, k=1, s=1, g=1, act=True):
        super().__init__()
        c_ = c2 // 2
        self.cv1 = Conv(c1, c_, k, s, None, g, 1, act)
        self.cv2 = Conv(c_, c_, 5, 1, None, c_, 1, act)  # depth-wise 5x5 on the first half

    def forward(self, x):
        x1 = self.cv1(x)
        x2 = torch.cat((x1, self.cv2(x1)), 1)
        # shuffle
        # y = x2.reshape(x2.shape[0], 2, x2.shape[1] // 2, x2.shape[2], x2.shape[3])
        # y = y.permute(0, 2, 1, 3, 4)
        # return y.reshape(y.shape[0], -1, y.shape[3], y.shape[4])
        b, n, h, w = x2.size()
        b_n = b * n // 2
        y = x2.reshape(b_n, 2, h * w)
        y = y.permute(1, 0, 2)
        y = y.reshape(2, -1, n // 2, h, w)
        return torch.cat((y[0], y[1]), 1)


class GSConvns(GSConv):
    # GSConv with a normative-shuffle https://github.com/AlanLi1997/slim-neck-by-gsconv
    def __init__(self, c1, c2, k=1, s=1, g=1, act=True):
        super().__init__(c1, c2, k, s, g, act)
        c_ = c2 // 2
        self.shuf = nn.Conv2d(c_ * 2, c2, 1, 1, 0, bias=False)

    def forward(self, x):
        x1 = self.cv1(x)
        x2 = torch.cat((x1, self.cv2(x1)), 1)
        # normative-shuffle, TRT supported
        return nn.ReLU()(self.shuf(x2))


class GSBottleneck(nn.Module):
    # GS Bottleneck https://github.com/AlanLi1997/slim-neck-by-gsconv
    def __init__(self, c1, c2, k=3, s=1, e=0.5):
        super().__init__()
        c_ = int(c2 * e)
        # for lighting
        self.conv_lighting = nn.Sequential(
            GSConv(c1, c_, 1, 1),
            GSConv(c_, c2, 3, 1, act=False))
        self.shortcut = Conv(c1, c2, 1, 1, act=False)

    def forward(self, x):
        return self.conv_lighting(x) + self.shortcut(x)


class DWConv(Conv):
    # Depth-wise convolution class
    def __init__(self, c1, c2, k=1, s=1, act=True):  # ch_in, ch_out, kernel, stride
        super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), act=act)


class GSBottleneckC(GSBottleneck):
    # cheap GS Bottleneck (as defined in the upstream slim-neck repo); added here because VoVGSCSPC below uses it
    def __init__(self, c1, c2, k=3, s=1):
        super().__init__(c1, c2, k, s)
        self.shortcut = DWConv(c1, c2, k, s, act=False)


class VoVGSCSP(nn.Module):
    # VoVGSCSP module with GSBottleneck
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        # self.gc1 = GSConv(c_, c_, 1, 1)
        # self.gc2 = GSConv(c_, c_, 1, 1)
        # self.gsb = GSBottleneck(c_, c_, 1, 1)
        self.gsb = nn.Sequential(*(GSBottleneck(c_, c_, e=1.0) for _ in range(n)))
        self.res = Conv(c_, c_, 3, 1, act=False)
        self.cv3 = Conv(2 * c_, c2, 1)

    def forward(self, x):
        x1 = self.gsb(self.cv1(x))
        y = self.cv2(x)
        return self.cv3(torch.cat((y, x1), dim=1))


class VoVGSCSPC(VoVGSCSP):
    # cheap VoVGSCSP module with GSBottleneckC
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        super().__init__(c1, c2)
        c_ = int(c2 * 0.5)  # hidden channels
        self.gsb = GSBottleneckC(c_, c_, 1, 1)
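
After saving the file, a quick shape check helps confirm the modules behave as expected. The following is a minimal sketch, not part of the original tutorial; it assumes the file was saved as ultralytics\nn\SlimNeck.py so that it can be imported as shown.

import torch
from ultralytics.nn.SlimNeck import GSConv, VoVGSCSP

x = torch.randn(1, 64, 80, 80)   # dummy feature map
gs = GSConv(64, 128, k=3, s=2)   # stride-2 GSConv halves the spatial size
vov = VoVGSCSP(128, 128, n=1)    # VoVGSCSP keeps the spatial size

y = gs(x)
print(y.shape)                   # expected: torch.Size([1, 128, 40, 40])
print(vov(y).shape)              # expected: torch.Size([1, 128, 40, 40])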

3. Configuration

In tasks.py (ultralytics\nn\tasks.py), add the import:

from ultralytics.nn.SlimNeck import VoVGSCSP, VoVGSCSPC, GSConv

Register the modules in the if m in (...) tuple inside parse_model. Several of the other names in this tuple (GGhostRegNet, SEAttention, BoTNet, SwinTransformer, etc.) come from other modifications and are not part of a stock ultralytics install; in that case it is enough to append VoVGSCSP, VoVGSCSPC and GSConv to the existing tuple.

if m in (Classify, Conv, GGhostRegNet, ConvTranspose, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, Focus, BottleneckCSP, C1, C2, C2f, C3, C3TR, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, RepC3, SEAttention, ContextAggregation, BoTNet, CBAM, LightConv, RepConv, SpatialAttention, Involution, CARAFE, VoVGSCSP, VoVGSCSPC, GSConv, HorBlock, SwinTransformer):
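
For context, this tuple lives inside parse_model in tasks.py: for every row of the yaml, parse_model takes the input channels from the previous layer, scales the requested output channels by the width multiple of the chosen model scale, and then instantiates the module, which is why new modules must appear in the tuple. Below is a self-contained sketch of that channel arithmetic for the GSConv row of the backbone; the numbers assume the 'n' scale, make_divisible mirrors the rounding rule ultralytics applies, and the import assumes SlimNeck.py from step 2 is in place.

import math
from ultralytics.nn.SlimNeck import GSConv

def make_divisible(x, divisor=8):
    # round channel counts up to a multiple of 8, as ultralytics does
    return math.ceil(x / divisor) * divisor

# yaml row 5: - [-1, 1, GSConv, [512, 3, 2]]; with the 'n' scale (width 0.25)
# the previous C2f layer outputs 64 channels and the requested 512 becomes 128.
c1 = 64
c2 = make_divisible(min(512, 1024) * 0.25, 8)   # -> 128
layer = GSConv(c1, c2, 3, 2)
print(layer)

If the repeat count in the yaml (the 3 in - [-1, 3, VoVGSCSP, [512]]) should become the module's n argument instead of stacking three separate VoVGSCSP blocks, also add VoVGSCSP to the tuple in parse_model that performs args.insert(2, n).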

III. Verifying the Modification

If the printed model structure includes the GSConv and VoVGSCSP layers, the modification was successful.
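
One way to check, sketched below: build the model directly from the modified yaml and list the parsed layers. GSConv should appear at layer 5 and VoVGSCSP at layers 12, 15, 18 and 21; if the modules were not registered in tasks.py, this step fails with an error instead.

from ultralytics import YOLO

# Building from the yaml runs parse_model on the modified config
model = YOLO("ultralytics/cfg/models/v8/yolov8.yaml")

for m in model.model.model:   # parsed layers of the underlying DetectionModel
    print(m.i, m.type)        # layer index and module type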

Full Code

Link: https://pan.baidu.com/s/1eT_JY83XstSQ-Ufoa9PSUg?pwd=bih4  Extraction code: bih4

Launch Command

python train.py --cfg models/sm-yolov5s.yaml
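
The command above assumes a YOLOv5-style train.py entry script and its own cfg file. With the standard ultralytics YOLOv8 package, training from the yaml modified in step 1 can instead be started from Python; the dataset and hyperparameters below are placeholders:

from ultralytics import YOLO

# Build the slim-neck model from the modified config and train it
model = YOLO("ultralytics/cfg/models/v8/yolov8.yaml")
model.train(data="coco128.yaml", epochs=100, imgsz=640, batch=16)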

