Background and the problem I ran into:

I was training a multi-class classifier on a very large dataset, on Ubuntu 22.04 with an i9-13900K, an NVIDIA RTX 4090, and 64 GB of RAM. The first run used a training set of 7 million images and finished successfully. I then collected more data, and after augmentation the set reached 10 million images. The second run was killed by the system after about 4 hours with an Out of Memory error. After a long hunt I noticed that RAM usage grew linearly with the training step count, which pointed to a memory leak, and I finally narrowed it down to the DataLoader's num_workers parameter (with num_workers=0 the problem disappears).
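
If you are debugging something similar, the easiest way to see the growth is to log the resident memory of the training process plus its DataLoader workers every few hundred steps. A minimal sketch using the psutil package; the helper name and interval are mine, not from my original script:

import os
import psutil

def log_rss(step, every=200):
    # Print the resident set size of this process plus all of its
    # children (the num_workers DataLoader worker processes).
    if step % every != 0:
        return
    main = psutil.Process(os.getpid())
    total = main.memory_info().rss
    for child in main.children(recursive=True):
        total += child.memory_info().rss
    print(f"step {step}: total RSS = {total / 2**30:.2f} GiB")

Called once per training step, this makes a linear climb (as opposed to a flat line after warm-up) obvious long before the OOM killer fires.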

Root cause:

When Python lists are converted to tensors by PyTorch's DataLoader, memory appears to leak. Strictly speaking it is not a classic leak: with num_workers > 0 the workers are forked from the main process, the Dataset's Python lists are shared copy-on-write, and every element access updates that element's reference count, dirtying the shared pages, so each worker gradually materializes its own copy of every list. The practical rule is to avoid Python lists in the Dataset and use np.array instead.
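
The effect can be reproduced without any images at all. A minimal sketch, assuming Linux's default fork start method; the sizes are arbitrary. With the list version, worker memory climbs steadily in htop as elements are touched; with the np.arange version it stays flat:

import numpy as np
from torch.utils.data import Dataset, DataLoader

class ListDataset(Dataset):
    def __init__(self, n=10_000_000):
        # n separate Python int objects: reading any of them writes its
        # refcount, dirtying the copy-on-write pages inherited by workers
        self.data = list(range(n))
        # flat-memory alternative: self.data = np.arange(n)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

loader = DataLoader(ListDataset(), batch_size=1024, num_workers=8)
for _ in loader:
    pass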

Solution:

Write a custom Dataset class for the DataLoader and replace every Python list inside it with an np.array. With that change, no memory is leaked while the DataLoader converts the np.array data into tensors.
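
Why this works, as far as I understand it: np.array packs the paths into one contiguous fixed-width string buffer and the labels into one int64 buffer, so there is no per-element Python object whose reference count a forked worker could write to. A quick illustration (the paths are made up):

import numpy as np

paths = np.array(["/data/cats/001.jpg", "/data/dogs/002.jpg"])
print(paths.dtype)   # <U19: one fixed-width buffer, no PyObject per path
labels = np.array([0, 1])
print(labels.dtype)  # int64: likewise a single contiguous buffer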

Below are two incorrect code examples and one correct one (the mistakes are all ones I made myself):

1. Incorrect way to load the dataset with DataLoader (method 1)

from torch.utils.data import DataLoader
from torchvision import datasets

# Load the data
train_data = datasets.ImageFolder(root=TRAIN_DIR_ARG, transform=transform)
valid_data = datasets.ImageFolder(root=VALIDATION_DIR, transform=transform)
test_data = datasets.ImageFolder(root=TEST_DIR, transform=transform)

train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=8)
valid_loader = DataLoader(valid_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)
test_loader = DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)
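
Why even this built-in path leaks: torchvision's ImageFolder itself stores self.samples and self.targets as plain Python lists, so at 10 million entries it hits the same copy-on-write problem. If you want to keep ImageFolder, one possible workaround, sketched here and not part of torchvision's API, is to subclass it and move those lists into arrays:

import numpy as np
from torchvision import datasets

class ArrayImageFolder(datasets.ImageFolder):
    # ImageFolder variant that keeps paths/labels in NumPy arrays.
    def __init__(self, root, transform=None):
        super().__init__(root, transform=transform)
        self._paths = np.array([path for path, _ in self.samples])
        self._labels = np.array(self.targets, dtype=np.int64)
        # Drop the big Python lists so forked workers never touch them.
        self.samples = self.imgs = self.targets = None

    def __len__(self):
        return len(self._paths)

    def __getitem__(self, idx):
        # self.loader is ImageFolder's stock PIL-based loader
        image = self.loader(str(self._paths[idx]))
        if self.transform is not None:
            image = self.transform(image)
        return image, int(self._labels[idx])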

2. Incorrect way to load the dataset with DataLoader (method 2, with a custom Dataset class)

import os
from PIL import Image
from torch.utils.data import DataLoader, Dataset

class CustomDataset(Dataset):
    def __init__(self, data_dir, transform=None):
        self.data_dir = data_dir
        self.transform = transform
        self.image_paths = []
        self.labels = []
        # Walk the data directory and collect every image path with its label
        classes = sorted(os.listdir(data_dir))  # sorted for stable class indices
        for i, class_name in enumerate(classes):
            class_dir = os.path.join(data_dir, class_name)
            if os.path.isdir(class_dir):
                for image_name in os.listdir(class_dir):
                    image_path = os.path.join(class_dir, image_name)
                    self.image_paths.append(image_path)
                    self.labels.append(i)

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image_path = self.image_paths[idx]
        label = self.labels[idx]
        # Load the image only when it is actually requested
        image = Image.open(image_path)
        if self.transform:
            image = self.transform(image)
        return image, label

train_data = CustomDataset(data_dir=TRAIN_DIR_ARG, transform=transform)
valid_data = CustomDataset(data_dir=VALIDATION_DIR, transform=transform)
test_data = CustomDataset(data_dir=TEST_DIR, transform=transform)

train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=8)
valid_loader = DataLoader(valid_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)
test_loader = DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)

3. The correct way to write the custom Dataset (every list converted to np.array)

Note that example 2 leaks even though the images themselves are loaded lazily: the culprit is the two huge Python lists of paths and labels. Converting them to NumPy arrays at the end of __init__ fixes it:

import os
import numpy as np
from PIL import Image
from torch.utils.data import DataLoader, Dataset

class CustomDataset(Dataset):
    def __init__(self, data_dir, transform=None):
        self.data_dir = data_dir
        self.transform = transform
        self.image_paths = []  # temporary Python list
        self.labels = []       # temporary Python list
        # Walk the data directory and collect every image path with its label
        classes = sorted(os.listdir(data_dir))  # sorted for stable class indices
        for i, class_name in enumerate(classes):
            class_dir = os.path.join(data_dir, class_name)
            if os.path.isdir(class_dir):
                for image_name in os.listdir(class_dir):
                    image_path = os.path.join(class_dir, image_name)
                    self.image_paths.append(image_path)  # append to the Python list
                    self.labels.append(i)                # append to the Python list
        # Convert to NumPy arrays: this is the key change that stops the leak
        self.image_paths = np.array(self.image_paths)
        self.labels = np.array(self.labels)

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image_path = self.image_paths[idx]
        label = self.labels[idx]
        # Load the image only when it is actually requested
        image = Image.open(image_path)
        if self.transform:
            image = self.transform(image)
        # Convert the image data to a NumPy array
        image = np.array(image)
        return image, label

train_data = CustomDataset(data_dir=TRAIN_DIR_ARG, transform=transform)
valid_data = CustomDataset(data_dir=VALIDATION_DIR, transform=transform)
test_data = CustomDataset(data_dir=TEST_DIR, transform=transform)

train_loader = DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=8)
valid_loader = DataLoader(valid_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)
test_loader = DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=False, num_workers=8)
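
To confirm the fix, I suggest the same measurement that exposed the leak: iterate the loader with num_workers=8 and watch resident memory, which should plateau after the first few hundred steps instead of climbing. A short self-contained check, again using psutil:

import os
import psutil

proc = psutil.Process(os.getpid())
for step, (images, labels) in enumerate(train_loader):
    if step % 500 == 0:
        workers = sum(c.memory_info().rss for c in proc.children(recursive=True))
        print(f"step {step}: main = {proc.memory_info().rss / 2**30:.2f} GiB, "
              f"workers = {workers / 2**30:.2f} GiB")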
