
Multi-GPU Training

It is recommended to use DistributedDataParallel instead of DataParallel for multi-GPU training, even if there is only a single node.

The difference between DistributedDataParallel and DataParallel is that DistributedDataParallel uses multiprocessing, creating one process per GPU, while DataParallel uses multithreading. With multiprocessing, each GPU has its own dedicated process, which avoids the performance overhead caused by the GIL of the Python interpreter.
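As a rough illustration, a minimal single-node DistributedDataParallel training script might look like the sketch below. It assumes a launch via `torchrun --nproc_per_node=NUM_GPUS train.py`; the model, dataset, and hyperparameters are placeholders, not a recommended setup.

```python
# Minimal sketch of single-node multi-GPU training with DistributedDataParallel.
# Assumes launch via: torchrun --nproc_per_node=NUM_GPUS train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE; each process drives one GPU.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 1).cuda(local_rank)      # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # DistributedSampler shards the dataset so each process sees a distinct subset.
    dataset = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 1))  # placeholder data
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle data differently each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), y)
            loss.backward()       # gradients are all-reduced across processes here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The key points are one process per GPU, a `DistributedSampler` so each process trains on a different shard of the data, and gradient synchronization handled automatically by DDP during `backward()`.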

TBD: PyTorch multi-GPU training

