
Training OpenPCDet on Your Own Dataset

By SEU-侯亮平

I. Installing and Using OpenPCDet

* For full details on OpenPCDet, see:

GitHub - open-mmlab/OpenPCDet: OpenPCDet Toolbox for LiDAR-based 3D Object Detection.

1. First, git clone the official code.
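The repository URL comes from the link above:

git clone https://github.com/open-mmlab/OpenPCDet.git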

2. I recommend following the installation instructions under docs/ in the author's GitHub repo step by step (I kept hitting errors when following CSDN tutorials). Then download spconv and cumm; the GitHub links are:

GitHub - traveller59/spconv: Spatial Sparse Convolution Library

GitHub - FindDefinition/cumm: CUda Matrix Multiply library.

3. Open the README in spconv and follow its steps exactly; compilation usually takes a while.

4. Open the README in cumm and follow its instructions exactly.

5. Once installation finishes, verify it by running the provided test data end to end.
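For example, the quick demo from the OpenPCDet getting-started docs makes a good smoke test (you need to download a pretrained checkpoint first; the checkpoint name and data path below are placeholders from that doc, not from my setup):

python demo.py --cfg_file cfgs/kitti_models/pv_rcnn.yaml --ckpt pv_rcnn_8369.pth --data_path ${POINT_CLOUD_DATA}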

II. Training OpenPCDet on Your Own Dataset

* I was porting a different dataset. Since I have my own image data, I had already converted everything into the KITTI layout (velodyne, calib, label, and image folders) and implemented evaluation as well as the final detection output, so my process may differ from other writeups.

* If you only have velodyne and label data, or are not yet sure how to convert your dataset into this format, these links are worth consulting (a sketch of the expected directory layout follows them):

Training using our own dataset · Issue #771 · open-mmlab/OpenPCDet · GitHub

3D目标检测(4):OpenPCDet训练篇--自定义数据集 - 知乎
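For reference, the directory layout that the custom_dataset.py below reads from looks roughly like this. It is inferred from the path handling in that code, with folder names following the KITTI convention:

data/custom
├── ImageSets
│   ├── train.txt        # sample ids, one per line
│   └── val.txt
├── training
│   ├── velodyne         # xxx.bin point clouds
│   ├── calib            # xxx.txt calibration files
│   ├── label_2          # xxx.txt KITTI-style labels
│   └── image_2          # xxx.png images
└── testing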

To summarize up front, the changes mainly involve the following four files:

* pcdet/datasets/custom/custom_dataset.py

* tools/cfgs/custom_models/pointpillar.yaml (any other model works too)

* tools/cfgs/dataset_configs/custom_dataset.yaml

* demo.py

1. pcdet/datasets/custom/custom_dataset.py

In fact, custom_dataset.py can be written by imitating kitti_dataset.py and trimming it down; most of it needs no changes at all. Here I modified:

1) The get_lidar function

* Loads the LiDAR point cloud; the other getters, such as get_image, work the same way.

2) The __getitem__ function

* The most important function: it is the key to fetching and updating the data dictionary.

* Entries you do not need, such as calib or image, can be removed.

3) The get_infos function

* Generates the info dictionaries:

infos = {'image': xxx,
         'calib': xxx,
         'annos': xxx}

annos = {'name': xxx,
         'truncated': xxx,
         'alpha': xxx,
         ...}

Here annos is the dictionary parsed from your label file: class name, occlusion state, box angle, and so on. As before, entries you do not need can be added or removed; the sketch below shows how a standard KITTI label line maps onto these fields.
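A minimal sketch, assuming standard KITTI label_2 format (15 space-separated fields per object; the sample values are illustrative only, not from my data):

# How one KITTI-style label line breaks down into the annos fields.
line = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
vals = line.split()
anno = {
    'name': vals[0],                                # class name
    'truncated': float(vals[1]),                    # 0 (visible) .. 1 (fully truncated)
    'occluded': int(vals[2]),                       # 0/1/2/3 occlusion level
    'alpha': float(vals[3]),                        # observation angle
    'bbox': [float(v) for v in vals[4:8]],          # 2D box: x1, y1, x2, y2
    'dimensions': [float(v) for v in vals[8:11]],   # h, w, l in the camera frame
    'location': [float(v) for v in vals[11:14]],    # x, y, z in the camera frame
    'rotation_y': float(vals[14]),                  # yaw around the camera Y axis
}
print(anno['name'], anno['rotation_y'])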

4) The create_custom_infos function

This function generates your data info files, usually saved with a .pkl suffix. If you do not need evaluation, you can delete the evaluation-related part; the logic is simple.

5) The class names in the __main__ block

The modified code follows:

import copy
import pickle
import os
from pathlib import Path

import numpy as np
from skimage import io

from ..kitti import kitti_utils
from ...ops.roiaware_pool3d import roiaware_pool3d_utils
from ...utils import box_utils, common_utils, calibration_kitti, object3d_custom
from ..dataset import DatasetTemplate


class CustomDataset(DatasetTemplate):
    def __init__(self, dataset_cfg, class_names, training=True, root_path=None, logger=None, ext='.bin'):
        """
        Args:
            root_path:
            dataset_cfg:
            class_names:
            training:
            logger:
        """
        super().__init__(
            dataset_cfg=dataset_cfg, class_names=class_names, training=training, root_path=root_path, logger=logger
        )
        self.split = self.dataset_cfg.DATA_SPLIT[self.mode]
        self.root_split_path = self.root_path / ('training' if self.split != 'test' else 'testing')

        split_dir = os.path.join(self.root_path, 'ImageSets', (self.split + '.txt'))  # custom/ImageSets/xxx.txt
        self.sample_id_list = [x.strip() for x in open(split_dir).readlines()] if os.path.exists(split_dir) else None  # contents of xxx.txt

        self.custom_infos = []
        self.include_data(self.mode)  # train/val
        self.map_class_to_kitti = self.dataset_cfg.MAP_CLASS_TO_KITTI
        self.ext = ext

    def include_data(self, mode):
        self.logger.info('Loading Custom dataset.')
        custom_infos = []

        for info_path in self.dataset_cfg.INFO_PATH[mode]:
            info_path = self.root_path / info_path
            if not info_path.exists():
                continue
            with open(info_path, 'rb') as f:
                infos = pickle.load(f)
                custom_infos.extend(infos)

        self.custom_infos.extend(custom_infos)
        self.logger.info('Total samples for CUSTOM dataset: %d' % (len(custom_infos)))

    def get_label(self, idx):
        label_file = self.root_split_path / 'label_2' / ('%s.txt' % idx)
        assert label_file.exists()
        return object3d_custom.get_objects_from_label(label_file)

    def get_lidar(self, idx, getitem=True):
        if getitem:
            lidar_file = str(self.root_split_path) + '/velodyne/' + ('%s.bin' % idx)
        else:
            lidar_file = self.root_split_path / 'velodyne' / ('%s.bin' % idx)
        return np.fromfile(str(lidar_file), dtype=np.float32).reshape(-1, 4)

    def get_image(self, idx):
        """
        Loads image for a sample
        Args:
            idx: int, Sample index
        Returns:
            image: (H, W, 3), RGB Image
        """
        img_file = self.root_split_path / 'image_2' / ('%s.png' % idx)
        assert img_file.exists()
        image = io.imread(img_file)
        image = image.astype(np.float32)
        image /= 255.0
        return image

    def get_image_shape(self, idx):
        img_file = self.root_split_path / 'image_2' / ('%s.png' % idx)
        assert img_file.exists()
        return np.array(io.imread(img_file).shape[:2], dtype=np.int32)

    def get_fov_flag(self, pts_rect, img_shape, calib):
        pts_img, pts_rect_depth = calib.rect_to_img(pts_rect)
        val_flag_1 = np.logical_and(pts_img[:, 0] >= 0, pts_img[:, 0] < img_shape[1])
        val_flag_2 = np.logical_and(pts_img[:, 1] >= 0, pts_img[:, 1] < img_shape[0])
        val_flag_merge = np.logical_and(val_flag_1, val_flag_2)
        pts_valid_flag = np.logical_and(val_flag_merge, pts_rect_depth >= 0)
        return pts_valid_flag

    def set_split(self, split):
        super().__init__(
            dataset_cfg=self.dataset_cfg, class_names=self.class_names, training=self.training,
            root_path=self.root_path, logger=self.logger
        )
        self.split = split

        split_dir = self.root_path / 'ImageSets' / (self.split + '.txt')
        self.sample_id_list = [x.strip() for x in open(split_dir).readlines()] if split_dir.exists() else None

    def __len__(self):
        if self._merge_all_iters_to_one_epoch:
            return len(self.sample_id_list) * self.total_epochs
        return len(self.custom_infos)

    def __getitem__(self, index):
        if self._merge_all_iters_to_one_epoch:
            index = index % len(self.custom_infos)

        info = copy.deepcopy(self.custom_infos[index])

        sample_idx = info['point_cloud']['lidar_idx']
        img_shape = info['image']['image_shape']
        calib = self.get_calib(sample_idx)
        get_item_list = self.dataset_cfg.get('GET_ITEM_LIST', ['points'])
        input_dict = {
            'frame_id': self.sample_id_list[index],
            'calib': calib,
        }

        # if the 'annos' key is present in this info dict
        if 'annos' in info:
            annos = info['annos']
            annos = common_utils.drop_info_with_name(annos, name='DontCare')
            loc, dims, rots = annos['location'], annos['dimensions'], annos['rotation_y']
            gt_names = annos['name']
            gt_boxes_camera = np.concatenate([loc, dims, rots[..., np.newaxis]], axis=1).astype(np.float32)
            gt_boxes_lidar = box_utils.boxes3d_kitti_camera_to_lidar(gt_boxes_camera, calib)

            # update the gt boxes
            input_dict.update({
                'gt_names': gt_names,
                'gt_boxes': gt_boxes_lidar
            })
            if "gt_boxes2d" in get_item_list:
                input_dict['gt_boxes2d'] = annos["bbox"]

        # keep only the points inside the camera FOV
        if "points" in get_item_list:
            points = self.get_lidar(sample_idx, False)
            if self.dataset_cfg.FOV_POINTS_ONLY:
                pts_rect = calib.lidar_to_rect(points[:, 0:3])
                fov_flag = self.get_fov_flag(pts_rect, img_shape, calib)
                points = points[fov_flag]
            input_dict['points'] = points

        input_dict['calib'] = calib
        data_dict = self.prepare_data(data_dict=input_dict)

        data_dict['image_shape'] = img_shape
        return data_dict

    def evaluation(self, det_annos, class_names, **kwargs):
        if 'annos' not in self.custom_infos[0].keys():
            return 'No ground-truth boxes for evaluation', {}

        def kitti_eval(eval_det_annos, eval_gt_annos, map_name_to_kitti):
            from ..kitti.kitti_object_eval_python import eval as kitti_eval
            from ..kitti import kitti_utils

            kitti_utils.transform_annotations_to_kitti_format(eval_det_annos, map_name_to_kitti=map_name_to_kitti)
            kitti_utils.transform_annotations_to_kitti_format(
                eval_gt_annos, map_name_to_kitti=map_name_to_kitti,
                info_with_fakelidar=self.dataset_cfg.get('INFO_WITH_FAKELIDAR', False)
            )
            kitti_class_names = [map_name_to_kitti[x] for x in class_names]
            ap_result_str, ap_dict = kitti_eval.get_official_eval_result(
                gt_annos=eval_gt_annos, dt_annos=eval_det_annos, current_classes=kitti_class_names
            )
            return ap_result_str, ap_dict

        eval_det_annos = copy.deepcopy(det_annos)
        eval_gt_annos = [copy.deepcopy(info['annos']) for info in self.custom_infos]

        if kwargs['eval_metric'] == 'kitti':
            ap_result_str, ap_dict = kitti_eval(eval_det_annos, eval_gt_annos, self.map_class_to_kitti)
        else:
            raise NotImplementedError

        return ap_result_str, ap_dict

    def get_calib(self, idx):
        calib_file = self.root_split_path / 'calib' / ('%s.txt' % idx)
        assert calib_file.exists()
        return calibration_kitti.Calibration(calib_file)

    def get_infos(self, num_workers=4, has_label=True, count_inside_pts=True, sample_id_list=None):
        import concurrent.futures as futures

        def process_single_scene(sample_idx):
            # build the point_cloud dict
            print('%s sample_idx: %s' % (self.split, sample_idx))
            info = {}
            pc_info = {'num_features': 4, 'lidar_idx': sample_idx}
            info['point_cloud'] = pc_info

            # build the image dict
            image_info = {'image_idx': sample_idx, 'image_shape': self.get_image_shape(sample_idx)}
            info['image'] = image_info

            # build the calib dict
            calib = self.get_calib(sample_idx)
            P2 = np.concatenate([calib.P2, np.array([[0., 0., 0., 1.]])], axis=0)
            R0_4x4 = np.zeros([4, 4], dtype=calib.R0.dtype)
            R0_4x4[3, 3] = 1.
            R0_4x4[:3, :3] = calib.R0
            V2C_4x4 = np.concatenate([calib.V2C, np.array([[0., 0., 0., 1.]])], axis=0)
            calib_info = {'P2': P2, 'R0_rect': R0_4x4, 'Tr_velo_to_cam': V2C_4x4}
            info['calib'] = calib_info

            if has_label:
                # build the annos dict
                obj_list = self.get_label(sample_idx)
                annotations = {}
                annotations['name'] = np.array([obj.cls_type for obj in obj_list])
                annotations['truncated'] = np.array([obj.truncation for obj in obj_list])
                annotations['occluded'] = np.array([obj.occlusion for obj in obj_list])
                annotations['alpha'] = np.array([obj.alpha for obj in obj_list])
                annotations['bbox'] = np.concatenate([obj.box2d.reshape(1, 4) for obj in obj_list], axis=0)
                annotations['dimensions'] = np.array([[obj.l, obj.h, obj.w] for obj in obj_list])  # lhw (camera) format
                annotations['location'] = np.concatenate([obj.loc.reshape(1, 3) for obj in obj_list], axis=0)
                annotations['rotation_y'] = np.array([obj.ry for obj in obj_list])
                annotations['score'] = np.array([obj.score for obj in obj_list])
                annotations['difficulty'] = np.array([obj.level for obj in obj_list], np.int32)

                num_objects = len([obj.cls_type for obj in obj_list if obj.cls_type != 'DontCare'])
                num_gt = len(annotations['name'])
                index = list(range(num_objects)) + [-1] * (num_gt - num_objects)
                annotations['index'] = np.array(index, dtype=np.int32)

                loc = annotations['location'][:num_objects]
                dims = annotations['dimensions'][:num_objects]
                rots = annotations['rotation_y'][:num_objects]
                loc_lidar = calib.rect_to_lidar(loc)
                l, h, w = dims[:, 0:1], dims[:, 1:2], dims[:, 2:3]
                loc_lidar[:, 2] += h[:, 0] / 2
                gt_boxes_lidar = np.concatenate([loc_lidar, l, w, h, -(np.pi / 2 + rots[..., np.newaxis])], axis=1)
                annotations['gt_boxes_lidar'] = gt_boxes_lidar

                info['annos'] = annotations

                if count_inside_pts:
                    points = self.get_lidar(sample_idx, False)
                    calib = self.get_calib(sample_idx)
                    pts_rect = calib.lidar_to_rect(points[:, 0:3])

                    fov_flag = self.get_fov_flag(pts_rect, info['image']['image_shape'], calib)
                    pts_fov = points[fov_flag]
                    corners_lidar = box_utils.boxes_to_corners_3d(gt_boxes_lidar)
                    num_points_in_gt = -np.ones(num_gt, dtype=np.int32)

                    for k in range(num_objects):
                        flag = box_utils.in_hull(pts_fov[:, 0:3], corners_lidar[k])
                        num_points_in_gt[k] = flag.sum()
                    annotations['num_points_in_gt'] = num_points_in_gt

            return info

        sample_id_list = sample_id_list if sample_id_list is not None else self.sample_id_list
        with futures.ThreadPoolExecutor(num_workers) as executor:
            infos = executor.map(process_single_scene, sample_id_list)
        return list(infos)

    def create_groundtruth_database(self, info_path=None, used_classes=None, split='train'):
        import torch

        database_save_path = Path(self.root_path) / ('gt_database' if split == 'train' else ('gt_database_%s' % split))
        db_info_save_path = Path(self.root_path) / ('custom_dbinfos_%s.pkl' % split)

        database_save_path.mkdir(parents=True, exist_ok=True)
        all_db_infos = {}

        with open(info_path, 'rb') as f:
            infos = pickle.load(f)

        for k in range(len(infos)):
            print('gt_database sample: %d/%d' % (k + 1, len(infos)))
            info = infos[k]
            sample_idx = info['point_cloud']['lidar_idx']
            points = self.get_lidar(sample_idx, False)
            annos = info['annos']
            names = annos['name']
            difficulty = annos['difficulty']
            bbox = annos['bbox']
            gt_boxes = annos['gt_boxes_lidar']

            num_obj = gt_boxes.shape[0]
            point_indices = roiaware_pool3d_utils.points_in_boxes_cpu(
                torch.from_numpy(points[:, 0:3]), torch.from_numpy(gt_boxes)
            ).numpy()  # (nboxes, npoints)

            for i in range(num_obj):
                filename = '%s_%s_%d.bin' % (sample_idx, names[i], i)
                filepath = database_save_path / filename
                gt_points = points[point_indices[i] > 0]

                gt_points[:, :3] -= gt_boxes[i, :3]
                with open(filepath, 'w') as f:
                    gt_points.tofile(f)

                if (used_classes is None) or names[i] in used_classes:
                    db_path = str(filepath.relative_to(self.root_path))  # gt_database/xxxxx.bin
                    db_info = {'name': names[i], 'path': db_path, 'image_idx': sample_idx, 'gt_idx': i,
                               'box3d_lidar': gt_boxes[i], 'num_points_in_gt': gt_points.shape[0],
                               'difficulty': difficulty[i], 'bbox': bbox[i], 'score': annos['score'][i]}
                    if names[i] in all_db_infos:
                        all_db_infos[names[i]].append(db_info)
                    else:
                        all_db_infos[names[i]] = [db_info]

        # output the number of samples per class in the database
        for k, v in all_db_infos.items():
            print('Database %s: %d' % (k, len(v)))

        with open(db_info_save_path, 'wb') as f:
            pickle.dump(all_db_infos, f)

    @staticmethod
    def create_label_file_with_name_and_box(class_names, gt_names, gt_boxes, save_label_path):
        with open(save_label_path, 'w') as f:
            for idx in range(gt_boxes.shape[0]):
                boxes = gt_boxes[idx]
                name = gt_names[idx]
                if name not in class_names:
                    continue
                line = "{x} {y} {z} {l} {w} {h} {angle} {name}\n".format(
                    x=boxes[0], y=boxes[1], z=boxes[2], l=boxes[3],
                    w=boxes[4], h=boxes[5], angle=boxes[6], name=name
                )
                f.write(line)

    @staticmethod
    def generate_prediction_dicts(batch_dict, pred_dicts, class_names, output_path=None):
        """
        Args:
            batch_dict:
                frame_id:
            pred_dicts: list of pred_dicts
                pred_boxes: (N, 7), Tensor
                pred_scores: (N), Tensor
                pred_labels: (N), Tensor
            class_names:
            output_path:

        Returns:

        """
        def get_template_prediction(num_samples):
            ret_dict = {
                'name': np.zeros(num_samples), 'truncated': np.zeros(num_samples),
                'occluded': np.zeros(num_samples), 'alpha': np.zeros(num_samples),
                'bbox': np.zeros([num_samples, 4]), 'dimensions': np.zeros([num_samples, 3]),
                'location': np.zeros([num_samples, 3]), 'rotation_y': np.zeros(num_samples),
                'score': np.zeros(num_samples), 'boxes_lidar': np.zeros([num_samples, 7])
            }
            return ret_dict

        def generate_single_sample_dict(batch_index, box_dict):
            pred_scores = box_dict['pred_scores'].cpu().numpy()
            pred_boxes = box_dict['pred_boxes'].cpu().numpy()
            pred_labels = box_dict['pred_labels'].cpu().numpy()
            pred_dict = get_template_prediction(pred_scores.shape[0])
            if pred_scores.shape[0] == 0:
                return pred_dict

            calib = batch_dict['calib'][batch_index]
            image_shape = batch_dict['image_shape'][batch_index].cpu().numpy()
            pred_boxes_camera = box_utils.boxes3d_lidar_to_kitti_camera(pred_boxes, calib)
            pred_boxes_img = box_utils.boxes3d_kitti_camera_to_imageboxes(
                pred_boxes_camera, calib, image_shape=image_shape
            )

            pred_dict['name'] = np.array(class_names)[pred_labels - 1]
            pred_dict['alpha'] = -np.arctan2(-pred_boxes[:, 1], pred_boxes[:, 0]) + pred_boxes_camera[:, 6]
            pred_dict['bbox'] = pred_boxes_img
            pred_dict['dimensions'] = pred_boxes_camera[:, 3:6]
            pred_dict['location'] = pred_boxes_camera[:, 0:3]
            pred_dict['rotation_y'] = pred_boxes_camera[:, 6]
            pred_dict['score'] = pred_scores
            pred_dict['boxes_lidar'] = pred_boxes

            return pred_dict

        annos = []
        for index, box_dict in enumerate(pred_dicts):
            frame_id = batch_dict['frame_id'][index]

            single_pred_dict = generate_single_sample_dict(index, box_dict)
            single_pred_dict['frame_id'] = frame_id
            annos.append(single_pred_dict)

            if output_path is not None:
                cur_det_file = output_path / ('%s.txt' % frame_id)
                with open(cur_det_file, 'w') as f:
                    bbox = single_pred_dict['bbox']
                    loc = single_pred_dict['location']
                    dims = single_pred_dict['dimensions']  # lhw -> hwl

                    for idx in range(len(bbox)):
                        print('%s -1 -1 %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f %.4f'
                              % (single_pred_dict['name'][idx], single_pred_dict['alpha'][idx],
                                 bbox[idx][0], bbox[idx][1], bbox[idx][2], bbox[idx][3],
                                 dims[idx][1], dims[idx][2], dims[idx][0], loc[idx][0],
                                 loc[idx][1], loc[idx][2], single_pred_dict['rotation_y'][idx],
                                 single_pred_dict['score'][idx]), file=f)

        return annos


def create_custom_infos(dataset_cfg, class_names, data_path, save_path, workers=4):
    dataset = CustomDataset(
        dataset_cfg=dataset_cfg, class_names=class_names, root_path=data_path,
        training=False, logger=common_utils.create_logger()
    )
    train_split, val_split = 'train', 'val'
    num_features = len(dataset_cfg.POINT_FEATURE_ENCODING.src_feature_list)

    train_filename = save_path / ('custom_infos_%s.pkl' % train_split)
    val_filename = save_path / ('custom_infos_%s.pkl' % val_split)

    print('------------------------Start to generate data infos------------------------')

    dataset.set_split(train_split)
    custom_infos_train = dataset.get_infos(
        num_workers=workers, has_label=True, count_inside_pts=True
    )
    with open(train_filename, 'wb') as f:
        pickle.dump(custom_infos_train, f)
    print('Custom info train file is saved to %s' % train_filename)

    dataset.set_split(val_split)
    custom_infos_val = dataset.get_infos(
        num_workers=workers, has_label=True, count_inside_pts=True
    )
    with open(val_filename, 'wb') as f:
        pickle.dump(custom_infos_val, f)
    print('Custom info val file is saved to %s' % val_filename)

    print('------------------------Start create groundtruth database for data augmentation------------------------')
    dataset.set_split(train_split)
    dataset.create_groundtruth_database(train_filename, split=train_split)
    print('------------------------Data preparation done------------------------')


if __name__ == '__main__':
    import sys

    if sys.argv.__len__() > 1 and sys.argv[1] == 'create_custom_infos':
        import yaml
        from easydict import EasyDict

        dataset_cfg = EasyDict(yaml.safe_load(open(sys.argv[2])))
        ROOT_DIR = (Path(__file__).resolve().parent / '../../../').resolve()
        create_custom_infos(
            dataset_cfg=dataset_cfg,
            class_names=['Car', 'Pedestrian', 'Van'],
            data_path=ROOT_DIR / 'data' / 'custom',
            save_path=ROOT_DIR / 'data' / 'custom',
        )

2. tools/cfgs/custom_models/pointpillar.yaml

This file holds the network model configuration.

I mainly changed the following:

1) CLASS_NAMES (replace with your own class names)

2) _BASE_CONFIG_ (the path to custom_dataset.yaml; I recommend a full absolute path)

3) POINT_CLOUD_RANGE and VOXEL_SIZE

These two matter a great deal: they directly determine the shapes propagated through the model, and wrong settings easily trigger errors.

The official guideline: along X and Y, the point-cloud range divided by the voxel size should be a multiple of 16; along Z it should be 40 (see the sanity check after this list).

I tried a few other settings without success and never quite figured out why, so I simply left these values alone.

4) ANCHOR_GENERATOR_CONFIG

I changed the per-class anchor settings and feature_map_stride, and removed gt_sampling.
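The multiple-of-16 guideline can be verified with a few lines of numpy. This is just a sketch using the values from the config below, rounding the grid size the way OpenPCDet does; note that for PointPillars the Z extent is a single pillar-height voxel, so the 40-voxel Z rule is aimed at fully voxelized models:

import numpy as np

pc_range = np.array([0, -39.68, -3, 69.12, 39.68, 1])   # POINT_CLOUD_RANGE from the config below
voxel_size = np.array([0.16, 0.16, 4])                  # VOXEL_SIZE from the config below

grid = np.round((pc_range[3:] - pc_range[:3]) / voxel_size).astype(np.int64)
print(grid)                        # [432 496 1]; 432 = 27*16, 496 = 31*16
assert np.all(grid[:2] % 16 == 0)  # X/Y voxel counts are multiples of 16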

The full config:

CLASS_NAMES: ['Car', 'Pedestrian', 'Van']

DATA_CONFIG:
    _BASE_CONFIG_: /home/gmm/下载/OpenPCDet/tools/cfgs/dataset_configs/custom_dataset.yaml
    POINT_CLOUD_RANGE: [0, -39.68, -3, 69.12, 39.68, 1]
    DATA_PROCESSOR:
        - NAME: mask_points_and_boxes_outside_range
          REMOVE_OUTSIDE_BOXES: True

        - NAME: shuffle_points
          SHUFFLE_ENABLED: {
            'train': True,
            'test': False
          }

        - NAME: transform_points_to_voxels
          VOXEL_SIZE: [0.16, 0.16, 4]
          MAX_POINTS_PER_VOXEL: 32
          MAX_NUMBER_OF_VOXELS: {
            'train': 16000,
            'test': 40000
          }
    DATA_AUGMENTOR:
        DISABLE_AUG_LIST: ['placeholder']
        AUG_CONFIG_LIST:
            # - NAME: gt_sampling
            #   USE_ROAD_PLANE: True
            #   DB_INFO_PATH:
            #       - custom_dbinfos_train.pkl
            #   PREPARE: {
            #      filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Van:5']
            #   }
            #
            #   SAMPLE_GROUPS: ['Car:15', 'Pedestrian:15', 'Van:15']
            #   NUM_POINT_FEATURES: 4
            #   DATABASE_WITH_FAKELIDAR: False
            #   REMOVE_EXTRA_WIDTH: [0.0, 0.0, 0.0]
            #   LIMIT_WHOLE_SCENE: False

            - NAME: random_world_flip
              ALONG_AXIS_LIST: ['x']

            - NAME: random_world_rotation
              WORLD_ROT_ANGLE: [-0.78539816, 0.78539816]

            - NAME: random_world_scaling
              WORLD_SCALE_RANGE: [0.95, 1.05]

MODEL:
    NAME: PointPillar

    VFE:
        NAME: PillarVFE
        WITH_DISTANCE: False
        USE_ABSLOTE_XYZ: True
        USE_NORM: True
        NUM_FILTERS: [64]

    MAP_TO_BEV:
        NAME: PointPillarScatter
        NUM_BEV_FEATURES: 64

    BACKBONE_2D:
        NAME: BaseBEVBackbone
        LAYER_NUMS: [3, 5, 5]
        LAYER_STRIDES: [2, 2, 2]
        NUM_FILTERS: [64, 128, 256]
        UPSAMPLE_STRIDES: [1, 2, 4]
        NUM_UPSAMPLE_FILTERS: [128, 128, 128]

    DENSE_HEAD:
        NAME: AnchorHeadSingle
        CLASS_AGNOSTIC: False

        USE_DIRECTION_CLASSIFIER: True
        DIR_OFFSET: 0.78539
        DIR_LIMIT_OFFSET: 0.0
        NUM_DIR_BINS: 2

        ANCHOR_GENERATOR_CONFIG: [
            {
                'class_name': 'Car',
                'anchor_sizes': [[1.8, 4.7, 1.8]],
                'anchor_rotations': [0, 1.57],
                'anchor_bottom_heights': [0],
                'align_center': False,
                'feature_map_stride': 2,
                'matched_threshold': 0.55,
                'unmatched_threshold': 0.45
            },
            {
                'class_name': 'Pedestrian',
                'anchor_sizes': [[0.77, 0.92, 1.83]],
                'anchor_rotations': [0, 1.57],
                'anchor_bottom_heights': [0],
                'align_center': False,
                'feature_map_stride': 2,
                'matched_threshold': 0.5,
                'unmatched_threshold': 0.45
            },
            {
                'class_name': 'Van',
                'anchor_sizes': [[2.5, 5.7, 1.9]],
                'anchor_rotations': [0, 1.57],
                'anchor_bottom_heights': [0],
                'align_center': False,
                'feature_map_stride': 2,
                'matched_threshold': 0.5,
                'unmatched_threshold': 0.45
            },
        ]

        TARGET_ASSIGNER_CONFIG:
            NAME: AxisAlignedTargetAssigner
            POS_FRACTION: -1.0
            SAMPLE_SIZE: 512
            NORM_BY_NUM_EXAMPLES: False
            MATCH_HEIGHT: False
            BOX_CODER: ResidualCoder

        LOSS_CONFIG:
            LOSS_WEIGHTS: {
                'cls_weight': 1.0,
                'loc_weight': 2.0,
                'dir_weight': 0.2,
                'code_weights': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
            }

    POST_PROCESSING:
        RECALL_THRESH_LIST: [0.3, 0.5, 0.7]
        SCORE_THRESH: 0.1
        OUTPUT_RAW_SCORE: False

        EVAL_METRIC: kitti

        NMS_CONFIG:
            MULTI_CLASSES_NMS: False
            NMS_TYPE: nms_gpu
            NMS_THRESH: 0.01
            NMS_PRE_MAXSIZE: 4096
            NMS_POST_MAXSIZE: 500


OPTIMIZATION:
    BATCH_SIZE_PER_GPU: 4
    NUM_EPOCHS: 80

    OPTIMIZER: adam_onecycle
    LR: 0.003
    WEIGHT_DECAY: 0.01
    MOMENTUM: 0.9

    MOMS: [0.95, 0.85]
    PCT_START: 0.4
    DIV_FACTOR: 10
    DECAY_STEP_LIST: [35, 45]
    LR_DECAY: 0.1
    LR_CLIP: 0.0000001

    LR_WARMUP: False
    WARMUP_EPOCH: 1

    GRAD_NORM_CLIP: 10

3. tools/cfgs/dataset_configs/custom_dataset.yaml

I changed DATA_PATH, POINT_CLOUD_RANGE, MAP_CLASS_TO_KITTI, and a few other class-related settings.

The modified config:

DATASET: 'CustomDataset'
DATA_PATH: '/home/gmm/下载/OpenPCDet/data/custom'

POINT_CLOUD_RANGE: [0, -40, -3, 70.4, 40, 1]

DATA_SPLIT: {
    'train': train,
    'test': val
}

INFO_PATH: {
    'train': [custom_infos_train.pkl],
    'test': [custom_infos_val.pkl],
}

GET_ITEM_LIST: ["points"]
FOV_POINTS_ONLY: True

MAP_CLASS_TO_KITTI: {
    'Car': 'Car',
    'Pedestrian': 'Pedestrian',
    'Van': 'Cyclist',
}


DATA_AUGMENTOR:
    DISABLE_AUG_LIST: ['placeholder']
    AUG_CONFIG_LIST:
        - NAME: gt_sampling
          USE_ROAD_PLANE: False
          DB_INFO_PATH:
              - custom_dbinfos_train.pkl
          PREPARE: {
             filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Van:5'],
          }

          SAMPLE_GROUPS: ['Car:20', 'Pedestrian:15', 'Van:20']
          NUM_POINT_FEATURES: 4
          DATABASE_WITH_FAKELIDAR: False
          REMOVE_EXTRA_WIDTH: [0.0, 0.0, 0.0]
          LIMIT_WHOLE_SCENE: True

        - NAME: random_world_flip
          ALONG_AXIS_LIST: ['x']

        - NAME: random_world_rotation
          WORLD_ROT_ANGLE: [-0.78539816, 0.78539816]

        - NAME: random_world_scaling
          WORLD_SCALE_RANGE: [0.95, 1.05]


POINT_FEATURE_ENCODING: {
    encoding_type: absolute_coordinates_encoding,
    used_feature_list: ['x', 'y', 'z', 'intensity'],
    src_feature_list: ['x', 'y', 'z', 'intensity'],
}


DATA_PROCESSOR:
    - NAME: mask_points_and_boxes_outside_range
      REMOVE_OUTSIDE_BOXES: True

    - NAME: shuffle_points
      SHUFFLE_ENABLED: {
        'train': True,
        'test': False
      }

    - NAME: transform_points_to_voxels
      VOXEL_SIZE: [0.05, 0.05, 0.1]
      MAX_POINTS_PER_VOXEL: 5
      MAX_NUMBER_OF_VOXELS: {
        'train': 16000,
        'test': 40000
      }

4. demo.py

After training, the detection boxes did not show up at first. I eventually realized my dataset is probably too small, so the predicted boxes have very low confidence. I therefore made a small change around the V.draw_scenes call, adding a score mask beforehand, and the boxes duly appeared.

The modified part of demo.py (I apply the score mask to the boxes, scores, and labels together so their lengths stay aligned):

with torch.no_grad():
    for idx, data_dict in enumerate(demo_dataset):
        logger.info(f'Visualized sample index: \t{idx + 1}')
        data_dict = demo_dataset.collate_batch([data_dict])
        load_data_to_gpu(data_dict)
        pred_dicts, _ = model.forward(data_dict)

        # keep only predictions whose confidence exceeds 0.3
        scores = pred_dicts[0]['pred_scores'].detach().cpu().numpy()
        mask = scores > 0.3

        V.draw_scenes(
            points=data_dict['points'][:, 1:], ref_boxes=pred_dicts[0]['pred_boxes'][mask],
            # apply the same mask to scores and labels so they stay aligned with the boxes
            ref_scores=pred_dicts[0]['pred_scores'][mask], ref_labels=pred_dicts[0]['pred_labels'][mask],
        )

        if not OPEN3D_FLAG:
            mlab.show(stop=True)

III. Running the Pipeline

1. Generate the data infos

python -m pcdet.datasets.custom.custom_dataset create_custom_infos tools/cfgs/dataset_configs/custom_dataset.yaml
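If this succeeds, data/custom should now contain the outputs written by create_custom_infos above:

custom_infos_train.pkl
custom_infos_val.pkl
custom_dbinfos_train.pkl
gt_database/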


2. Training

I was lazy here and trained for only 10 epochs; adjust this as you like.

python tools/train.py --cfg_file tools/cfgs/custom_models/pointpillar.yaml --batch_size=1 --epochs=10


Training emits a warning I have not tracked down and am ignoring for now (it appears to come from PyTorch's forked DataLoader workers and seems harmless): [W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)

3. Evaluation

Because the dataset has few samples and I trained for only a few epochs, the evaluation results are understandably poor.
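For reference, evaluation can be launched with tools/test.py (a sketch; the checkpoint path is a placeholder for wherever train.py saved yours):

python tools/test.py --cfg_file tools/cfgs/custom_models/pointpillar.yaml --batch_size 1 --ckpt <path/to/checkpoint_epoch_10.pth>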

[Screenshot: evaluation metrics output]

4. Results

[Screenshot: detection boxes rendered in the point cloud]

Fortunately, something does show up. If no detection boxes appear for you, lower the score threshold in demo.py.
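For reference, a typical demo invocation (a sketch; the checkpoint and sample paths are placeholders for your own files):

python tools/demo.py --cfg_file tools/cfgs/custom_models/pointpillar.yaml --ckpt <path/to/checkpoint_epoch_10.pth> --data_path data/custom/testing/velodyne/<frame_id>.bin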
