cosense3d.dataset.toolkit package

Submodules

cosense3d.dataset.toolkit.cosense module

class cosense3d.dataset.toolkit.cosense.CoSenseDataConverter(data_path, meta_path, mode='all')[source]

Bases: object

OBJ_ID2NAME = {0: 'vehicle.car', 1: 'vehicle.van', 2: 'vehicle.truck', 3: 'vehicle.bus', 4: 'vehicle.tram', 5: 'vehicle.motorcycle', 6: 'vehicle.cyclist', 7: 'vehicle.scooter', 8: 'vehicle.other', 9: 'human.pedestrian', 10: 'human.wheelchair', 11: 'human.sitting', 12: 'static.trafficcone', 13: 'static.barrowlist', 14: 'vehicle.tricyclist', 15: 'unknown'}
OBJ_LIST = ['vehicle.car', 'vehicle.van', 'vehicle.truck', 'vehicle.bus', 'vehicle.tram', 'vehicle.motorcycle', 'vehicle.cyclist', 'vehicle.scooter', 'vehicle.other', 'human.pedestrian', 'human.wheelchair', 'human.sitting', 'static.trafficcone', 'static.barrowlist', 'vehicle.tricyclist', 'unknown']
OBJ_NAME2ID = {'human.pedestrian': 9, 'human.sitting': 11, 'human.wheelchair': 10, 'static.barrowlist': 13, 'static.trafficcone': 12, 'unknown': 15, 'vehicle.bus': 3, 'vehicle.car': 0, 'vehicle.cyclist': 6, 'vehicle.motorcycle': 5, 'vehicle.other': 8, 'vehicle.scooter': 7, 'vehicle.tram': 4, 'vehicle.tricyclist': 14, 'vehicle.truck': 2, 'vehicle.van': 1}
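
These three constants are mutually consistent views of the same label set; a quick sanity check (a usage sketch, not part of the toolkit) looks like:

    from cosense3d.dataset.toolkit.cosense import CoSenseDataConverter as CDC

    # OBJ_LIST is indexed by class id, and the two dicts are mutual inverses.
    assert CDC.OBJ_LIST[3] == CDC.OBJ_ID2NAME[3] == 'vehicle.bus'
    assert all(CDC.OBJ_NAME2ID[name] == i for i, name in CDC.OBJ_ID2NAME.items())
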
static add_cam_to_fdict(fdict, agent_id, cam_id, filenames, intrinsic, extrinsic, **kwargs)[source]
static cal_vbbx_mean_dim(meta)[source]

Calculate the mean dimensions of four-wheel vehicles.

static draw_sample_distributions(meta_path)[source]

Draw the distribution of the number of observation points for each sample category.

Parameters:

meta_path – path that contains the pickle files of object samples

static fdict_template()[source]
static global_boxes_to_local(meta_dict, data_path, meta_path)[source]
static load_meta(meta_path, mode)[source]
obj_from_sustech(label_file)[source]
obj_to_opv2v(bbxs, pose, out_file, timestamp=None)[source]
obj_to_sustech(cosense_objs, sustech_file)[source]
static parse_global_bbox_velo(meta_dict, data_path, meta_path)[source]
static remove_lidar_info(fdict, agent_id)[source]
static supervison_full_to_sparse(meta_dict, out_path, lidar_range=None, det_r=None, num_box_per_frame=None, num_box_total=None, label_ratio=None)[source]
to_kitti(out_dir=None)[source]
to_opv2v(out_dir=None)[source]
to_sustech(out_dir=None)[source]
static update_agent(fdict, agent_id, agent_type=None, agent_pose=None, agent_time=None, **kwargs)[source]
static update_agent_gt_boxes(fdict, agent_id, gt_boxes)[source]
static update_agent_lidar(fdict, agent_id, lidar_id, lidar_pose=None, lidar_time=None, lidar_file=None)[source]
static update_frame_bbx(fdict, bbx)[source]
update_from_sustech(sustech_path)[source]

cosense3d.dataset.toolkit.dairv2x module

cosense3d.dataset.toolkit.dairv2x.calib_to_tf_matrix(calib_file)[source]
cosense3d.dataset.toolkit.dairv2x.convert_v2x_c(root_dir, meta_out_dir)[source]
cosense3d.dataset.toolkit.dairv2x.convert_v2x_seq(root_dir, meta_out_dir)[source]
cosense3d.dataset.toolkit.dairv2x.load_info_to_dict(info_file)[source]
cosense3d.dataset.toolkit.dairv2x.load_label(label_file)[source]
cosense3d.dataset.toolkit.dairv2x.optimize_poses(meta_path)[source]
cosense3d.dataset.toolkit.dairv2x.optimize_trajectory(seq, sdict, root_dir, out_meta_dir, ego_agent_id, idx, sub_idx)[source]

This function iterates over scenarios; for each scenario it performs the following steps:

1. Register point clouds sequentially for each agent to obtain an accurate trajectory for each agent. Before registration, points belonging to labeled objects with high dynamics are removed; after each sequential pair is registered, the merged point cloud is down-sampled to save space.
2. Match the registered point clouds of different agents to obtain optimized relative poses.
3. Recover the relative poses to world poses.

Parameters

meta_path: directory of meta files
root_dir: root directory of the data

Returns

meta: meta information with updated poses of agents
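
A minimal Open3D sketch of the pairwise registration in step 1; the voxel size and ICP threshold are illustrative assumptions, not the toolkit's values:

    import open3d as o3d

    def register_pair_sketch(source, target, init_transf, thr=1.0, voxel_size=0.1):
        # Step 1 in miniature: refine the relative pose of two consecutive
        # frames with point-to-point ICP, then merge and down-sample to keep
        # the accumulated map small.
        icp = o3d.pipelines.registration.registration_icp(
            source, target, thr, init_transf,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        merged = source.transform(icp.transformation) + target
        return icp.transformation, merged.voxel_down_sample(voxel_size)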

cosense3d.dataset.toolkit.dairv2x.parse_global_bboxes(sdict, frames, root_dir)[source]

Step three of the pipeline: parse global bounding boxes.

cosense3d.dataset.toolkit.dairv2x.parse_static_pcd(adict, root_dir)[source]
cosense3d.dataset.toolkit.dairv2x.parse_timestamped_boxes(adict, root_dir, four_wheel_only=True)[source]
cosense3d.dataset.toolkit.dairv2x.register_pcds_to_blocks(seq, sdict, root_dir, idx=0)[source]
cosense3d.dataset.toolkit.dairv2x.register_sequence(sdict, frames, root_dir, ignore_ids=[], vis=False)[source]
cosense3d.dataset.toolkit.dairv2x.register_step_one(mf)[source]

Find the vehicle that is closest to the infrastructure.

cosense3d.dataset.toolkit.dairv2x.register_step_two(start_frame, mf, meta_out_dir)[source]

Register point clouds.

cosense3d.dataset.toolkit.dairv2x.remove_ego_boxes(meta_in)[source]
cosense3d.dataset.toolkit.dairv2x.select_sub_scenes(meta_in, root_dir, meta_out, split)[source]

cosense3d.dataset.toolkit.opv2v module

cosense3d.dataset.toolkit.opv2v.boxes_3d_to_2d(boxes3d, num_pts, lidar2cam, I, img_size)[source]
cosense3d.dataset.toolkit.opv2v.convert_bev_semantic_map_to_road_height_map(map_dir, map_bounds_file, scenario_town_map_file, meta_dir)[source]
cosense3d.dataset.toolkit.opv2v.corner_to_center(corner3d, order='lwh')[source]

Convert 8 corners to x, y, z, dx, dy, dz, yaw.

Parameters

corner3d : np.ndarray

(N, 8, 3)

order : str

‘lwh’ or ‘hwl’

Returns

box3d : np.ndarray

(N, 7)
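
A minimal NumPy sketch of the conversion (not the library implementation); it assumes corners 0-3 form the bottom face, 4-7 the top face, and that edge 0 -> 1 runs along the box length:

    import numpy as np

    def corner_to_center_sketch(corner3d, order='lwh'):
        # Center is the mean of the 8 corners; dimensions come from edge
        # lengths; yaw from the direction of the length edge in the xy-plane.
        center = corner3d.mean(axis=1)                                # (N, 3)
        l = np.linalg.norm(corner3d[:, 1] - corner3d[:, 0], axis=1)
        w = np.linalg.norm(corner3d[:, 2] - corner3d[:, 1], axis=1)
        h = np.linalg.norm(corner3d[:, 4] - corner3d[:, 0], axis=1)
        edge = corner3d[:, 1, :2] - corner3d[:, 0, :2]
        yaw = np.arctan2(edge[:, 1], edge[:, 0])
        dims = np.stack([l, w, h], axis=1)
        if order == 'hwl':
            dims = dims[:, ::-1]
        return np.concatenate([center, dims, yaw[:, None]], axis=1)  # (N, 7)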

cosense3d.dataset.toolkit.opv2v.create_bbx(extent)[source]

Create a bounding box with 8 corners in the obstacle vehicle's reference frame.

Parameters

extent : list

Width, height, length of the bbx.

Returns

bbx : np.array

The bounding box with 8 corners, shape: (8, 3)
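
A hypothetical sketch of the corner layout, treating extent as the full width/height/length per the docstring and using the same corner ordering as the corner_to_center sketch above:

    import numpy as np

    def create_bbx_sketch(extent):
        # Box centred at the origin; corners 0-3 are the bottom face,
        # 4-7 the top face, and edge 0 -> 1 runs along the length.
        w, h, l = extent
        x = np.array([-1, 1, 1, -1, -1, 1, 1, -1]) * l / 2
        y = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * w / 2
        z = np.array([-1, -1, -1, -1, 1, 1, 1, 1]) * h / 2
        return np.stack([x, y, z], axis=1)  # (8, 3)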

cosense3d.dataset.toolkit.opv2v.generate_bevmaps(data_dir, meta_path)[source]
cosense3d.dataset.toolkit.opv2v.generate_roadline(map_dir, map_bounds_file)[source]

Convert global BEV semantic maps to 2d road line points.

Parameters:
  • map_dir – directory for images of BEV semantic maps

  • map_bounds_file – JSON file that describes the world coordinates of the BEV map origin (image[0, 0])

Returns:

Nx2 array, 2d world coordinates of road line points in meters.
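
A hedged sketch of the pixel-to-world conversion; the bounds-file layout, the 0.2 m-per-pixel resolution, and the axis orientation are all assumptions:

    import json
    import numpy as np
    from PIL import Image

    def roadline_points_sketch(map_png, bounds_json, town, resolution=0.2):
        # Assumptions (hypothetical): the bounds file maps town -> [x_min, y_min]
        # of image[0, 0], columns grow along world x and rows along world y,
        # and any non-zero pixel belongs to a road line.
        with open(bounds_json) as f:
            x0, y0 = json.load(f)[town][:2]
        mask = np.array(Image.open(map_png).convert('L')) > 0
        rows, cols = np.nonzero(mask)
        return np.stack([x0 + cols * resolution, y0 + rows * resolution], axis=1)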

cosense3d.dataset.toolkit.opv2v.opv2v_pose_to_cosense(pose)[source]
cosense3d.dataset.toolkit.opv2v.opv2v_to_cosense(path_in, path_out, isSim=True, correct_transf=False, pcd_ext='pcd')[source]
cosense3d.dataset.toolkit.opv2v.pose_to_transformation(pose)[source]

Parameters:

pose: list, [x, y, z, roll, pitch, yaw]

Returns:

transformation: np.ndarray, (4, 4)

cosense3d.dataset.toolkit.opv2v.project_points(points, lidar2cam, I)[source]

Project 3D points onto image planes.
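
A minimal sketch of the projection, assuming points is (N, 3) in the lidar frame, lidar2cam a (4, 4) extrinsic, and I the (3, 3) camera intrinsic matrix:

    import numpy as np

    def project_points_sketch(points, lidar2cam, I):
        # Homogenize, move into the camera frame, then apply the pinhole model.
        pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
        pts_cam = (lidar2cam @ pts_h.T)[:3]   # (3, N) camera coordinates
        uvw = I @ pts_cam                     # perspective projection
        return (uvw[:2] / uvw[2]).T           # (N, 2) pixel coordinates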

cosense3d.dataset.toolkit.opv2v.project_world_objects(object_dict, output_dict, lidar_pose, order)[source]

Project the objects under world coordinates into another coordinate system based on the provided extrinsic.

Parameters

object_dict : dict

The dictionary containing all objects surrounding a certain CAV.

output_dict : dict

key: object id, value: object bbx (xyzlwhyaw).

lidar_pose : list

(6,), lidar pose in world coordinates: [x, y, z, roll, yaw, pitch].

order : str

‘lwh’ or ‘hwl’
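
A flattened sketch of the idea (hypothetical helper, not the library function); it takes the world-to-lidar matrix directly and assumes near-planar poses so only yaw needs re-referencing:

    import numpy as np

    def project_world_objects_sketch(object_dict, lidar_pose, T_lidar_world):
        # T_lidar_world: 4x4 world -> lidar transform, e.g. the inverse of the
        # pose matrix built by x_to_world (sketched further below).
        output = {}
        lidar_yaw = lidar_pose[4]  # [x, y, z, roll, yaw, pitch]; units must match box yaw
        for obj_id, box in object_dict.items():
            x, y, z, l, w, h, yaw = box
            center = T_lidar_world @ np.array([x, y, z, 1.0])
            output[obj_id] = [*center[:3], l, w, h, yaw - lidar_yaw]
        return output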

cosense3d.dataset.toolkit.opv2v.update_2d_bboxes(fdict, cav_id, lidar_pose, data_dir)[source]
cosense3d.dataset.toolkit.opv2v.update_cam_params(opv2v_params, cosense_fdict, agent_id, scenario, frame)[source]
cosense3d.dataset.toolkit.opv2v.update_global_bboxes_num_pts(data_dir, meta_path)[source]
cosense3d.dataset.toolkit.opv2v.update_local_boxes3d(fdict, objects_dict, ref_pose, order, data_dir, cav_id)[source]
cosense3d.dataset.toolkit.opv2v.x1_to_x2(x1, x2)[source]

Transformation matrix from x1 to x2.

Parameters

x1 : list or np.ndarray

The pose of x1 under world coordinates or transformation matrix x1->world

x2 : list or np.ndarray

The pose of x2 under world coordinates or transformation matrix x2->world

Returns

transformation_matrix : np.ndarray

The transformation matrix.
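
With both arguments already given as 4x4 x->world matrices, the operation reduces to a single composition (list poses would first be converted, e.g. with x_to_world):

    import numpy as np

    def x1_to_x2_sketch(T_x1_world, T_x2_world):
        # x1 -> world -> x2: apply x1's world transform, then invert x2's.
        return np.linalg.inv(T_x2_world) @ T_x1_world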

cosense3d.dataset.toolkit.opv2v.x_to_world(pose: list) → ndarray[source]

The transformation matrix from the x coordinate system to the CARLA world system.

Parameters:

pose – [x, y, z, roll, yaw, pitch]

Returns:

The transformation matrix.
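
A hedged sketch of the pose-to-matrix construction; the degree units (as in the CARLA/OPV2V yaml files) and the rotation order R = R_yaw @ R_pitch @ R_roll are assumptions to verify against the source:

    import numpy as np

    def x_to_world_sketch(pose):
        # pose = [x, y, z, roll, yaw, pitch], angles assumed in degrees.
        x, y, z, roll, yaw, pitch = pose
        roll, yaw, pitch = np.radians([roll, yaw, pitch])
        c, s = np.cos, np.sin
        Rz = np.array([[c(yaw), -s(yaw), 0], [s(yaw), c(yaw), 0], [0, 0, 1]])
        Ry = np.array([[c(pitch), 0, s(pitch)], [0, 1, 0], [-s(pitch), 0, c(pitch)]])
        Rx = np.array([[1, 0, 0], [0, c(roll), -s(roll)], [0, s(roll), c(roll)]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = [x, y, z]
        return T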

cosense3d.dataset.toolkit.opv2v_t module

cosense3d.dataset.toolkit.opv2v_t.gen_time_offsets(data_dir)[source]
cosense3d.dataset.toolkit.opv2v_t.generate_roadline_reference_points(root_dir, meta_file)[source]
cosense3d.dataset.toolkit.opv2v_t.get_box_velo(box, speeds, frame)[source]
cosense3d.dataset.toolkit.opv2v_t.get_local_boxes3d(objects_dict, ref_pose, order)[source]
cosense3d.dataset.toolkit.opv2v_t.get_velos(boxes, speeds, frame)[source]
cosense3d.dataset.toolkit.opv2v_t.load_frame_data(scene_dir, cavs, frame)[source]
cosense3d.dataset.toolkit.opv2v_t.load_vehicles_gframe(params)[source]

Load vehicles in the global coordinate system.

cosense3d.dataset.toolkit.opv2v_t.opv2vt_to_cosense(data_dir, split, data_out_dir, meta_out_dir)[source]
cosense3d.dataset.toolkit.opv2v_t.pad_box_result(res, out_len)[source]
cosense3d.dataset.toolkit.opv2v_t.parse_speed_from_yamls(scene_dir)[source]
cosense3d.dataset.toolkit.opv2v_t.parse_sub_frame(f)[source]
cosense3d.dataset.toolkit.opv2v_t.read_frame_plys_boxes(path, frame, prev_frame=None, time_offset=0, parse_boxes=True)[source]
cosense3d.dataset.toolkit.opv2v_t.read_ply(filename, properties=None)[source]
cosense3d.dataset.toolkit.opv2v_t.read_ply_to_dict(f)[source]
cosense3d.dataset.toolkit.opv2v_t.read_sub_frame(f)[source]
cosense3d.dataset.toolkit.opv2v_t.transform_boxes_global_to_ref(boxes, ref_pose)[source]
cosense3d.dataset.toolkit.opv2v_t.update_bev_map(root_dir, meta_in, meta_out, split)[source]
cosense3d.dataset.toolkit.opv2v_t.update_global_boxes(root_dir, meta_in, meta_out, split)[source]
cosense3d.dataset.toolkit.opv2v_t.update_velo(scenario_meta_file)[source]
cosense3d.dataset.toolkit.opv2v_t.vis_cosense_scenario(scenario_meta_file, data_dir)[source]
cosense3d.dataset.toolkit.opv2v_t.vis_frame_data()[source]

Module contents

cosense3d.dataset.toolkit.callback_registrations(source, target, source_points, target_points)[source]

Callback function for point picking. Registers two point clouds using selected corresponding points.
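
A sketch of what such a correspondence-based registration could look like with Open3D; the function name and the idea of seeding a later ICP refinement are illustrative, not the toolkit's implementation:

    import numpy as np
    import open3d as o3d

    def estimate_from_picked_points(source, target, src_idx, tgt_idx):
        # Build (source, target) index pairs from the picked points and
        # estimate a rigid transform; the result could then seed an ICP
        # refinement such as register_pcds below.
        corres = o3d.utility.Vector2iVector(
            np.stack([src_idx, tgt_idx], axis=1).astype(np.int32))
        est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
        return est.compute_transformation(source, target, corres)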

cosense3d.dataset.toolkit.click_register(source, target)[source]
cosense3d.dataset.toolkit.register_pcds(source_cloud, target_cloud, initial_transf, thr=0.2, visualize=False, title='PCD')[source]