EagerMOT: 3D Multi-Object Tracking via Sensor Fusion (ICRA2021)
summary A sensor-fusion tracking framework that works with camera-only, LiDAR-only, or camera+LiDAR input. (i) fusion of 3D and 2D evidence that merges detections
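The 2D–3D fusion stage can be illustrated as greedy IoU matching between 3D detections projected into the image plane and 2D detections. This is a minimal sketch, not EagerMOT's exact procedure; `iou_thresh` and the greedy ordering are my assumptions, and the projected 3D boxes are assumed to already be axis-aligned image rectangles:

```python
def iou_2d(a, b):
    # a, b: [x1, y1, x2, y2] axis-aligned boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_detections(proj_3d_boxes, boxes_2d, iou_thresh=0.3):
    """Greedily pair each projected 3D box with at most one 2D box,
    taking the highest-IoU pairs first."""
    scores = sorted(
        ((iou_2d(p, q), i, j)
         for i, p in enumerate(proj_3d_boxes)
         for j, q in enumerate(boxes_2d)),
        reverse=True)
    pairs, used_3d, used_2d = [], set(), set()
    for s, i, j in scores:
        if s < iou_thresh or i in used_3d or j in used_2d:
            continue
        pairs.append((i, j))
        used_3d.add(i)
        used_2d.add(j)
    return pairs
```

Unmatched detections of either kind would then continue through the tracker on their own, which is what lets the framework degrade gracefully to camera-only or LiDAR-only input.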
Investigating the Effect of Sensor Modalities in Multi-Sensor Detection-Prediction Models (NeurIPS 2020 Workshop)
summary Uses sensor dropout to analyze each sensor's contribution in joint detection + prediction models. Combines two Uber models. RV-MultiXNet (S. Fadadu, S. Pandey, D. Hegde, Y. Shi, F.-C.
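The sensor-dropout idea can be sketched minimally as randomly zeroing one modality's features so the analysis (or training) reveals how much the model relies on each sensor. This is my own simplification; the paper applies it inside a multi-sensor detection-prediction network, not to plain feature vectors like this:

```python
import numpy as np

def sensor_dropout(lidar_feat, camera_feat, p_drop=0.2, rng=None):
    """With probability p_drop, zero out exactly one modality,
    chosen uniformly at random."""
    rng = rng or np.random.default_rng()
    if rng.random() < p_drop:
        if rng.random() < 0.5:
            lidar_feat = np.zeros_like(lidar_feat)
        else:
            camera_feat = np.zeros_like(camera_feat)
    return lidar_feat, camera_feat
```

Running inference with one modality forcibly dropped and measuring the metric degradation gives a per-sensor contribution estimate.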
Let’s Get Dirty: GAN Based Data Augmentation for Camera Lens Soiling Detection in Autonomous Driving (WACV2021)
Summary https://openaccess.thecvf.com/content/WACV2021/papers/Uricar_Lets_Get_Dirty_GAN_Based_Data_Augmentation_for_Camera_Lens_WACV_2021_paper.pdf A dataset of soiled fisheye-lens images. A dataset with soiling masks is planned for release at https://github.com/uricamic/soiling (still unreleased as of 2021/02/06). For a certain dataset
LU-Net: An Efficient Network for 3D LiDAR Point Cloud Semantic Segmentation Based on End-to-End-Learned 3D Features and U-Net (ICCV Workshop 2019)
Summary https://arxiv.org/abs/1908.11656 https://github.com/pbias/lunet github LiDAR 3D semantic segmentation via 3D -> 2D -> U-Net. Includes a clear overview figure of segmentation on range images; explains the range-image-based approach in relatively fine detail.
RangeNet++: Fast and Accurate LiDAR Semantic Segmentation (IROS2019)
Summary Proposes RangeNet++. https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/milioto2019iros.pdf paper https://www.youtube.com/watch?v=wuokg7MFZyU youtube http://jbehley.github.io/ author https://github.com/PRBonn/lidar-bonnetal github LiDAR-only semantic segmentation with real-time processing (~10 fps). A projection-based method: 2D segmentation on a spherical projection
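The spherical projection shared by these range-image methods can be sketched as follows. A minimal version, assuming a 64-beam sensor with a +3 / -25 degree vertical field of view (a common KITTI-style setup); parameter names and defaults are mine, not the paper's:

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an H x W range image.
    Empty pixels hold -1; nearer points overwrite farther ones."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up - fov_down
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / r)
    u = 0.5 * (1.0 - yaw / np.pi)        # azimuth  -> column in [0, 1]
    v = 1.0 - (pitch - fov_down) / fov   # elevation -> row in [0, 1]
    cols = np.clip((u * W).astype(int), 0, W - 1)
    rows = np.clip((v * H).astype(int), 0, H - 1)
    img = np.full((H, W), -1.0)
    order = np.argsort(r)[::-1]          # write farthest first
    img[rows[order], cols[order]] = r[order]
    return img
```

The resulting 2D image (usually with extra channels for x, y, z, and remission) is what gets fed to the U-Net-style backbone; labels are projected back to 3D afterwards.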
Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks (ICRA2019)
Summary Single-agent multi-modal prediction using a raster map. https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/prediction/models/mtp.py github code Extends the model to output multiple trajectories with associated probabilities (= multi-modal); a single-modal output tends to produce paths toward places the agent will never actually go. Method i: a
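The multi-modal training objective can be sketched as a winner-takes-all loss: regress only the mode closest to the ground truth, plus a classification term pushing probability onto that mode. This is a simplified illustration; the actual MTP paper selects the best mode by trajectory angle rather than pure displacement error, and `alpha` is a hypothetical weight:

```python
import numpy as np

def mtp_loss(pred_trajs, pred_logits, gt_traj, alpha=1.0):
    """
    pred_trajs:  (M, T, 2) candidate trajectories
    pred_logits: (M,) unnormalized mode scores
    gt_traj:     (T, 2) ground-truth trajectory
    Returns best-mode L2 loss + cross-entropy on the best mode.
    """
    # average displacement error of each mode vs. ground truth
    ade = np.linalg.norm(pred_trajs - gt_traj, axis=-1).mean(axis=-1)  # (M,)
    best = int(np.argmin(ade))
    # log-softmax; cross-entropy with the best mode as the "label"
    logp = pred_logits - np.log(np.exp(pred_logits).sum())
    return ade[best] + alpha * (-logp[best])
```

Penalizing only the best mode is what lets the remaining modes specialize on distinct maneuvers instead of collapsing to an averaged, physically implausible path.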
Multiple Trajectory Prediction with Deep Temporal and Spatial Convolutional Neural Networks (IROS2020)
summary Proposes trajectory prediction using temporal convolutional networks (TCNs). Method: a lightweight, solid-looking overall framework; the MobileNet backbone shows real-time use was considered.
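The building block of a TCN is a causal dilated 1D convolution: the output at time t depends only on inputs up to t, and stacking layers with growing dilation widens the temporal receptive field cheaply. A minimal single-channel sketch (function name and shapes are mine, not the paper's):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1D convolution.
    x: (T,) input signal, w: (K,) kernel.
    Output at t uses x[t], x[t-dilation], ..., x[t-(K-1)*dilation],
    with zero-padding on the left so the output keeps length T."""
    K = len(w)
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[k] * xp[t + pad - k * dilation] for k in range(K))
        for t in range(len(x))
    ])
```

Compared with an RNN over the agent's past states, this processes the whole history in parallel, which fits the real-time emphasis of the paper.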
MVLidarNet: Real-Time Multi-Class Scene Understanding for Autonomous Driving Using Multiple Views (IROS2020)
Summary Link https://arxiv.org/abs/2006.05518 arxiv version https://www.youtube.com/watch?v=2ck5_sToayc https://www.youtube.com/watch?v=T7w-ZCVVUgM https://blogs.nvidia.com/blog/2020/03/11/drive-labs-multi-view-lidarnet-self-driving-cars/ nvidia blog No GitHub repo as of 2020/10. A two-stage LiDAR 3D multi-class detection framework; "multi-view" = "perspective
One Million Scenes for Autonomous Driving: ONCE Dataset (2021/06 arxiv)
summary ONCE (One millioN sCenEs) dataset, released by Huawei. Summarizes results for many baselines, which makes it valuable as a survey article. baseli
Optimising the selection of samples for robust lidar camera calibration (arxiv 2021/03)
Summary Appears to be packaged in an easy-to-use form. https://gitlab.acfr.usyd.edu.au/its/cam_lidar_calibration https://www.youtube.com/watch?v=WmzEnjmffQU A target-based LiDAR-camera calibration pipeline designed so that even a novice can run the calibration