For Japanese

Biography

Profile

  • Name: Satoshi Tanaka

Work Experience

  • Apr. 2020 - Present, TIER IV, Inc., Autonomous Driving Sensing/Perception Engineer
  • Internship
    • Apr. 2018 - Apr. 2019, Internship at Preferred Networks, Inc. as a part-time engineer
    • Aug. 2017 - Mar. 2018, Internship at Hitachi, Ltd. as a research assistant

Academic Background

  • Master’s Degree in Information Science and Engineering, the University of Tokyo
    • Apr. 2018 - Mar. 2020, Ishikawa Senoo Lab, Department of Creative Informatics, Graduate School of Information Science and Technology
  • Bachelor’s Degree in Precision Engineering, the University of Tokyo
    • Apr. 2017 - Mar. 2018, Kotani Lab, Research Center for Advanced Science and Technology
    • Apr. 2016 - Mar. 2018, Dept. of Precision Engineering
    • Apr. 2014 - Mar. 2016, Faculty of Liberal Arts

Interests

  • Robotics, Computer Vision, Control Theory
  • High-speed Robotics
    • System integration of high-speed robots using 1000 fps high-speed image processing
    • Deformation control and robot force control for high-speed dynamic manipulation
    • Applications of high-speed visual control to logistics and Unmanned Aerial Vehicles (UAVs)
  • Robot vision
    • 3D perception for robotics with sensor fusion
  • Other hobbies

Publication

International Conference (First author)

  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, Makoto Shimojo, and Masatoshi Ishikawa: High-speed Hitting Grasping with Magripper, a Highly Backdrivable Gripper using Magnetic Gear and Plastic Deformation Control, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2020), Proceedings, pp. 9137-9143. [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]
  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, and Masatoshi Ishikawa: Adaptive Visual Shock Absorber with Visual-based Maxwell Model Using Magnetic Gear, The 2020 International Conference on Robotics and Automation (ICRA2020), Proceedings, pp. 6163-6168.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa: Non-Stop Handover of Parcel to Airborne UAV Based on High-Speed Visual Object Tracking, 2019 19th International Conference on Advanced Robotics (ICAR2019), Proceedings, pp. 414-419.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa: High-speed UAV Delivery System with Non-Stop Parcel Handover Using High-speed Visual Control, 2019 IEEE Intelligent Transportation Systems Conference (ITSC19), Proceedings, pp. 4449-4455.

International Conference (Not first author)

  • Taisei Fujimoto, Satoshi Tanaka, and Shinpei Kato: LaneFusion: 3D Object Detection with Rasterized Lane Map, the 33rd IEEE Intelligent Vehicles Symposium (IV 2022), Proceedings, pp. 396-403.

Other publication

  • Kazunari Kawabata, Manato Hirabayashi, David Wong, Satoshi Tanaka, and Akihito Ohsato: AD perception and applications using automotive HDR cameras, the 4th Autoware Workshop at the 33rd IEEE Intelligent Vehicles Symposium (IV 2022)

Awards and Scholarships

Projects

mmCarrot

DepthAnything-ROS

(Research) LaneFusion: 3D detection with an HD map

  • Accepted at IV2022

(Research) High-speed Hitting Grasping with Magripper

  • Accepted at IROS2020 [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]

(Research) Adaptive Visual Shock Absorber with Magslider

  • Accepted at ICRA2020

(Research) High-speed supply station for UAV delivery system

  • Accepted at ITSC2019


Robotic Competition

  • Team leader for ABU Robocon 2016
  • Winner of the national championship, second runner-up at ABU Robocon, and recipient of the ABU Robocon Award.
  • Visited the Prime Minister’s residence as the leader of the team representing Japan. Reported by link and link.

Other projects

Latest changes (blog, survey)

Center-based 3D Object Detection and Tracking (arXiv 2020/06, CVPR2021)
Summary: LiDAR-based 3D object detection + tracking. Anchor-free; the de facto standard since around 2020. GitHub: https://github.com/tianweiy/CenterPoint. A method that represents each object by its center point; being anchor-free by design, tracking…
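
The center-based idea above is simple enough to sketch. Below is a minimal, hypothetical example of the training-target side (the function name and grid size are mine, not CenterPoint's code): each ground-truth object is splatted as a Gaussian peak onto a per-class bird's-eye-view heatmap, and box attributes are regressed only at peak locations.

    import numpy as np

    def draw_center_heatmap(heatmap, center_xy, radius):
        """Splat one object as a Gaussian peak on a BEV heatmap (center-based target)."""
        h, w = heatmap.shape
        cx, cy = int(center_xy[0]), int(center_xy[1])
        sigma = radius / 3.0
        for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
                g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
                heatmap[y, x] = max(heatmap[y, x], g)  # keep the strongest peak on overlap

    # One heatmap per class; no anchor boxes anywhere in the pipeline.
    hm = np.zeros((128, 128), dtype=np.float32)
    draw_center_heatmap(hm, center_xy=(40.0, 80.0), radius=4)
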
EagerMOT: 3D Multi-Object Tracking via Sensor Fusion (ICRA2021)
Summary: A sensor-fusion tracking framework that supports camera-only, LiDAR-only, and camera + LiDAR inputs. (i) fusion of 3D and 2D evidence that merges detections…
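
To make step (i) concrete, here is a deliberately simplified greedy matcher (my own sketch; EagerMOT's actual association is more elaborate): each 3D detection is projected into the image and paired with the best-overlapping 2D detection, so a fused instance carries both kinds of evidence.

    import numpy as np

    def iou_2d(a, b):
        """IoU of two image boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / (union + 1e-9)

    def fuse_detections(projected_3d_boxes, boxes_2d, thr=0.3):
        """Greedily pair each projected 3D detection with its best 2D match."""
        pairs, used = [], set()
        for i, a in enumerate(projected_3d_boxes):
            ious = [iou_2d(a, b) if j not in used else 0.0
                    for j, b in enumerate(boxes_2d)]
            if ious and max(ious) > thr:
                j = int(np.argmax(ious))
                used.add(j)
                pairs.append((i, j))  # fused instance: 3D + 2D evidence
        return pairs
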
Investigating the Effect of Sensor Modalities in Multi-Sensor Detection-Prediction Models (NeurIPS 2020 Workshop)
Summary: Uses sensor dropout to analyze each sensor's contribution in joint detection + prediction models. From Uber. The model is a combination of two networks: RV-MultiXNet (S. Fadadu, S. Pandey, D. Hegde, Y. Shi, F.-C.…
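
The core trick is small. A minimal sketch of sensor dropout as I read it (the batch layout and names below are assumptions, not the paper's code): randomly blank one modality during training, then use the same switch at evaluation time to measure how much each sensor contributes.

    import numpy as np

    def sensor_dropout(batch, p_drop=0.5, rng=np.random):
        """Randomly zero out one input modality so the model cannot over-rely on it."""
        out = dict(batch)  # shallow copy; batch maps modality name -> array
        if rng.rand() < p_drop:
            victim = rng.choice(list(out.keys()))
            out[victim] = np.zeros_like(out[victim])
        return out

    # Toy usage with a hypothetical LiDAR BEV raster and camera image.
    batch = {"lidar": np.ones((64, 64)), "camera": np.ones((3, 96, 96))}
    batch = sensor_dropout(batch, p_drop=1.0)  # always drops one modality here
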
Let’s Get Dirty: GAN Based Data Augmentation for Camera Lens Soiling Detection in Autonomous Driving (WACV2021)
Summary: https://openaccess.thecvf.com/content/WACV2021/papers/Uricar_Lets_Get_Dirty_GAN_Based_Data_Augmentation_for_Camera_Lens_WACV_2021_paper.pdf. A dataset of soiled fisheye-lens images with soiling masks is planned for release at https://github.com/uricamic/soiling (still unreleased as of 2021/02/06). To an existing dataset…
LU-Net: An Efficient Network for 3D LiDAR Point Cloud Semantic Segmentation Based on End-to-End-Learned 3D Features and U-Net (ICCV Workshop 2019)
Summary: https://arxiv.org/abs/1908.11656, GitHub: https://github.com/pbias/lunet. LiDAR 3D semantic segmentation: 3D -> 2D -> U-Net. Contains a clear overview figure of segmentation on range images; my impression is that the range-image-based approach is explained in relatively fine detail.
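
The 3D -> 2D step shared by these range-image methods (including the RangeNet++ entry below) is an ordinary spherical projection. A sketch under typical assumptions (64 x 1024 image, Velodyne-like vertical FOV; the parameter values are illustrative):

    import numpy as np

    def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
        """Project an (N, 3) LiDAR cloud to an h x w range image."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.linalg.norm(points, axis=1) + 1e-9
        yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
        pitch = np.arcsin(z / r)                      # elevation
        up, down = np.radians(fov_up), np.radians(fov_down)
        u = 0.5 * (1.0 - yaw / np.pi) * w             # column from azimuth
        v = (1.0 - (pitch - down) / (up - down)) * h  # row from elevation
        u = np.clip(np.floor(u), 0, w - 1).astype(np.int64)
        v = np.clip(np.floor(v), 0, h - 1).astype(np.int64)
        img = np.zeros((h, w), dtype=np.float32)
        img[v, u] = r                                 # later points win on collisions
        return img

The resulting image feeds a 2D CNN (a U-Net here), and the predicted labels are projected back onto the points afterwards.
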
Multi-View 3D Object Detection Network for Autonomous Driving (IROS2019)
Summary: Proposes RangeNet++. Paper: https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/milioto2019iros.pdf. YouTube: https://www.youtube.com/watch?v=wuokg7MFZyU. Author: http://jbehley.github.io/. GitHub: https://github.com/PRBonn/lidar-bonnetal. LiDAR-only semantic segmentation; real-time LiDAR processing at roughly 10 fps. A projection-based method: 2D processing using a spherical projection…
Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks (ICRA2019)
Summary: Single-agent multi-modal prediction using a rasterized map. Code: https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/prediction/models/mtp.py. Extends the model to output multiple trajectories (= multi-modal) together with their probabilities; a single-modal model produces paths the agent would never actually take. Method: (i) a…
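
The multi-modal extension comes down to the loss. Here is a simplified sketch of the best-of-K idea (not the devkit implementation; MTP proper selects the best mode by trajectory angle, while I use L2 distance for brevity): regress only the candidate closest to the ground truth, and classify which candidate that was.

    import torch
    import torch.nn.functional as F

    def best_of_k_loss(pred_trajs, mode_logits, gt_traj):
        """pred_trajs: (B, K, T, 2), mode_logits: (B, K), gt_traj: (B, T, 2)."""
        # L2 distance of every candidate trajectory to the ground truth
        dist = ((pred_trajs - gt_traj[:, None]) ** 2).sum(-1).mean(-1)  # (B, K)
        best = dist.argmin(dim=1)                                       # closest mode
        idx = torch.arange(pred_trajs.shape[0])
        reg = F.smooth_l1_loss(pred_trajs[idx, best], gt_traj)  # fit only the best mode
        cls = F.cross_entropy(mode_logits, best)                # score the best mode
        return reg + cls

    # Toy shapes: batch of 2, K = 3 modes, horizon of 12 steps.
    loss = best_of_k_loss(torch.randn(2, 3, 12, 2), torch.randn(2, 3),
                          torch.randn(2, 12, 2))
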
Multiple Trajectory Prediction with Deep Temporal and Spatial Convolutional Neural Networks (IROS2020)
Summary: Proposes trajectory prediction using temporal convolutional networks (TCNs). Method: a lightweight and promising overall framework; using MobileNet shows that real-time operation was taken into account, which is good.
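
For reference, a generic dilated causal convolution block of the kind TCNs stack (my own minimal sketch; the paper's exact architecture differs): left-only padding keeps the convolution causal, and doubling the dilation per block grows the temporal receptive field exponentially at low cost, which is what makes TCNs attractive for real-time prediction.

    import torch
    import torch.nn as nn

    class CausalTCNBlock(nn.Module):
        """One dilated, causal 1-D convolution block with a residual connection."""
        def __init__(self, channels, kernel_size=3, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation  # left-pad only => causal
            self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
            self.act = nn.ReLU()

        def forward(self, x):                        # x: (batch, channels, time)
            y = nn.functional.pad(x, (self.pad, 0))  # never peeks at future steps
            return self.act(self.conv(y)) + x

    # Two blocks over a 20-step history of 16-channel features.
    net = nn.Sequential(CausalTCNBlock(16, dilation=1), CausalTCNBlock(16, dilation=2))
    out = net(torch.randn(1, 16, 20))  # shape preserved: (1, 16, 20)
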
MVLidarNet: Real-Time Multi-Class Scene Understanding for Autonomous Driving Using Multiple Views (IROS2020)
Summary: Links: https://arxiv.org/abs/2006.05518 (arXiv version), https://www.youtube.com/watch?v=2ck5_sToayc, https://www.youtube.com/watch?v=T7w-ZCVVUgM, and the NVIDIA blog: https://blogs.nvidia.com/blog/2020/03/11/drive-labs-multi-view-lidarnet-self-driving-cars/. No GitHub repository as of 2020/10. A two-stage LiDAR 3D multi-class detection framework; “multi-view” = “perspective…
One Million Scenes for Autonomous Driving: ONCE Dataset (arXiv 2021/06)
Summary: The ONCE (One millioN sCenEs) dataset, released by Huawei. It compiles a wide range of baseline results, so it is also valuable as a survey article…