
Biography

Profile

  • Name: Satoshi Tanaka

Work Experience

  • Aug. 2025 - present, Freelance engineer
  • Apr. 2020 - Jul. 2025, TIER IV, Inc.
    • Jun. 2024 - Jul. 2025 Lead Software Engineer for MLOps in perception module
    • Apr. 2020 - Dec. 2023 Software Engineer for Autonomous Driving
  • Internship
    • Apr. 2018 - Apr. 2019, Internship at Preferred Networks, Inc. as a part-time engineer
    • Aug. 2017 - Mar. 2018, Internship at Hitachi, Ltd. as a research assistant

Academic Background

  • Master’s Degree in Information Science and Engineering, the University of Tokyo
    • Apr. 2018 - Mar. 2020, Ishikawa Senoo Lab, Department of Creative Informatics, Graduate School of Information Science and Technology
  • Bachelor’s Degree in Precision Engineering, the University of Tokyo
    • Apr. 2017 - Mar. 2018, Kotani Lab, Research Center for Advanced Science and Technology
    • Apr. 2016 - Mar. 2018, Dept. of Precision Engineering
    • Apr. 2014 - Mar. 2016, Faculty of Liberal Arts

Interests

  • Robotics, Computer Vision, Control Theory
  • Building autonomous robotic systems that can interact with the physical world faster and more dexterously than humans
    • Real-time 3D object detection
    • Development of mechanisms capable of fast and compliant motion
    • Force control for dynamic manipulation

Publications

International Conference (First author)

  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, Makoto Shimojo, and Masatoshi Ishikawa, High-speed Hitting Grasping with Magripper, a Highly Backdrivable Gripper using Magnetic Gear and Plastic Deformation Control, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2020), Proceedings, pp. 9137-9143. [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]
  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, and Masatoshi Ishikawa, Adaptive Visual Shock Absorber with Visual-based Maxwell Model Using Magnetic Gear, The 2020 International Conference on Robotics and Automation (ICRA2020), Proceedings, pp. 6163-6168.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa, Non-Stop Handover of Parcel to Airborne UAV Based on High-Speed Visual Object Tracking, 2019 19th International Conference on Advanced Robotics (ICAR2019), Proceedings, pp. 414-419.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa, High-speed UAV Delivery System with Non-Stop Parcel Handover Using High-speed Visual Control, 2019 IEEE Intelligent Transportation Systems Conference (ITSC19), Proceedings, pp. 4449-4455.

arXiv papers (First author)

  • Satoshi Tanaka, Koji Minoda, Fumiya Watanabe, Takamasa Horibe, Rethink 3D Object Detection from Physical World, arXiv 2025, https://arxiv.org/abs/2507.00190.
  • Satoshi Tanaka, Samrat Thapa, Kok Seang Tan, Amadeusz Szymko, Lobos Kenzo, Koji Minoda, Shintaro Tomie, Kotaro Uetake, Guolong Zhang, Isamu Yamashita, Takamasa Horibe, AWML: An Open-Source ML-based Robotics Perception Framework to Deploy for ROS-based Autonomous Driving Software, arXiv 2025, https://arxiv.org/abs/2506.00645.

International Conference (Not first author)

  • Taisei Fujimoto, Satoshi Tanaka, and Shinpei Kato, LaneFusion: 3D Object Detection with Rasterized Lane Map, the 2022 33rd IEEE Intelligent Vehicles Symposium (IV 2022), Proceedings, pp. 396-403.

Other publications

  • Kazunari Kawabata, Manato Hirabayashi, David Wong, Satoshi Tanaka, and Akihito Ohsato, AD perception and applications using automotive HDR cameras, the 4th Autoware workshop at the 2022 33rd IEEE Intelligent Vehicles Symposium (IV 2022)

Awards and Scholarships

Projects

(Research) Rethink 3D Object Detection from Physical World

AWML

DepthAnything-ROS

(Research) High-speed Hitting Grasping with Magripper

  • Developed Magripper, a highly backdrivable gripper, and hitting grasping, a high-speed grasping framework in which the grasp is executed seamlessly from the reaching motion.
  • Accepted at IROS2020 [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]

(Research) Adaptive Visual Shock Absorber with Magslider

  • Developed a visual shock absorber system combining high-speed vision, highly backdrivable hardware, and force control.
  • Accepted at ICRA2020
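The viscoelastic Maxwell model this absorber is based on (a spring and a damper in series) can be sketched in a few lines. This is an illustrative discretization under my own naming (`maxwell_force`, `k`, `c`), not the paper's actual controller:

```python
# Discrete-time Maxwell model: spring k and damper c in series.
# With f/k the spring stretch and f/c the damper rate,
# the force obeys df/dt = k * (dx/dt - f/c).

def maxwell_force(positions, dt, k, c):
    """Contact force for an imposed displacement trajectory."""
    f = 0.0
    forces = []
    prev_x = positions[0]
    for x in positions:
        v = (x - prev_x) / dt        # imposed velocity this step
        f += dt * k * (v - f / c)    # forward-Euler update of the force
        prev_x = x
        forces.append(f)
    return forces
```

Pushing in at constant velocity and then holding makes the force peak at impact and relax toward zero instead of springing back, which is the behavior a shock absorber wants.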

(Research) High-speed supply station for UAV delivery system

  • Developed a high-speed supply station for a UAV delivery system
  • Accepted at ITSC2019


Robotic Competition

  • Team Leader for ABU Robocon2016
  • Winner of the national championship, 2nd runner-up of ABU Robocon, and recipient of the ABU Robocon award.
  • Visited the Prime Minister's residence as leader of the Japanese representative team. Reported by link and link.

Other projects

Latest changes (blog, surveys)

LaneFusion: 3D detection using a map
Summary: Researched at TIER IV, Inc. This work uses LiDAR and a vector map for 3D detection, aiming to suppress estimates in the opposite direction of an object…
Configuring VSCode terminal tabs
Overview: "Terminals in the editor area", added in VSCode v1.58 (2021/06), looked handy, so I revisited my VSCode settings accordingly…
A Rust crate that extracts images from Rosbag2
Overview: I wrote a crate that reads images directly from the rosbag2 SQLite DB. Starting ROS…
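As a rough Python counterpart of what such a crate does (the original is Rust), one can query the rosbag2 SQLite file directly. This sketch assumes the standard rosbag2 schema (`topics` and `messages` tables) and leaves CDR image decoding out; `read_topic_messages` is an illustrative name:

```python
import sqlite3

def read_topic_messages(db_path, topic_name):
    """Yield (timestamp, raw_cdr_bytes) for every message on `topic_name`,
    read straight from the rosbag2 SQLite storage, in timestamp order."""
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(
            "SELECT m.timestamp, m.data FROM messages m "
            "JOIN topics t ON m.topic_id = t.id "
            "WHERE t.name = ? ORDER BY m.timestamp",
            (topic_name,),
        )
        yield from cur
    finally:
        con.close()
```

Decoding the `data` blobs into images would additionally need a CDR deserializer for `sensor_msgs/msg/Image` or `CompressedImage`.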
Blog update
I changed the blog's look: switched the Hugo theme from Beg to github-style. The reason is that Beg…
Center-based 3D Object Detection and Tracking (arXiv 2020/06, CVPR2021)
Summary: LiDAR-based 3D object detection + tracking; anchor-free, and the de facto standard since around 2020. https://github.com/tianweiy/CenterPoint (github). Represents each object by its center point; formulated anchor-free, and tracki…
EagerMOT: 3D Multi-Object Tracking via Sensor Fusion (ICRA2021)
Summary: Supports camera-only, LiDAR-only, and camera-LiDAR setups. A sensor-fusion tracking framework: (i) fusion of 3D and 2D evidence that merges detections…
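The association step of such fusion trackers can be illustrated with a greedy nearest-first matcher. This is a toy with scalar "distances", not EagerMOT's actual affinities, and `greedy_match` is a name I chose; in the two-stage scheme it would run once against 3D detections and again against 2D detections for the tracks left unmatched:

```python
# Greedy data association: repeatedly take the closest unmatched
# (track, detection) pair, subject to a distance gate.

def greedy_match(tracks, dets, max_dist):
    """Return (track_idx, det_idx) pairs, nearest pairs first."""
    pairs, used_t, used_d = [], set(), set()
    cands = sorted(
        ((abs(t - d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(dets)),
        key=lambda x: x[0])
    for dist, ti, di in cands:
        if dist <= max_dist and ti not in used_t and di not in used_d:
            pairs.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return pairs
```

Tracks and detections that survive both stages unmatched are then spawned or aged out by the track manager.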
Investigating the Effect of Sensor Modalities in Multi-Sensor Detection-Prediction Models (NeurIPS 2020 Workshop)
Summary: Uses sensor dropout to analyze each sensor's contribution in detection + prediction. From Uber. The model is a merger of two networks, RV-MultiXNet (S. Fadadu, S. Pandey, D. Hegde, Y. Shi, F.-C.…
Let’s Get Dirty: GAN Based Data Augmentation for Camera Lens Soiling Detection in Autonomous Driving (WACV2021)
Summary: https://openaccess.thecvf.com/content/WACV2021/papers/Uricar_Lets_Get_Dirty_GAN_Based_Data_Augmentation_for_Camera_Lens_WACV_2021_paper.pdf A dataset with lens-soiling masks is planned for release at https://github.com/uricamic/soiling (still unreleased as of 2021/02/06). A dataset of soiled fisheye-lens images. To a certain dataset…
LU-Net: An Efficient Network for 3D LiDAR Point Cloud Semantic Segmentation Based on End-to-End-Learned 3D Features and U-Net (ICCV Workshop 2019)
Overview: https://arxiv.org/abs/1908.11656 https://github.com/pbias/lunet (github). LiDAR 3D semantic segmentation: 3D -> 2D -> U-Net. Includes an easy-to-follow overview figure of segmentation on range images, and explains range-image-based methods in relatively fine detail.
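The range-image pipeline these methods share starts from a spherical projection of the point cloud. A minimal sketch, with FOV bounds and image size that are illustrative defaults rather than values from the paper:

```python
import math

def project_point(x, y, z, h=64, w=1024,
                  fov_up=math.radians(3.0), fov_down=math.radians(-25.0)):
    """Map one LiDAR point (x, y, z) to a (row, col) pixel of an
    h x w range image via azimuth and elevation; also return its range."""
    r = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(y, x)          # azimuth in (-pi, pi], 0 = straight ahead
    pitch = math.asin(z / r)        # elevation above the sensor plane
    col = int((0.5 * (1.0 - yaw / math.pi)) * w) % w
    row = int((1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h)
    row = min(max(row, 0), h - 1)   # clamp points outside the vertical FOV
    return row, col, r
```

Rasterizing every point this way (keeping the closest range per pixel) yields the 2D image that the U-Net-style backbone segments; labels are then projected back to 3D.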
RangeNet++: Fast and Accurate LiDAR Semantic Segmentation (IROS2019)
Overview: Proposes RangeNet++. https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/milioto2019iros.pdf (paper) https://www.youtube.com/watch?v=wuokg7MFZyU (youtube) http://jbehley.github.io/ (author) https://github.com/PRBonn/lidar-bonnetal (github). LiDAR-only semantic segmentation with real-time processing at around 10 fps; a projection-based method using a spherical projection to 2D…