For Japanese

Biography

Profile

  • Name: Satoshi Tanaka

Work Experience

  • Aug. 2025 - Present, Freelance engineer
  • Apr. 2020 - Jul. 2025, TIER IV, Inc.
    • Jun. 2024 - Jul. 2025, Lead Software Engineer for MLOps in the perception module
    • Apr. 2020 - Dec. 2023, Software Engineer for Autonomous Driving
  • Internship
    • Apr. 2018 - Apr. 2019, Internship at Preferred Networks, Inc. as a part-time engineer
    • Aug. 2017 - Mar. 2018, Internship at Hitachi, Ltd as a research assistant

Academic Background

  • Master’s Degree in Information Science and Engineering, the University of Tokyo
    • Apr. 2018 - Mar. 2020, Ishikawa Senoo Lab, Department of Creative Informatics, Graduate School of Information Science and Technology
  • Bachelor’s Degree in Precision Engineering, the University of Tokyo
    • Apr. 2017 - Mar. 2018, Kotani Lab, Research Center for Advanced Science and Technology
    • Apr. 2016 - Mar. 2018, Dept. of Precision Engineering
    • Apr. 2014 - Mar. 2016, Faculty of Liberal Arts

Interests

  • Robotics, Computer Vision, Control Theory
  • Building autonomous robotic systems that can interact with the physical world faster and more dexterously than humans
    • Real-time 3D object detection
    • Development of mechanisms capable of fast and compliant motion
    • Force control for dynamic manipulation

Publications

International Conference (First author)

  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, Makoto Shimojo, and Masatoshi Ishikawa, High-speed Hitting Grasping with Magripper, a Highly Backdrivable Gripper using Magnetic Gear and Plastic Deformation Control, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2020), Proceedings, pp. 9137-9143. [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]
  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, and Masatoshi Ishikawa, Adaptive Visual Shock Absorber with Visual-based Maxwell Model Using Magnetic Gear, The 2020 International Conference on Robotics and Automation (ICRA2020), Proceedings, pp. 6163-6168.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa, Non-Stop Handover of Parcel to Airborne UAV Based on High-Speed Visual Object Tracking, 2019 19th International Conference on Advanced Robotics (ICAR2019), Proceedings, pp. 414-419.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa, High-speed UAV Delivery System with Non-Stop Parcel Handover Using High-speed Visual Control, 2019 IEEE Intelligent Transportation Systems Conference (ITSC19), Proceedings, pp. 4449-4455.

arXiv papers (First author)

  • Satoshi Tanaka, Koji Minoda, Fumiya Watanabe, and Takamasa Horibe, Rethink 3D Object Detection from Physical World, arXiv 2025, https://arxiv.org/abs/2507.00190.
  • Satoshi Tanaka, Samrat Thapa, Kok Seang Tan, Amadeusz Szymko, Lobos Kenzo, Koji Minoda, Shintaro Tomie, Kotaro Uetake, Guolong Zhang, Isamu Yamashita, and Takamasa Horibe, AWML: An Open-Source ML-based Robotics Perception Framework to Deploy for ROS-based Autonomous Driving Software, arXiv 2025, https://arxiv.org/abs/2506.00645.

International Conference (Not first author)

  • Taisei Fujimoto, Satoshi Tanaka, and Shinpei Kato, LaneFusion: 3D Object Detection with Rasterized Lane Map, the 33rd IEEE Intelligent Vehicles Symposium (IV 2022), Proceedings, pp. 396-403.

Other publications

  • Kazunari Kawabata, Manato Hirabayashi, David Wong, Satoshi Tanaka, and Akihito Ohsato, AD perception and applications using automotive HDR cameras, the 4th Autoware workshop at the 33rd IEEE Intelligent Vehicles Symposium (IV 2022)

Award, Scholarship

Projects

(Research) Rethink 3D Object Detection from Physical World

  • Work at TIER IV, Inc. on the speed-accuracy trade-off of 3D object detection for autonomous driving (arXiv paper: https://arxiv.org/abs/2507.00190).

AWML

  • An open-source ML-based robotics perception framework for deploying models to ROS-based autonomous driving software (repository: https://github.com/tier4/AWML, arXiv paper: https://arxiv.org/abs/2506.00645).

DepthAnything-ROS



(Research) High-speed Hitting Grasping with Magripper

  • Developed Magripper, a highly backdrivable gripper, together with hitting grasping, a high-speed grasping framework in which grasping is executed seamlessly from the reaching motion.
  • Accepted at IROS2020 [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]

(Research) Adaptive Visual Shock Absorber with Magslider

  • Developed a visual shock absorber system combining high-speed vision, highly backdrivable hardware, and force control.
  • Accepted at ICRA2020

(Research) High-speed supply station for UAV delivery system

  • Developed a high-speed supply station for a UAV delivery system, enabling non-stop parcel handover to airborne UAVs.
  • Accepted at ITSC2019


Robotic Competition

  • Team Leader for ABU Robocon 2016
  • Winner of the national championship, second runner-up at ABU Robocon, and recipient of the ABU Robocon Award.
  • Visited the Prime Minister’s residence as the leader of the Japanese representative team. Reported by link and link.

Other projects

Latest change (blog, survey)

About me (Japanese)

Biography / Profile: 田中 敬 (たなか さとし). X: @scepter914, GitHub: @scepter914, LinkedIn: @scepter914. Work experience: Aug. 2025 - present, freelance engineer; Apr. 2020 - Jul. 2025, TIER IV, Inc., Autonomous Driving Sensing/Perception Engineer; internships: Apr. 2018 - Mar. 2019, Preferred Networks (part-time engineer); Aug. 2017 - Mar. 2018, Hitachi, Ltd. (research assistant).
Rethink 3D Object Detection from Physical World

Summary: Work at TIER IV, Inc. arXiv paper: https://arxiv.org/abs/2507.00190. High-accuracy and low-latency 3D object detection is essential for autonomous driving systems. Previous studies on 3D object detection often evaluate performance based on mean average precision (mAP) and latency, but they typically fail to address the trade-off between speed and accuracy, such as 60.0 mAP at 100 ms vs. 61.0 mAP at 500 ms. A quantitative assessment of the trade-offs between different hardware devices and accelerators remains unexplored, despite being critical for real-time applications.
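The summary above weighs accuracy (mAP) against latency. As a toy illustration of that trade-off, and not the paper's actual method, choosing a detector under a real-time latency budget can be sketched in Python (the model entries and numbers here are hypothetical, taken only from the example figures in the text):

```python
# Toy illustration (not the paper's method): under a latency budget,
# a slightly less accurate but faster model can be the better choice.

def pick_model(candidates, latency_budget_ms):
    """Return the highest-mAP model whose latency fits the budget, else None."""
    feasible = [m for m in candidates if m["latency_ms"] <= latency_budget_ms]
    if not feasible:
        return None
    return max(feasible, key=lambda m: m["map"])

# Hypothetical operating points matching the 60.0 mAP @ 100 ms vs
# 61.0 mAP @ 500 ms example from the summary.
candidates = [
    {"name": "model_a", "map": 60.0, "latency_ms": 100},
    {"name": "model_b", "map": 61.0, "latency_ms": 500},
]

print(pick_model(candidates, 150)["name"])  # model_a: only it fits 150 ms
print(pick_model(candidates, 600)["name"])  # model_b: both fit, higher mAP wins
```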
AWML: An Open-Source ML-based Robotics Perception Framework to Deploy for ROS-based Autonomous Driving Software

Summary: Work at TIER IV, Inc. Repository: https://github.com/tier4/AWML. arXiv paper: https://arxiv.org/abs/2506.00645. In this paper, we introduce AWML, a framework designed to support MLOps for robotics. AWML provides a machine learning infrastructure for autonomous driving, supporting not only the deployment of trained models to robotic systems but also an active learning pipeline that incorporates auto-labeling, semi-auto-labeling, and data mining techniques. We explain the overall software design and strategy for robotics MLOps and benchmark our models on our datasets.
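The active learning pipeline described above (data mining plus auto-labeling) can be sketched at a very high level. This is a hedged illustration with hypothetical placeholder functions, not AWML's actual API:

```python
# Minimal sketch of one active-learning round as described in the summary:
# mine hard samples from unlabeled data, auto-label them, grow the train set.
# All names here are hypothetical placeholders, not AWML's real interfaces.

def mine_hard_samples(unlabeled, confidence, threshold=0.5):
    """Data mining step: keep frames where the model is least confident."""
    return [x for x in unlabeled if confidence(x) < threshold]

def auto_label(samples):
    """Auto-labeling step; in practice a large offline model produces these."""
    return [(x, "pseudo_label") for x in samples]

def active_learning_round(labeled, unlabeled, confidence):
    """One round: mine uncertain frames, pseudo-label them, update both pools."""
    mined = mine_hard_samples(unlabeled, confidence)
    labeled = labeled + auto_label(mined)
    remaining = [x for x in unlabeled if x not in mined]
    return labeled, remaining

# Toy confidence model: confidence equals the sample value itself.
labeled, remaining = active_learning_round([], [0.2, 0.9, 0.4], lambda x: x)
print(len(labeled), remaining)  # 2 [0.9]
```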
DRIVE VLM: The Convergence of Autonomous Driving and Large Vision-Language Models (arXiv 2024/02, CoRL 2024)

Summary: https://github.com/Tsinghua-MARS-Lab/DriveVLM (code not yet released as of 2024/10/17); project page: https://tsinghua-mars-lab.github.io/DriveVLM/. Method: the DriveVLM-Dual architecture outputs meta-actions, decisions, and waypoints; here, "traditional pipeline" refers to the E2E model. Integrates 3D perception by projecting onto 2D to handle critical objects.
FutureMotion (2024/05, GitHub)

Summary: https://github.com/kit-mrt/future-motion. A motion prediction library with notably well-written code. Method: reading forward() reveals most of the design; the inputs are heavily abstracted, e.g. class Wayformer(nn.Module): def forward(self, target_valid: Tensor, …
Real-Time Motion Prediction via Heterogeneous Polyline Transformer with Relative Pose Encoding (NeurIPS 2023)

Summary: The paper that [[001095_future_motion]] is based on. Combines the computational efficiency of a scene-centric representation with the accuracy of an agent-centric one.
Senna: Bridging Large Vision-Language Models and End-to-End Autonomous Driving (arXiv 2024/10)

Summary: Autonomous driving with Senna, a framework that bridges a VLM and an E2E model; the successor to DriveVLM. The method and experiments do not differ much from DriveVLM.
Survey for Real-time 3D Detection in Autonomous Driving

Summary: In this blog post, I summarize 3D detection methods, including implementations using inference-optimization techniques such as TensorRT. Based on the performance comparison, the following multi-camera 3D detection models stand out: StreamPETR (ResNet50), a lightweight model suitable for a wide range of applications; StreamPETR (ResNet101), which strikes a good balance between detection performance and inference time; and Far3D (V2-99), which may be too computationally heavy for some environments.