For Japanese

Biography

Profile

  • Name: Satoshi Tanaka

Work Experience

  • Apr. 2020 - Present, TIER IV, Inc., Autonomous Driving Sensing/Perception Engineer
  • Internship
    • Apr. 2018 - Apr. 2019, Internship at Preferred Networks, Inc. as a part-time engineer
    • Aug. 2017 - Mar. 2018, Internship at Hitachi, Ltd. as a research assistant

Academic Background

  • Master’s Degree in Information Science and Engineering, the University of Tokyo
    • Apr. 2018 - Mar. 2020, Ishikawa Senoo Lab, Department of Creative Informatics, Graduate School of Information Science and Technology
  • Bachelor’s Degree in Precision Engineering, the University of Tokyo
    • Apr. 2017 - Mar. 2018, Kotani Lab, Research Center for Advanced Science and Technology
    • Apr. 2016 - Mar. 2018, Dept. of Precision Engineering
    • Apr. 2014 - Mar. 2016, Faculty of Liberal Arts

Interests

  • Robotics, computer vision, control theory
  • High-speed robotics
    • System integration of high-speed robots using 1000 fps high-speed image processing (see the sketch after this list)
    • Deformation control and robot force control for high-speed dynamic manipulation
    • Applications of high-speed visual control to logistics and unmanned aerial vehicles (UAVs)
  • Robot vision
    • 3D perception for robotics with sensor fusion
  • Other hobbies
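
As a flavor of what 1000 fps visual feedback control means in practice, here is a minimal self-contained sketch of a proportional visual servo loop running at 1 kHz. The plant is simulated and the gain is a hypothetical tuning; none of this is code from the projects above.

```python
import time

DT = 0.001   # 1 kHz loop period, matching a 1000 fps camera
KP = 4.0     # proportional gain (hypothetical tuning)

# Simulated 1-DoF feature position in pixels; stands in for a real
# high-speed camera measurement and robot actuator.
feature = 100.0
target = 0.0

for step in range(2000):              # 2 s of control
    t0 = time.perf_counter()
    error = target - feature          # "measurement" from this frame
    velocity = KP * error             # P-control velocity command
    feature += velocity * DT          # the plant integrates the command
    while time.perf_counter() - t0 < DT:
        pass                          # spin out the rest of the 1 ms period
print(f"feature after 2 s: {feature:.3f} px")
```

The busy-wait keeps the period deterministic at the cost of one CPU core; a real system would rely on a real-time scheduler instead.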

Publications

International Conference (First author)

  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, Makoto Shimojo, and Masatoshi Ishikawa: High-speed Hitting Grasping with Magripper, a Highly Backdrivable Gripper using Magnetic Gear and Plastic Deformation Control, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2020), Proceedings, pp. 9137-9143. [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]
  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, and Masatoshi Ishikawa: Adaptive Visual Shock Absorber with Visual-based Maxwell Model Using Magnetic Gear, The 2020 International Conference on Robotics and Automation (ICRA2020), Proceedings, pp. 6163-6168.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa: Non-Stop Handover of Parcel to Airborne UAV Based on High-Speed Visual Object Tracking, 2019 19th International Conference on Advanced Robotics (ICAR2019), Proceedings, pp. 414-419.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa: High-speed UAV Delivery System with Non-Stop Parcel Handover Using High-speed Visual Control, 2019 IEEE Intelligent Transportation Systems Conference (ITSC19), Proceedings, pp. 4449-4455.

International Conference (Not first author)

  • Taisei Fujimoto, Satoshi Tanaka, and Shinpei Kato: LaneFusion: 3D Object Detection with Rasterized Lane Map, the 2022 33rd IEEE Intelligent Vehicles Symposium (IV 2022), Proceedings, pp. 396-403.

Other publications

  • Kazunari Kawabata, Manato Hirabayashi, David Wong, Satoshi Tanaka, and Akihito Ohsato: AD perception and applications using automotive HDR cameras, the 4th Autoware Workshop at the 2022 33rd IEEE Intelligent Vehicles Symposium (IV 2022)

Awards, Scholarships

Projects

mmCarrot



DepthAnything-ROS



(Research) LaneFusion: 3D detection with HD map

  • Accepted at IV2022

(Research) High-speed Hitting Grasping with Magripper

  • Accepted at IROS2020 [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]

(Research) Adaptive Visual Shock Absorber with Magslider

  • Accepted at ICRA2020
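
The Maxwell model named in the title is a spring and damper in series, so the commanded displacement under a measured contact force F is x_ref = F/k + (1/c)∫F dt. The sketch below is my own illustration of that admittance law with hypothetical k and c, not the paper's implementation.

```python
def maxwell_reference(forces, dt, k=200.0, c=15.0):
    """Reference displacement of a Maxwell model (spring k [N/m] and
    damper c [N*s/m] in series) driven by a sampled contact force.

        x_ref(t) = F(t) / k + (1/c) * integral of F dt

    The spring term yields instantly at impact; the damper term keeps
    absorbing while force is applied. k and c are hypothetical values.
    """
    x_ref, f_integral = [], 0.0
    for f in forces:
        f_integral += f * dt
        x_ref.append(f / k + f_integral / c)
    return x_ref

# e.g. a 5 N impact lasting 0.1 s, sampled at 1 kHz
trajectory = maxwell_reference([5.0] * 100, dt=0.001)
```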

(Research) High-speed supply station for UAV delivery system

  • Accepted at ITSC2019


Robotic Competition

  • Team leader of ABU Robocon 2016
  • Winner of the national championship, 2nd runner-up of ABU Robocon, and recipient of the ABU Robocon Award.
  • Visited the Prime Minister’s residence as the leader of the team representing Japan. Reported by link and link.

Other projects

Latest changes (blog, survey)

Find n’ Propagate: Open-Vocabulary 3D Object Detection in Urban Environments (arXiv 2024/03, ECCV2024)
Summary: open-vocabulary 3D object detection (https://github.com/djamahl99/findnpropagate). A frustum-based method using a 2D VLM; its Greedy Box Seeker segments from each frustum and searches that space…
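
To make the frustum idea concrete (a generic sketch, not the paper's code): a 2D box from an open-vocabulary detector can be lifted to 3D by keeping the LiDAR points whose projection lands inside it, assuming points already transformed into the camera frame and a 3x3 intrinsic matrix K.

```python
import numpy as np

def points_in_frustum(points_cam, box2d, K):
    """Select points whose image projection falls inside a 2D detection box.

    points_cam: (N, 3) xyz in the camera frame (x right, y down, z forward)
    box2d:      (u_min, v_min, u_max, v_max) from a 2D detector / VLM
    K:          3x3 camera intrinsic matrix
    """
    z = points_cam[:, 2]
    in_front = z > 1e-6                     # discard points behind the camera
    safe_z = np.where(in_front, z, 1.0)     # avoid divide-by-zero
    uvw = (K @ points_cam.T).T              # homogeneous pixel coordinates
    u, v = uvw[:, 0] / safe_z, uvw[:, 1] / safe_z
    u_min, v_min, u_max, v_max = box2d
    inside = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return points_cam[in_front & inside]
```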
Injecting Planning-Awareness into Prediction and Detection Evaluation (IV2022)
Summary: proposes PI metrics usable for both detection and forecasting (https://github.com/BorisIvanovic/PlanningAwareEvaluation). Compared with other approaches, task-aware evaluation metrics should be: able to capture asymmetries in downstream tasks; method-agnostic; computationally feasible to compute; interpretable.
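
A toy illustration of the asymmetry point (my own example, not the paper's formulation): the same 2 m range error should cost more when it makes a lead vehicle look farther, and therefore safer, than it really is.

```python
def planning_aware_error(pred_dist, true_dist,
                         underestimate_w=1.0, overestimate_w=3.0):
    """Asymmetric range-error cost for a detected lead vehicle.

    Overestimating the distance hides urgency from the planner, so it is
    weighted more heavily than a conservative underestimate. The weights
    are hypothetical.
    """
    err = pred_dist - true_dist
    return overestimate_w * err if err > 0 else underestimate_w * -err

print(planning_aware_error(52.0, 50.0))  # 6.0: object looks safer than it is
print(planning_aware_error(48.0, 50.0))  # 2.0: conservative error, cheaper
```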
Soft Robotics Commercialization: Jamming Grippers from Research to Product (Soft Robotics 2016)
Summary: a collection of the hard parts of turning the "Doraemon hand" (jamming gripper) into a product; essential reading for anyone trying to commercialize soft robotics. Videos: https://www.youtube.com/watch?v=GdJyICIp4t4 https://www.youtube.com/watch?v=KZ0Y2fDZ8Uw https://www.youtube.com/watch?v=zgHSAUEzjn4 Background: …
Weakly Supervised 3D Object Detection via Multi-Level Visual Guidance (ECCV2024)
Summary: from Google; proposes a framework that trains 3D detection from 2D labels alone, with mechanisms for propagating image features to the 3D side (https://github.com/kuanchihhuang/VG-W3D).
What data do we need for training an AV motion planner? (ICRA2021)
Summary: from Lyft; analyzes which perception outputs actually influence a learned motion planner. Data aspects studied: range and field of view, geometric accuracy, and addressing domain shift…
Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data (arXiv 2024/01)
Summary: code at https://github.com/LiheYoung/Depth-Anything, demo at https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main, TensorRT implementation at https://github.com/spacewalk01/depth-anything-tensorrt. From TikTok; general-purpose zero-shot monocular relative depth estimation, a foundation model that incorporates unsupervised training techniques…
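
For a quick try, Depth Anything is also exposed through the Hugging Face transformers depth-estimation pipeline; a minimal sketch, with the checkpoint id assumed (check the repositories above for the current ones):

```python
from PIL import Image
from transformers import pipeline

# Zero-shot relative depth from a single RGB image.
# The model id is an assumption; see https://github.com/LiheYoung/Depth-Anything
# for the officially published checkpoints.
depth = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")

result = depth(Image.open("frame.png"))
result["depth"].save("frame_depth.png")   # PIL image of per-pixel relative depth
```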
MixSup: Mixed-grained Supervision for Label-efficient LiDAR-based 3D Object Detection (ICLR2024)
Summary: combines the use of semantic point clusters via MixSup with pseudo labels from PointSAM (3D panoptic segmentation)…
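
The cluster-level supervision idea can be sketched generically (hedged; this is not MixSup's actual pipeline): group the points of one semantic class into instances with DBSCAN and fit an axis-aligned box to each cluster as a coarse pseudo label.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def coarse_boxes_from_semantic_points(points, eps=0.7, min_samples=10):
    """Turn semantically labeled points of one class into coarse box labels.

    points: (N, 3) xyz of points predicted as, say, 'car'. Each DBSCAN
    cluster becomes one instance; an axis-aligned min/max box is a cheap
    stand-in for a full 7-DoF box. eps and min_samples are hypothetical.
    """
    cluster_ids = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    boxes = []
    for cid in set(cluster_ids) - {-1}:   # -1 marks DBSCAN noise
        cluster = points[cluster_ids == cid]
        boxes.append(np.concatenate([cluster.min(axis=0), cluster.max(axis=0)]))
    return boxes  # each box: (x_min, y_min, z_min, x_max, y_max, z_max)
```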
OW-Adapter: Human-Assisted Open-World Object Detection with a Few Examples (IEEE Transactions on Visualization and Computer Graphics 2024/01)
Summary: https://www.youtube.com/watch?v=QNub6PYMp1k. A very polished UI; it also seems necessary to visualize the distribution of the so-called pseudo labels.
FRNet: Frustum-Range Networks for Scalable LiDAR Segmentation (arXiv 2023/12)
Summary: frustum-based 3D semantic segmentation (https://github.com/Xiangxu-0103/FRNet, https://xiangxu-0103.github.io/FRNet). Shows the difference KNN post-processing makes. Visualizations (https://www.youtube.com/watch?v=PvmnaMKnZrc https://www.youtube.com/watch?v=4m5sG-XsYgw https://www.youtube.com/watch?v=-aM_NaZLP8M) show the amount of incorrect predictions decreasing. Semi-supervised: scores hold up well even with small datasets…
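
KNN post-processing for range-view segmentation usually means a majority vote over each point's nearest 3D neighbors; a generic hedged sketch with scipy (not FRNet's implementation):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import mode

def knn_refine_labels(points, labels, k=5):
    """Smooth per-point semantic labels by majority vote over k neighbors.

    points: (N, 3) xyz, labels: (N,) integer class ids. Range-view methods
    leave boundary artifacts after back-projection to 3D; voting over each
    point's k nearest neighbors cleans many of them up.
    """
    _, idx = cKDTree(points).query(points, k=k)   # idx: (N, k) neighbor indices
    refined, _ = mode(labels[idx], axis=1, keepdims=False)
    return refined
```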
SIMPL: A Simple and Efficient Multi-agent Motion Prediction Baseline for Autonomous Driving (arXiv 2024/02, RA-L 2024)
Summary: Transformer-based motion prediction (https://www.youtube.com/watch?v=_8-6ccopZMM, https://github.com/HKUST-Aerial-Robotics/SIMPL?tab=readme-ov-file). Around 20 ms even with 250 actors on a 3060 Ti, which looks very practical; the visualization videos look very good…