For Japanese

Biography

Profile

  • Name: Satoshi Tanaka
  • Twitter: @scepter914
  • Github: @scepter914

Work Experience

  • Apr. 2020 - Present, TIER IV, Inc., Autonomous Driving Sensing/Perception Engineer
  • Internship
    • Apr. 2018 - Apr. 2019, Preferred Networks, Inc., part-time engineer
    • Aug. 2017 - Mar. 2018, Hitachi, Ltd., research assistant

Academic Background

  • Master’s Degree in Information Science and Engineering, the University of Tokyo
    • Apr. 2018 - Mar. 2020, Ishikawa Senoo Lab, Department of Creative Informatics, Graduate School of Information Science and Technology
  • Bachelor’s Degree in Precision Engineering, the University of Tokyo
    • Apr. 2017 - Mar. 2018, Kotani Lab, Research Center for Advanced Science and Technology
    • Apr. 2016 - Mar. 2018, Department of Precision Engineering
    • Apr. 2014 - Mar. 2016, Faculty of Liberal Arts

Interests

  • Robotics, Computer Vision, Control Theory
  • High-speed robotics
    • System integration of high-speed robots using 1,000 fps image processing (see the sketch after this list)
    • Deformation control: robot force control for dynamic manipulation at high speed
    • Applications of high-speed visual control to logistics and unmanned aerial vehicles (UAVs)
  • Robot vision
    • 3D perception for robotics with sensor fusion
  • Other hobbies
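
To make the vision-in-the-loop idea concrete, here is a minimal sketch of a 1 kHz perception-control cycle. It is illustrative only: grabFrame, detectTarget, and sendCommand are hypothetical stand-ins for the camera, image-processing, and actuator interfaces.

    // Illustrative 1 kHz visual feedback loop: grab a frame, extract the target
    // position, and update the actuator command within each 1 ms cycle.
    #include <chrono>
    #include <thread>

    struct Frame {};                    // placeholder for image data
    struct Target { double x = 0.0; };  // detected target position (one axis)

    Frame grabFrame() { return {}; }                   // stub: 1,000 fps camera read
    Target detectTarget(const Frame &) { return {}; }  // stub: lightweight vision
    void sendCommand(double) {}                        // stub: actuator output

    int main()
    {
      const auto period = std::chrono::milliseconds(1);  // 1 kHz control cycle
      const double kp = 0.5;                             // illustrative proportional gain
      auto next = std::chrono::steady_clock::now();

      for (int i = 0; i < 1000; ++i) {  // run for one second
        Target target = detectTarget(grabFrame());
        sendCommand(kp * target.x);     // simple proportional visual servoing

        next += period;
        std::this_thread::sleep_until(next);  // hold the 1 ms period
      }
      return 0;
    }

The point of such systems is that sensing and control share one deterministic cycle, so at 1,000 fps the image processing itself has to finish well under a millisecond.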

Publications

International Conferences (First author)

  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, Makoto Shimojo, and Masatoshi Ishikawa: High-speed Hitting Grasping with Magripper, a Highly Backdrivable Gripper using Magnetic Gear and Plastic Deformation Control, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2020), Proceedings, pp. 9137-9143. [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]
  • Satoshi Tanaka, Keisuke Koyama, Taku Senoo, and Masatoshi Ishikawa: Adaptive Visual Shock Absorber with Visual-based Maxwell Model Using Magnetic Gear, 2020 IEEE International Conference on Robotics and Automation (ICRA2020), Proceedings, pp. 6163-6168.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa: Non-Stop Handover of Parcel to Airborne UAV Based on High-Speed Visual Object Tracking, 2019 19th International Conference on Advanced Robotics (ICAR2019), Proceedings, pp. 414-419.
  • Satoshi Tanaka, Taku Senoo, and Masatoshi Ishikawa: High-speed UAV Delivery System with Non-Stop Parcel Handover Using High-speed Visual Control, 2019 IEEE Intelligent Transportation Systems Conference (ITSC2019), Proceedings, pp. 4449-4455.

International Conferences (Co-author)

  • Taisei Fujimoto, Satoshi Tanaka, and Shinpei Kato: LaneFusion: 3D Object Detection with Rasterized Lane Map, 2022 33rd IEEE Intelligent Vehicles Symposium (IV 2022), Proceedings, pp. 396-403.

Other publications

  • Kazunari Kawabata, Manato Hirabayashi, David Wong, Satoshi Tanaka, and Akihito Ohsato: AD perception and applications using automotive HDR cameras, the 4th Autoware Workshop at the 2022 33rd IEEE Intelligent Vehicles Symposium (IV 2022).

Awards, Scholarships

Projects

mmCarrot



DepthAnything-ROS

  • Hobby project: a prototype ROS2 package running DepthAnything, a high-performance monocular depth estimation model, with TensorRT C++ (a minimal node sketch follows below). Repository: https://github.com/scepter914/DepthAnything-ROS
  • Covered by the official DepthAnything repository and its gallery.
  • With TensorRT on an RTX 2070, Depth-Anything-Small (24.8M parameters) runs in 27 ms using about 300 MB of VRAM.
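
As a rough sketch of how such a package is structured, a minimal ROS2 node of this shape subscribes to a camera topic and republishes a depth image. This is a sketch under assumptions, not the package's actual code: the topic names image_raw and depth are placeholders, and runInference stands in for the real TensorRT engine call.

    // Minimal ROS2 node sketch: subscribe to an image topic, run a (stubbed)
    // depth model, and republish the result as a 32-bit float depth image.
    #include <functional>
    #include <memory>

    #include "rclcpp/rclcpp.hpp"
    #include "sensor_msgs/msg/image.hpp"

    class DepthAnythingNode : public rclcpp::Node
    {
    public:
      DepthAnythingNode() : Node("depth_anything_node")
      {
        sub_ = create_subscription<sensor_msgs::msg::Image>(
          "image_raw", rclcpp::SensorDataQoS(),
          std::bind(&DepthAnythingNode::onImage, this, std::placeholders::_1));
        pub_ = create_publisher<sensor_msgs::msg::Image>("depth", rclcpp::SensorDataQoS());
      }

    private:
      void onImage(const sensor_msgs::msg::Image::SharedPtr msg)
      {
        pub_->publish(runInference(*msg));
      }

      // Hypothetical inference step: the real package would hand the image to a
      // TensorRT engine; here it only allocates an empty 32FC1 image.
      sensor_msgs::msg::Image runInference(const sensor_msgs::msg::Image & in)
      {
        sensor_msgs::msg::Image depth;
        depth.header = in.header;
        depth.height = in.height;
        depth.width = in.width;
        depth.encoding = "32FC1";
        depth.step = in.width * sizeof(float);
        depth.data.resize(static_cast<size_t>(depth.step) * in.height, 0);
        return depth;
      }

      rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr sub_;
      rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr pub_;
    };

    int main(int argc, char ** argv)
    {
      rclcpp::init(argc, argv);
      rclcpp::spin(std::make_shared<DepthAnythingNode>());
      rclcpp::shutdown();
      return 0;
    }
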
(Research) LaneFusion: 3D detection with HD map

  • Accepted at IV2022

(Research) High-speed Hitting Grasping with Magripper

  • Accepted at IROS2020 [2020 IEEE Robotics and Automation Society Japan Joint Chapter Young Award]

(Research) Adaptive Visual Shock Absorber with Magslider

  • Accepted at ICRA2020

(Research) High-speed supply station for UAV delivery system

  • Accepted at ITSC2019


Robotics Competition

  • Team leader for ABU Robocon 2016
  • Winner of the national championship, second runner-up at ABU Robocon, and recipient of the ABU Robocon Award.
  • Visited the Prime Minister's residence as the leader of the Japanese representative team. Reported by link and link.

Other projects

Latest changes (blog, survey)

  • A Survey on Autonomous Driving Datasets: Data Statistic, Annotation, and Outlook (arXiv 2024/01)
  • About me (Japanese)
  • DepthAnything-ROS
  • Built a ROS2 package for DepthAnything (in Japanese)
  • Workaround for garbled Japanese input conversion in VSCode Vim (in Japanese)
  • MatrixVT: Efficient Multi-Camera to BEV Transformation for 3D Perception (arXiv 2022/11)
  • DeepFusion: A Robust and Modular 3D Object Detector for Lidars, Cameras and Radars (IROS2022)
  • aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception (arXiv 2022/11)
  • MTP: Multi-hypothesis Tracking and Prediction for Reduced Error Propagation (IV2022)
  • Simple-BEV: What Really Matters for Multi-Sensor BEV Perception? (arXiv 2022/09)