Richard (Lu-Ching) Wang

Robotics Research Engineer - Navigation | Quadruped Locomotion | Reinforcement Learning | Visual Servoing

📧 richard98hess444@gmail.com | 🔗 LinkedIn | 🔗 GitHub | 📄 CV

About Me

I am a robotics research engineer in the Inventec AI Center's robotics group, where we focus on quadruped robots and reinforcement learning research. I completed my master's degree in the Department of Electrical Engineering at National Taiwan University. My master's research focused on robotics and automation, specifically visual servoing, manipulators, and autonomous mobile robots. Over the past few months, I have been working on autonomous navigation and sim-to-real deployment of reinforcement learning for quadruped robots. Recently, our team concluded our research on "Feasibility-Guided Planning", a planning system for optimal path and policy selection, and the results have been submitted to ICRA 2026.

Publications

Projects

Vision-Language Navigation (VLN)

NaVILA, proposed by Cheng et al., is a vision-language-action (VLA) model for legged robot navigation; please refer to their website for more details. We demonstrate its deployment on a Unitree A1 quadruped performing indoor tasks. The VLA model runs on an RTX 5090 server, with communication handled via UDP, and a detailed setup guide is provided for RTX 50-series GPUs and newer Ubuntu environments.
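
Below is a minimal sketch of the kind of UDP bridge used between the robot-side controller and the remote inference server; the address, message layout, and reply format are illustrative assumptions, not the actual NaVILA interface.

```python
import json
import socket

# Assumed address of the remote VLA inference server (illustrative only).
SERVER_ADDR = ("192.168.1.100", 9000)

def request_vla_command(instruction: str, jpeg_bytes: bytes, timeout: float = 1.0) -> dict:
    """Send the latest camera frame and language instruction over UDP and
    wait for a velocity command from the VLA server (hypothetical format).

    jpeg_bytes is assumed to be a compressed frame small enough to fit in
    a single UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        header = json.dumps({"instruction": instruction,
                             "image_size": len(jpeg_bytes)}).encode()
        sock.sendto(header + b"\n" + jpeg_bytes, SERVER_ADDR)
        reply, _ = sock.recvfrom(4096)
        # Expected reply, e.g. {"vx": 0.4, "vy": 0.0, "wz": 0.1}
        return json.loads(reply.decode())
```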

Feasibility-Guided Planning over Multi-Specialized Locomotion Policies

We present a feasibility-guided planning framework that enables coordinated navigation across complex terrains through multiple specialized locomotion policies. Our contribution focuses on the joint training paradigm while maintaining interpretable policy selection and supporting the integration of new locomotion skills without retraining. The planning system achieved success rates of 98.60% in simulation and 70% in real-world mixed-terrain experiments.
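
As a toy illustration of the selection step, the sketch below scores each candidate path under every specialized policy with a feasibility estimate and executes the highest-scoring pair; the `feasibility` function and policy names are placeholders, not the trained estimators from the paper.

```python
from typing import Callable, Dict, List, Tuple

Path = List[Tuple[float, float]]  # waypoints in the map frame

def select_path_and_policy(
    candidate_paths: List[Path],
    policies: Dict[str, object],
    feasibility: Callable[[Path, str], float],
) -> Tuple[Path, str]:
    """Return the (path, policy name) pair with the highest feasibility score.

    feasibility(path, policy_name) is assumed to return a value in [0, 1],
    e.g. an estimate of traversal success probability for that policy."""
    best_score = -1.0
    best_pair = (candidate_paths[0], next(iter(policies)))
    for path in candidate_paths:
        for name in policies:
            score = feasibility(path, name)
            if score > best_score:
                best_score, best_pair = score, (path, name)
    return best_pair
```

Because the selection is an explicit maximization over per-policy scores, adding a new locomotion skill only requires a feasibility estimate for it rather than retraining the existing policies.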

Quadruped Robots SDK Development / Sim-to-Real Deployment

Built a quadruped robot interface for customized actuators and sensors over LCM communication. Trained locomotion policies in IsaacGym and IsaacLab. Deployed the policies in C++ and fine-tuned parameters such as action scales and motor gains.
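
A minimal example of the kind of parameter handling involved is sketched below: scaling the policy's raw action into joint position targets and converting them to torques with a PD law. The values and the policy interface are placeholders; the actual deployment is written in C++ over LCM.

```python
import numpy as np

ACTION_SCALE = 0.25               # example scale applied to raw policy outputs
KP, KD = 20.0, 0.5                # example motor gains (stiffness, damping)
DEFAULT_JOINT_POS = np.zeros(12)  # nominal standing pose (placeholder)

def action_to_torque(raw_action, q, dq):
    """Map a raw policy action to joint torques with a PD law.

    raw_action, q, dq: arrays of length 12 (one entry per joint:
    raw network output, measured joint positions, joint velocities)."""
    q_target = DEFAULT_JOINT_POS + ACTION_SCALE * np.asarray(raw_action)
    return KP * (q_target - np.asarray(q)) - KD * np.asarray(dq)
```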

Quadruped Robot Challenges (QRC)

Navigation deployment from our research, Feasibility-Guided Planning over Multi-Specialized Locomotion Policies. ROS2 integration of navigation, locomotion, manipulator teleoperation, and image detection. The robot successfully traversed multiple extreme terrains autonomously.

Tomato Harvesting Robot with Dual-Camera Image-Based Visual Servoing

This paper presents a self-built tomato harvesting robot with a dual-camera image-based visual servoing (IBVS) algorithm implemented to solve the dislocation problem. In the greenhouse experiment, cumulative error compensation reduced the harvesting time from 21.2 to 6.26 seconds, and the tomato-picking success rate reached 68.4%.
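
For context, the sketch below shows the standard image-based visual servoing law that this kind of controller builds on, driving the image-feature error to zero with v = -λ L⁺ (s - s*); the dual-camera setup and the cumulative error compensation from the paper are not reproduced here.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classic IBVS control law returning a 6-DoF camera velocity twist.

    s, s_star: current and desired image features, shape (2N,)
    L: interaction (image Jacobian) matrix, shape (2N, 6)
    lam: control gain"""
    error = np.asarray(s) - np.asarray(s_star)
    return -lam * np.linalg.pinv(L) @ error
```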

Work Experience

Awards & Competitions