
Zhanpeng He

Email: [firstname] at cs dot columbia dot edu

CV / Google Scholar / GitHub

I am a first-year Ph.D. student at Columbia University working on reinforcement learning, meta-learning, and robotics. I am advised by Professor Matei Ciocarlie and Professor Shuran Song and am a member of the Robotic Manipulation and Mobility Lab.

Before joining Columbia, I received a master's degree from the University of Southern California, where I worked as a research assistant at the Robotic Embedded Systems Laboratory, advised by Professor Gaurav Sukhatme and Professor Stefan Schaal. Before joining USC, I received a Bachelor of Science degree in Computer Science from Rutgers University.

Research Interests

Research Projects

Meta-World: A Benchmark and Evaluation for Multi-Task and Meta-Reinforcement Learning

Tianhe Yu*, Deirdre Quillen*, Zhanpeng He*, Ryan C Julian, Karol Hausman, Sergey Levine and Chelsea Finn.

website / code / paper

In this paper, we propose an open-source simulated benchmark for meta-reinforcement learning and multi-task learning consisting of 50 distinct robotic manipulation tasks, with the aim of making it possible to develop algorithms that generalize to accelerate the acquisition of entirely new, held-out tasks. We evaluate 6 state-of-the-art meta-reinforcement learning and multi-task learning algorithms on these tasks. Surprisingly, while each task and its variations (e.g., with different object positions) can be learned with reasonable success, these algorithms struggle to learn multiple tasks at the same time, even with as few as nine distinct training tasks. Our analysis and open-source environments pave the way for future research in multi-task learning and meta-learning that can enable meaningful generalization, thereby unlocking the full potential of these methods.

Simulator Predictive Control: Using Learned Task Representations and MPC for Zero-Shot Generalization and Sequencing

Zhanpeng He*, Ryan C Julian*, Eric Heiden, Hejia Zhang, Stefan Schaal, Joseph Lim, Gaurav S Sukhatme, and Karol Hausman.

arXiv / code / video

We present a method for efficiently performing new robotic tasks directly on a real robot, based on model-predictive control (MPC) and learned task representations. This work was published in the NeurIPS 2018 Deep Reinforcement Learning Workshop.

Scaling Simulation-to-Real Transfer by Learning Composable Robot Skills

Ryan C Julian*, Eric Heiden*, Zhanpeng He, Hejia Zhang, Stefan Schaal, Joseph Lim, Gaurav S Sukhatme, and Karol Hausman.

arXiv / code / video

We present a novel solution to the problem of simulation-to-real transfer, which builds on recent advances in robot skill decomposition. This work was published at the International Symposium on Experimental Robotics (Springer, 2018).

Software

I am a member of the rlworkgroup and take part in the development of several open-source robot learning projects.

Experiences

Before joining RESL, I also worked as a research assistant at the Polymorphic Robotics Laboratory under the supervision of Professor Wei-Min Shen. There, I mainly worked on building a robotic system and an Unreal Engine simulation for a multi-UAV navigation project.

From 2016 to 2017, I was a software development engineer at IoT Eye Inc. in Piscataway, New Jersey.

From June to August 2016, I interned as a software development engineer at the Research Department of VipShop.com.

From 2015 to 2017, I was a teaching assistant for CS111 at Rutgers University.