Mechanical Engineering Associate Professor Xiaoli Zhang received an award from the National Science Foundation’s Mind, Machine and Motor Nexus (M3X) program to develop seamless, in-hand robotic telemanipulation.

Telemanipulation allows a user to immerse themselves in, interact with, and complete tasks in a remote environment by controlling a robot. It is used in scenarios such as telesurgery, robotic healthcare assistance, and remote search and rescue operations.

In many of these real-world applications, in-hand object manipulation is needed to complete complex tasks. For example, when picking up a power adapter to plug it into a wall, the user must often manipulate the adapter in their hand to orient it correctly before it can be inserted into the outlet. While this is a simple task that a human would not have to think about, it can be a complex operation for a robot.

Current telemanipulation capability is limited to relatively simple manipulation, where the operator maintains an initial grasp throughout the task and changes only wrist orientation. For more complex tasks such as the power adapter example, the operator must go through multiple iterations of grasp, release, and regrasp actions for the robot to properly orient the object. Most telemanipulation research to date has focused on the human’s sense of the remote environment; much less attention has been given to improving the robot’s manipulation intelligence and ability to manage dynamic object interaction based on indirect inputs from a remote operator.

That’s where Zhang’s research comes in. “For telerobotics technology to truly take off,” Zhang explains, “the robot must know how to follow the operator’s dynamic hand motion and be able to actively assist the operator by keeping an object stable in its hand.” This means that the robot needs intelligence to behave as an “optimizer” of imperfect human motion inputs and a “stabilizer” for successful dynamic state transition of the object.
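The "optimizer" and "stabilizer" roles can be illustrated with a minimal sketch. The function names, the exponential-smoothing filter, and the friction-cone check below are illustrative assumptions, not the project's actual methods: the idea is simply that noisy operator motion is smoothed before being sent to the robot, and a physics check vetoes commands that would destabilize the grasp.

```python
import numpy as np

def smooth_command(raw_poses, alpha=0.3):
    """'Optimizer' role (illustrative): exponentially smooth noisy
    operator hand poses so the robot tracks intent, not jitter."""
    smoothed = [np.asarray(raw_poses[0], dtype=float)]
    for pose in raw_poses[1:]:
        smoothed.append(alpha * np.asarray(pose, dtype=float)
                        + (1 - alpha) * smoothed[-1])
    return smoothed

def is_grasp_stable(contact_forces, friction_coeff=0.5):
    """'Stabilizer' role (toy model): a grasp is accepted only if every
    contact's tangential force lies inside its friction cone."""
    for normal, tangential in contact_forces:
        if tangential > friction_coeff * normal:
            return False
    return True
```

A real system would replace the smoothing filter with a learned model and the friction-cone test with the project's physics-informed metrics, but the division of labor is the same.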

To accomplish this vision, Zhang’s project has three specific objectives: 1) integrate physics-informed metrics to guide the robot learning framework to generate stabilized configurations; 2) combine human inputs and the physics-informed metrics in a hierarchical learning structure to enable the robot to semi-autonomously optimize human motion inputs while ensuring a stabilized grasp; 3) teach the robot to adapt its behaviors to unique operator preferences by actively learning from the operator’s corrective adjustments.
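One way to picture objective 2, combining human inputs with physics-informed metrics, is as a weighted scoring problem: the robot proposes candidate grasp configurations and selects the one that best tracks the operator's intent while staying physically stable. The weights, function names, and quadratic tracking cost below are hypothetical placeholders for illustration only; the project's actual hierarchical learning structure is not specified here.

```python
def score_configuration(config, human_target, stability_metric,
                        w_human=1.0, w_phys=2.0):
    """Hypothetical combined objective: penalize deviation from the
    operator's intended pose plus physical instability of the grasp."""
    tracking_error = sum((c - h) ** 2 for c, h in zip(config, human_target))
    return w_human * tracking_error + w_phys * (1.0 - stability_metric(config))

def choose_best(candidates, human_target, stability_metric):
    """Semi-autonomous selection: pick the candidate configuration that
    minimizes the combined human-tracking / stability score."""
    return min(candidates,
               key=lambda c: score_configuration(c, human_target,
                                                 stability_metric))
```

Objective 3, adapting to operator preferences, would correspond to updating the weights (or the scoring model itself) from the operator's corrective adjustments over time.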

“The combination of these objectives,” Zhang said, “will bridge research gaps in both robotic manipulation and human-robot cooperation in telemanipulation to form a novel semi-autonomous framework to support complex object manipulation for telerobotics.”

Working in Zhang’s Intelligent Robotics and Systems Lab, the research team will use a commercially available robotic hand system and human subjects to evaluate the proposed semi-autonomous framework. Subjects will remotely control the robot to perform three case studies in two scenarios. In the first scenario, the robot must maintain its grasp on an object while manipulating the object’s rotational properties (for example, opening a jar). The second scenario is more complex: the robot must change both the position and orientation of a tool in its hand.

Figure: Teleoperation setup using motion capture as control inputs, in which an operator indirectly interacts with an object in the robot’s environment.

Zhang’s research promotes the paradigms of NSF’s M3X program to develop intuitive technology that integrates with the human workforce. To extend the research beyond the lab and into the classroom, Zhang will incorporate aspects of the system—such as human factors, artificial intelligence, robotic manipulation, and teleoperation-related applications—into undergraduate and graduate courses to give students firsthand experience in the practical applications of telerobotics.

“The potential impact of our proposed system for in-hand telemanipulation is really exciting,” said Zhang. “It will accelerate the implementation of telerobotics technology in both traditional and new applications. With the ability for humans to remotely feed complex manipulation commands to robots, robots could replace humans in dangerous tasks or inaccessible environments.”