Manipulation Planning For A Scoop End Effector

Crystal Chao, Tiffany Chen, Advait Jain

Project Abstract:

[Figure: the Meka robot platform, labeled]

We propose implementing object manipulation planning using two 7-DOF compliant robot arms by Meka Robotics, which will be outfitted with unique end effectors that have a scooping affordance. Specifically, the left end effector will serve as a scooping tool (a wedge or flat pan) and the right end effector will be a tool that can push objects onto the scoop. This robot will be tasked with picking up a desired object from a cluttered workspace.

For this platform, the end effector positions for manipulation of a given object are highly constrained. Free space around the desired object may need to be created by moving other objects out of the way. This will be accomplished using a classical planning algorithm that produces an action sequence for directional object pushes. The collision-free joint trajectories for executing each individual action will be produced using a motion planning algorithm based on Rapidly Exploring Random Trees (RRTs). These trajectories will be post-processed for smoothness and efficiency.

[Figures: dustpan end effector; scoop end effector]

Related Work:

  • This paper describes a dustpan end effector used to grasp objects in front of the robot without any prior knowledge of the object. The test cases were uncluttered, with only one object present.
    • 1000 Trials: An Empirically Validated End Effector that Robustly Grasps Objects from the Floor. Zhe Xu, Travis Deyle, and Charles C. Kemp. IEEE International Conference on Robotics and Automation (ICRA), 2009.
  • These two papers use RRT-based planners on arms with a similar number of DOFs to our platform for manipulating objects with known geometries.
    • Grasp Synthesis in Cluttered Environments for Dexterous Hands. Dmitry Berenson and Siddhartha Srinivasa. IEEE-RAS International Conference on Humanoid Robots (Humanoids08), December 2008.
    • Toward Humanoid Manipulation in Human-Centred Environments. T. Asfour, P. Azad, N. Vahrenkamp, K. Regenstein, A. Bierbaum, K. Welke, J. Schroeder, and R. Dillmann. Robotics and Autonomous Systems, 2008.
  • This paper uses point clouds from a laser scanner together with a sampling-based planner.
    • Real-time Perception-Guided Motion Planning for a Personal Robot. R. B. Rusu, I. A. Sucan, B. Gerkey, S. Chitta, M. Beetz, and L. E. Kavraki. In Proceedings of the 22nd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009.
  • We will be using object and table segmentation as described in this paper.
    • EL-E: An Assistive Mobile Manipulator that Autonomously Fetches Objects from Flat Surfaces. Advait Jain and Charles C. Kemp. Autonomous Robots, 2009.

Hardware Setup

The platform consists of two 7-DOF compliant arms from Meka Robotics. The left end effector will serve as a scooping tool (a wedge or flat pan), and the right end effector will be a tool that can push objects onto the scoop.

Proposed Work:

Planning Challenges
  • Bimanual manipulation planning with unique end effectors — The platform for this project uses two 7-DOF kinematic chains with scoop end effectors that must be coordinated in order to be used for manipulation. The unique scoop end effector design enables flexible object collection and transportation without needing knowledge of complex object-specific grasp strategies, but it also requires extremely constrained positioning. The challenge is achieving both the kinematic configuration of the robot and the object configuration in the environment that allow the end effectors to pick up the desired object.
  • Planning and collision detection with point clouds from a noisy scanner. Point clouds typically suffer from occlusion effects and provide raw 3D points rather than a polygonal mesh representation of the obstacles in the world. We would like to plan and grasp objects without prior models of them; a minimal sketch of a point-based collision check follows this list.
  • Creating smooth trajectories — A secondary objective of this project is producing smooth joint trajectories to be executed on the robot. Valid trajectories produced by motion planning algorithms are not necessarily optimal or natural-looking. It can be advantageous to post-process these trajectories to produce final trajectories that (1) take less time to execute, (2) produce smoother-looking motion, (3) avoid joint limits, and (4) possibly fulfill other desired requirements. Changes to the trajectories must also maintain the constraints of the original problem (collision-free motion to the desired goal).
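One reason planning with point clouds is hard is that there is no mesh to test against, only raw points. Below is a minimal sketch of the kind of collision check we have in mind, assuming the arm links are approximated by bounding spheres and the scan is an (N, 3) NumPy array; the sphere centers, radii, and noise padding are illustrative placeholders, not values from our system.

```python
# Minimal point-cloud collision check: approximate the arm links with
# bounding spheres and query a KD-tree built from the scan points.
import numpy as np
from scipy.spatial import cKDTree

def in_collision(sphere_centers, sphere_radii, cloud_tree):
    """Return True if any bounding sphere contains at least one scan point.

    sphere_centers: (M, 3) sphere centers sampled along the arm links
    sphere_radii:   (M,) sphere radii in meters, padded for sensor noise
    cloud_tree:     cKDTree built once per scan from the (N, 3) cloud
    """
    for center, radius in zip(sphere_centers, sphere_radii):
        # query_ball_point returns the indices of all points within radius
        if cloud_tree.query_ball_point(center, radius):
            return True
    return False

# Toy usage with a random cloud standing in for a laser scan.
cloud = np.random.uniform(-1.0, 1.0, size=(5000, 3))
tree = cKDTree(cloud)
centers = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.3]])
radii = np.array([0.08, 0.08])
print(in_collision(centers, radii, tree))
```

Note that this check is optimistic exactly where the scan is blind: occluded regions contain no points and therefore look like free space.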
Proposed Solutions

We propose a 3-step approach to creating joint trajectories for the robot to execute: classical planning for action sequences, motion planning for executing each action, and trajectory smoothing for improved final trajectories.

  • RRTs for kinematic configuration — We intend to use Rapidly Exploring Random Trees (RRTs) for the motion planning component of this project.
  • Classical planning for environment configuration — The area surrounding the desired object is not necessarily free space; obstructing objects may need to be pushed to locations that clear it. For this problem, we intend to use a classical planning algorithm to determine the sequence of directional pushes that creates the required free space.
  • Post-processing for trajectory smoothing — We intend to try several methods for achieving the desired trajectory attributes, such as pruning circuitous routes out of the RRT path. We will also quantify metrics for these attributes in order to evaluate the methods we try; a sketch combining an RRT with randomized path shortcutting follows this list.
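As a concrete version of the first and third bullets, here is a compact configuration-space RRT with randomized shortcut smoothing. The `collision_free` function is a stub standing in for the point-cloud check sketched earlier, and the step size, goal bias, and iteration counts are placeholders we would expect to tune.

```python
# Sketch: RRT in configuration space plus randomized shortcutting.
import math
import random

STEP = 0.1        # extension step size
GOAL_BIAS = 0.1   # probability of sampling the goal directly
MAX_ITERS = 5000

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def steer(q_from, q_to, step):
    """Move from q_from toward q_to by at most `step`."""
    d = dist(q_from, q_to)
    if d <= step:
        return q_to
    return tuple(x + step * (y - x) / d for x, y in zip(q_from, q_to))

def collision_free(q):
    # Stub: the real check would run forward kinematics and test the
    # arm's bounding spheres against the scan KD-tree.
    return True

def edge_free(q_a, q_b, resolution=0.02):
    """Check intermediate configurations along the straight segment."""
    n = max(2, int(dist(q_a, q_b) / resolution))
    return all(collision_free(steer(q_a, q_b, dist(q_a, q_b) * i / n))
               for i in range(n + 1))

def rrt(q_start, q_goal, sample):
    parents = {q_start: None}
    for _ in range(MAX_ITERS):
        q_rand = q_goal if random.random() < GOAL_BIAS else sample()
        q_near = min(parents, key=lambda q: dist(q, q_rand))
        q_new = steer(q_near, q_rand, STEP)
        if q_new in parents or not (collision_free(q_new)
                                    and edge_free(q_near, q_new)):
            continue
        parents[q_new] = q_near
        if dist(q_new, q_goal) < STEP and edge_free(q_new, q_goal):
            if q_goal not in parents:
                parents[q_goal] = q_new
            path = [q_goal]              # walk back up the tree
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return list(reversed(path))
    return None

def shortcut(path, tries=200):
    """Splice out waypoints whenever a straight segment between two
    non-adjacent waypoints is collision-free (removes circuitous routes)."""
    path = list(path)
    for _ in range(tries):
        if len(path) < 3:
            break
        i, j = sorted(random.sample(range(len(path)), 2))
        if j - i > 1 and edge_free(path[i], path[j]):
            path = path[:i + 1] + path[j:]
    return path

# Toy usage in 2-D; our real configurations are 7-tuples of joint angles.
sampler = lambda: (random.uniform(-2, 2), random.uniform(-2, 2))
path = rrt((0.0, 0.0), (1.0, 1.0), sampler)
if path is not None:
    path = shortcut(path)
```

Shortcutting is only one of the post-processing methods we intend to try; it addresses circuitous routes but not, by itself, execution time or joint-limit avoidance.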
Implementation Challenges
  • We need a safe and intuitive way to test code before running it on the actual robot.
  • We need an accurate representation of the world state.
  • We need to integrate several different code modules for motion planning, classical planning, perception, and robot control into a single pipeline.
Proposed Solutions
  • Simulation — We will create a simulation environment by importing a model of the robot into RST, and we will use this environment to get the code working before running it on the robot. We will evaluate the success of our project both in simulation and on the real robot.
  • Perception — The environment configuration will be determined by segmenting the workspace and objects from laser point clouds. Since our project is not focused on perception, we plan to use larger objects that can be detected more easily and robustly, and we will maintain a planarity requirement for objects on the workspace (no stacking); a plane-fitting sketch for the table segmentation follows this list.
  • Interfacing — We will need to interface RST (written in C++) with our robot controllers (written in Python).
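As a sketch of the perception step, assuming the scan arrives as an (N, 3) NumPy array: a RANSAC plane fit finds the tabletop, and points within a height band above the plane become object candidates. All thresholds below are illustrative, not tuned values.

```python
# Sketch: segment the tabletop from a laser point cloud with RANSAC,
# then keep points in a height band above the plane as object candidates.
import numpy as np

def ransac_plane(points, iters=200, inlier_tol=0.01):
    """Fit a plane to an (N, 3) cloud; returns (unit normal n, offset d)
    such that points on the plane satisfy n . p = d."""
    best_count, best_model = 0, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = n @ sample[0]
        count = np.sum(np.abs(points @ n - d) < inlier_tol)
        if count > best_count:
            best_count, best_model = count, (n, d)
    return best_model

def objects_above_table(points, n, d, min_h=0.02, max_h=0.40):
    """Return points between min_h and max_h meters above the table plane."""
    if n[2] < 0:                             # orient the normal upward
        n, d = -n, -d
    heights = points @ n - d
    return points[(heights > min_h) & (heights < max_h)]

# Hypothetical usage once we have a scan as an (N, 3) array:
# n, d = ransac_plane(cloud)
# candidates = objects_above_table(cloud, n, d)
```

The planarity requirement (no stacking) should let us treat each cluster of candidate points above the plane as a single object.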

Timeline

Week   | General Task                                             | Crystal             | Tiffany            | Advait
Week 1 | Get project ready to present.                            | Create slides       | Write document     | Research references
Week 2 | Prepare platform and create simulation environment.      | Arm 3ds files       | CAD end effectors  | Mount end effectors
Week 3 | Get motion planning component working in simulation.     | Extreme programming (all three)
Week 4 | Get classical planning component working in simulation.  | Extreme programming (all three)
Week 5 | Work on trajectory refinement.                           | Extreme programming (all three)
Week 6 | Get perception working using laser data.                 | Extreme programming (all three)
Week 7 | Get whole system working on real robot.                  | Extreme programming (all three)
Week 8 | Take demo videos, prepare deliverables.                  | Writeup             | Presentation       |

Week 1

  • Created the project specification.

Week 2

  • Started working on the motion planner. We are evaluating two options: RST and OMPL (the motion planning library used with ROS).
  • Started trying out the simulator in ROS. It is more complicated than RST but might offer some nice data structures/code for doing planning with point clouds. We can run the simulator and have it avoid an obstacle using simulated point clouds.
  • However, occlusions seem to be a big problem: the planner treats occluded parts of the world as free space and can collide with objects that it cannot see.
  • All of this was done with the ROS demo code; we still need to figure out how to interface the ROS code with the code that we will write and with our robot.

Week 3

  • Discussed extensions to the sokoban planner that we wrote for Assignment 1 that would be needed to use it for the classical planning part of the project.
  • These include discretizing the surface of the table and allowing objects to occupy multiple cells (the definitions of a push and of the state transitions will have to become more complicated); a toy single-cell version is sketched after this list.
  • Scooped an object while teleoperating the arms. We feel that we can move forward with our current scoop design.
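To make the discussion concrete, here is a toy version of the extended planner with each object still occupying a single cell (the multi-cell case is what complicates the push and state-transition definitions). It runs breadth-first search over single-cell directional pushes until a keep-out region around the target is empty; the grid size and cell assignments are made up for illustration.

```python
# Toy push planner: BFS over single-cell directional pushes until the
# keep-out region around the target object is empty.
from collections import deque

DIRS = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}

def clear_region(obstacles, target, keep_out, width, height):
    """obstacles: set of occupied (x, y) cells; target: the object to scoop;
    keep_out: cells that must be empty before scooping. Returns a list of
    (cell, direction) pushes, or None if the region cannot be cleared."""
    start = frozenset(obstacles)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if not (state & keep_out):
            return plan                       # region is clear
        for obj in state:
            for name, (dx, dy) in DIRS.items():
                dest = (obj[0] + dx, obj[1] + dy)
                if (0 <= dest[0] < width and 0 <= dest[1] < height
                        and dest not in state and dest != target):
                    nxt = (state - {obj}) | {dest}
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, plan + [(obj, name)]))
    return None

# Two obstacles flanking the target at (2, 2) on a 5x5 grid.
print(clear_region({(1, 2), (3, 2)}, target=(2, 2),
                   keep_out=frozenset({(1, 2), (3, 2)}),
                   width=5, height=5))
```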

Week 4

  • Used classical planning to push two obstacle objects away from a target object in order to clear space for the left and right end effectors to prepare for the scooping behavior.
  • Implemented behaviors that lower the left and right end effectors to the table until contact and then maintain a constant downward force as the end effector slides across the surface; a control-loop sketch follows this list.
  • Took a scan of a target object with two obstacle objects to generate a 3D point cloud. We will work on segmenting the objects next.
  • We will determine how much cleared space around the target object is needed to perform the scooping behavior, and we will also attempt to scoop objects autonomously.
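For reference, the structure of the lower-until-contact and constant-force slide behaviors can be sketched as a simple proportional servo loop. The `arm` object and its `downward_force` and `move_delta` methods are hypothetical stand-ins rather than the real robot interface, and all gains and thresholds are placeholders.

```python
# Sketch: lower until contact, then slide while holding constant force.
# The `arm` interface (downward_force, move_delta) is hypothetical.
import time

CONTACT_FORCE = 2.0   # N, downward force that counts as table contact
HOLD_FORCE = 3.0      # N, force to maintain while sliding
KP = 0.002            # m per N, proportional gain on the force error
DT = 0.02             # s, control period (50 Hz)

def lower_until_contact(arm, speed=0.01):
    """Move the end effector straight down until the measured downward
    force indicates the table has been reached."""
    while arm.downward_force() < CONTACT_FORCE:
        arm.move_delta(dz=-speed * DT)       # small downward step
        time.sleep(DT)

def slide_with_constant_force(arm, distance, speed=0.02):
    """Slide across the surface while servoing height to hold HOLD_FORCE."""
    moved = 0.0
    while moved < distance:
        error = HOLD_FORCE - arm.downward_force()
        # Positive error (too little force) commands a small downward step.
        arm.move_delta(dx=speed * DT, dz=-KP * error)
        moved += speed * DT
        time.sleep(DT)
```

Because the arms are compliant, small errors in the commanded height produce only modest force changes, which is what makes a simple proportional scheme like this plausible.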

