5.3.6 tf2_ros simple API: lookup_transform and transform

Course subject(s) Module 5. Robot Vision

After the in-depth look at the squabbling robots in the last video lecture, we will now dive deeper into the lookup_transform and transform APIs.

Script

This time, we will be looking at the transform_object_pose script, which lives in the same folder as the robot_squabblers script.

  • Again, we see a list of necessary imports, which includes rospy, the tf2_ros module, and a bunch of necessary message modules.
  • We also create a new tf buffer and listener, like in the previous video.
  • In the callback function for the logical camera, we can see that the logical camera reports the pose of each detected object in the reference frame of the camera itself.
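The callback's job of pulling object poses out of the camera message can be sketched without a live ROS system. In this sketch, plain-Python stand-ins replace the actual ROS message types, and the message layout (a `models` list whose entries carry a `type` and a `pose`) is an assumption based on the lecture, not the exact hrwros message definition:

```python
from types import SimpleNamespace

def logical_camera_callback(msg):
    """Collect each detected object's pose, still expressed in the
    logical camera's own reference frame."""
    return {model.type: model.pose for model in msg.models}

# Stand-in for a logical-camera message (field names are assumptions):
pose = SimpleNamespace(
    position=SimpleNamespace(x=0.1, y=0.2, z=0.3),
    orientation=SimpleNamespace(x=0.0, y=0.0, z=0.0, w=1.0),
)
msg = SimpleNamespace(models=[SimpleNamespace(type='object', pose=pose)])

poses = logical_camera_callback(msg)
print(poses['object'].position.x)  # 0.1
```

In the real script, a rospy subscriber delivers the camera messages to this callback automatically; the extraction logic itself is the same.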

Shell

Let’s take a sidestep by opening a new CCS, sourcing it, and launching the factory environment in Gazebo. Don’t forget to verify that all robots have shown up!

  • Robot 1 is currently in the field of view (FOV) of the camera. We don’t want this, so let’s use MoveIt! Commander in a new CCS to move it out of the way.
  • R1Home is a suitable position for it.

Now that the arm is out of the way, let’s spawn an object on the conveyor belt!

  • In a new, sourced, CCS shell, execute the following command:
    $ rosservice call /spawn_object_once
  • Now you can see a white box object on the conveyor belt near robot 1 in Gazebo. Let’s find out what the logical camera can see:
    $ rostopic echo /hrwros/logical_camera
  • Search the console output for the object string. It will tell you the object’s name, and its position and rotation with respect to the camera.

Script

With this in mind, let’s go back to the code:

  • In the script, we subscribe to the camera output topic and process incoming messages in a callback function.
  • The transform API only works with stamped poses, because the transform can change over time as robot parts move. You cannot always use the current pose of a link; sometimes you have to look up the transform at the moment in the past when the pose was generated!
  • The script therefore creates a new header with frame and timestamp information.
  • After that, we update the pose information using information from the logical camera.
  • Now we have everything we need for the transform API. In this example, we will transform the pose of the object from the camera reference frame to the world reference frame.
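Under the hood, transforming a point from the camera frame to the world frame is a rotation plus a translation. The sketch below reproduces that math with NumPy so you can see what the transform API computes for you; the frame placement and numbers are made up for illustration and do not come from the actual factory setup:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (x, y, z, w)."""
    u, w = np.asarray(q[:3], dtype=float), q[3]
    v = np.asarray(v, dtype=float)
    # Expansion of the sandwich product q * v * q^-1 for unit quaternions.
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def camera_to_world(p_camera, cam_translation, cam_rotation):
    """Express a point given in the camera frame in the world frame.
    (cam_translation, cam_rotation) locate the camera frame in the world
    frame -- the kind of data lookup_transform returns."""
    return quat_rotate(cam_rotation, p_camera) + np.asarray(cam_translation)

# Toy setup: camera 1 m above the world origin, rotated 90 degrees
# about the world z-axis. An object 1 m along the camera's x-axis
# then ends up at (0, 1, 1) in world coordinates.
s = np.sqrt(0.5)
p_world = camera_to_world([1.0, 0.0, 0.0],
                          cam_translation=[0.0, 0.0, 1.0],
                          cam_rotation=[0.0, 0.0, s, s])
print(np.round(p_world, 3))  # [0. 1. 1.]
```

In the actual script none of this is written by hand: the tf2 buffer's transform call looks up the camera-to-world transform at the pose's timestamp and applies exactly this rotation and translation.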

Shell

  • Switch to a new, sourced CCS shell, and run the script:
    $ rosrun hrwros_week5 transform_object_pose.py
  • You can now see the output of the script in the terminal.
    • The first pose is the pose of the object in the world reference frame.
    • The second pose is the pose of the object in the camera reference frame.
    • Using the right-hand rule, you can verify the poses are correct!
    • Again, during this course, we will take care of the rotations for you.
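The right-hand-rule check on a printed orientation can itself be automated: convert the quaternion to a rotation matrix, whose columns are the rotated frame's x-, y-, and z-axes, and verify that x cross y gives z. This small sketch is not part of the course scripts, just a way to confirm an orientation is a proper right-handed rotation:

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix for unit quaternion q = (x, y, z, w); its columns
    are the rotated frame's axes expressed in the parent frame."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

# Example quaternion: 90 degrees about the z-axis.
R = quat_to_matrix([0.0, 0.0, np.sqrt(0.5), np.sqrt(0.5)])
x_axis, y_axis, z_axis = R[:, 0], R[:, 1], R[:, 2]

# Right-hand rule: x cross y must equal z for a proper rotation.
print(np.allclose(np.cross(x_axis, y_axis), z_axis))  # True
```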
Hello (Real) World with ROS - Robot Operating System by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/hello-real-world-with-ros-robot-operating-systems//.