The Robot Operating System (ROS) has a multitude of ready-to-use packages, and the ROS navigation stack is one of the most powerful of them. The navigation stack is the go-to package for autonomous navigation of mobile robots with different drive mechanisms and onboard sensing technologies, combining multiple high-level control and motion-planning algorithms with low-level actuator control strategies.
This article explains all the components of the ROS navigation stack and how to configure and use its tools for autonomous robot navigation.
Salient Features of the ROS Navigation Stack:
It only works with differential drive and holonomic drive robots with a rectangular body footprint. If the robot has a custom shape, a rectangular bounding box can be used.
It needs the positions of the joints and sensors, and the geometric relationships between them.
The commands it issues to the robot are linear and angular velocities for the robot's center (see the sketch after this list).
It requires at least one planar scanner, such as a 2D LiDAR, to create the environment map, or an array of multiple range finders for a similar effect.
It has global and local planners as well as global and local costmaps for the paths. Additionally, there are recovery behaviors to address unexpected interactions with the dynamic environment.
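As a brief illustration, the Python sketch below shows how a differential-drive base controller might consume those commanded center velocities, assuming the conventional geometry_msgs/Twist message on the cmd_vel topic; the wheel separation value is a made-up placeholder.

```python
#!/usr/bin/env python
# Sketch of a differential-drive base consuming the navigation stack's
# velocity commands (linear v, angular w) for the robot's center.
# The cmd_vel topic and Twist message are ROS conventions; the wheel
# separation below is a hypothetical value.
import rospy
from geometry_msgs.msg import Twist

WHEEL_SEPARATION = 0.30  # metres (assumed robot geometry)

def cmd_vel_callback(msg):
    v = msg.linear.x   # forward velocity of the base center (m/s)
    w = msg.angular.z  # yaw rate of the base center (rad/s)
    # Split the center velocities into left/right wheel linear speeds.
    v_left = v - w * WHEEL_SEPARATION / 2.0
    v_right = v + w * WHEEL_SEPARATION / 2.0
    rospy.loginfo("wheel speeds: left=%.3f m/s, right=%.3f m/s", v_left, v_right)

if __name__ == '__main__':
    rospy.init_node('diff_drive_base_sketch')
    rospy.Subscriber('cmd_vel', Twist, cmd_vel_callback)
    rospy.spin()
```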
All the nodes within the white box in the image above are already provided by ROS; the ones outside it must be provided or configured by the user for the specific robot. The robot-specific components include sensor positioning, odometry information, wheel encoder types and more.
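Once the robot-specific pieces are configured, a navigation goal is typically sent to the stack's move_base node through actionlib. The sketch below assumes a goal expressed in the map frame with placeholder coordinates.

```python
#!/usr/bin/env python
# Sketch: sending a single navigation goal to move_base via actionlib.
# The goal coordinates are arbitrary placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == '__main__':
    rospy.init_node('send_nav_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'   # goal expressed in the map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0     # placeholder goal position (m)
    goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("move_base finished with state: %d", client.get_state())
```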
RIGID BODY TRANSFORMATIONS
The tf transform tree, which holds all relative transformations between the robot base, the sensors and the actuators, forms the spine of all navigation tasks.
The standard terminology and coordinate frames used in rigid body transformations are:
Map Frame: This is the static frame whose origin is fixed in the map that the robot is using; by convention, it is usually the point in the map where the robot starts.
The position of the robot is usually represented with respect to the map frame.
Sensor Frame: The onboard sensors interpret the world with respect to their own coordinate frames, called sensor frames. For example, a stereo camera measures the depth of an object with respect to its own focal center (its sensor frame), but that measurement is usually converted to be expressed with respect to the center of the robot base.
LiDAR Frame: For the LiDAR sensor, all the distance/range measurements are made with respect to the physical center of the rotating cylinder. This is an example of a sensor frame.
Like all sensor-frame data, the LiDAR measurements are converted to the map frame so that the map of the world and the locations of obstacles and occlusions can be represented globally.
Body Frame: Every robot has a frame attached to itself. It is usually centered at the midpoint of the line joining the rear wheels for a differential drive robot, or at the geometric center of the four wheels for a holonomic drive robot.
Homogeneous Transformations: All measurement representations can be converted from one reference frame to another using homogeneous transformations; a small worked example follows.
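As a small worked example (using numpy, with made-up mounting offsets), the sketch below converts a point measured in a sensor frame into the body frame through a 4x4 homogeneous transform.

```python
import numpy as np

# Hypothetical setup: a sensor mounted 0.10 m ahead of and 0.05 m above
# the body frame origin, with the same axes orientation.
R = np.eye(3)                    # rotation: sensor axes aligned with body axes
t = np.array([0.10, 0.0, 0.05])  # translation of the sensor in the body frame

# Assemble the 4x4 homogeneous transform from the sensor frame to the body frame.
T_body_sensor = np.eye(4)
T_body_sensor[:3, :3] = R
T_body_sensor[:3, 3] = t

# A point measured 2 m straight ahead of the sensor, in homogeneous coordinates.
p_sensor = np.array([2.0, 0.0, 0.0, 1.0])
p_body = T_body_sensor.dot(p_sensor)
print(p_body[:3])  # -> [2.1, 0.0, 0.05], the same point expressed in the body frame
```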
tf Package: The ROS tf package is a standard package that tracks multiple 3D coordinate frames and maintains the tree between them.
Localization Engine: Several localization engines, such as the AMCL (Adaptive Monte Carlo Localization) package, utilize the complete tf tree to convert sensor measurements from their respective frames to the frame required for the computation. All localization engines provide the robot pose with respect to the map frame.
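For illustration, a node can query the tf tree for the robot pose in the map frame that the localization engine maintains; the frame names 'map' and 'base_link' below are common conventions and may differ on a specific robot.

```python
#!/usr/bin/env python
# Sketch: looking up the robot pose (map -> base_link) from the tf tree.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('robot_pose_query')
    listener = tf.TransformListener()
    rate = rospy.Rate(1.0)
    while not rospy.is_shutdown():
        try:
            trans, rot = listener.lookupTransform('map', 'base_link', rospy.Time(0))
            rospy.loginfo("robot at x=%.2f y=%.2f (quaternion %s)", trans[0], trans[1], rot)
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            pass  # transform not available yet
        rate.sleep()
```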
Odom Frame: The origin of the odometry frame is the point at which the robot starts its motion. This is the frame with respect to which the odometry of the robot is measured throughout its motion.
Odometry is used to estimate the robot pose relative to the starting location. It can be computed by (a) differencing wheel rotations measured using encoder counts, (b) integrating the commanded robot velocities over time, or (c) integrating the measurements from an IMU sensor; a minimal sketch of option (a) follows.
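The sketch below dead-reckons the pose from incremental encoder counts; the wheel radius, wheel separation and encoder resolution are hypothetical values.

```python
import math

WHEEL_RADIUS = 0.05      # m (assumed)
WHEEL_SEPARATION = 0.30  # m (assumed)
TICKS_PER_REV = 1024     # encoder resolution (assumed)

x, y, theta = 0.0, 0.0, 0.0  # robot pose in the odom frame

def update_odometry(delta_ticks_left, delta_ticks_right):
    """Integrate one pair of encoder increments into the (x, y, theta) estimate."""
    global x, y, theta
    # Distance travelled by each wheel since the last update.
    d_left = 2.0 * math.pi * WHEEL_RADIUS * delta_ticks_left / TICKS_PER_REV
    d_right = 2.0 * math.pi * WHEEL_RADIUS * delta_ticks_right / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_SEPARATION
    # Simple Euler integration of the body-frame motion.
    theta += d_theta
    x += d_center * math.cos(theta)
    y += d_center * math.sin(theta)

update_odometry(10, 12)  # example encoder increments
print(x, y, theta)
```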
Odometry: All odometry sources are prone to errors: drift in values (accelerometer), bias (gyroscope), wheel slippage, incorrect wheel dimensions, and inaccurate wheel mountings or eccentric wheel structures.
Odometry data is continuous, evolves smoothly and is accurate in the short term with respect to the local reference frame.
URDF files can also have definitions for the frames for the various sensors on the robot.
tf debugging tools: The standard commands for debugging issues in the tf tree include roswtf, rosrun tf view_frames and rosrun tf tf_monitor, used respectively for general debugging, visualizing the tf tree, and breaking down the chain between two frames into its individual transforms.
CREATING TRANSFORMS
While it is possible to store the individual transforms and derive transformations across multiple frames using hand-crafted mathematical operations, the tf package helps avoid these cumbersome steps.
The tf library can handle all types and quantities of sensor and robot transforms. If the LiDAR is mounted 10 cm ahead of and 5 cm above the body frame center, with the same axes orientation, the tf tree stores this as a fixed rigid-body transformation and can thus convert every depth reading from the sensor frame to the body frame and use it to create a map.
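A minimal sketch of broadcasting that LiDAR-to-body transform with the tf package is shown below; the frame names 'base_link' and 'laser' are common conventions rather than requirements.

```python
#!/usr/bin/env python
# Sketch: broadcasting the fixed LiDAR-to-body transform (10 cm ahead,
# 5 cm above, same axes orientation) so the rest of the tf tree can use it.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('lidar_tf_broadcaster')
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        broadcaster.sendTransform((0.10, 0.0, 0.05),     # translation (x, y, z) in metres
                                  (0.0, 0.0, 0.0, 1.0),  # identity quaternion: same axes
                                  rospy.Time.now(),
                                  'laser',               # child (sensor) frame
                                  'base_link')           # parent (body) frame
        rate.sleep()
```

For a fixed mounting like this, the same transform can also be published with the tf package's static_transform_publisher node from a launch file or the command line.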
Copyright @Akshay Kumar | Last Updated on 05/25/2019