Warning: This page is located in the archive.

**Deadline:** 14 April 2024, 23:59 CET (Monday labs) / 10 April 2024, 23:59 CET (Thursday labs)

Your task will be to finish the prepared factorgraph localization pipeline to allow more precise localization of the robot and markers placed in the world.

**Responsible lecturer:** Martin Pecka (peckama2@fel.cvut.cz)

**Relevant lectures:** 00_localization_mle.pdf, 00_localization_se2.pdf

One of the drawbacks of ICP SLAM was that it was difficult to correct the ICP map built from historical measurements. Factorgraph SLAM makes it possible to propagate all newly acquired knowledge about the world (observations/measurements) through the whole history of the robot motion, and potentially to fix even mapping/localization errors made earlier.

The particular task we solve in this homework has the following definition:

- As we solve the task in 2D, each pose variable or measurement is a 3-tuple with coordinates $x$, $y$ and $yaw$.
- Let $x_t$ be the robot pose 3-tuple of variables for time instant $t$, $m^r$ be the relative marker pose 3-tuple of variables, and $z^{\ast}_t$ be a 3-tuple of measurements measured at time $t$. $w2r$ is a function converting from *wcf* to *rcf* (from world frame to robot frame).
- The factorgraph is defined by this optimization problem:

${\displaystyle \underset{x_0,\ldots,x_T,m^r}{\mathrm{argmin}} \sum_t ||w2r(x_{t+1}, x_t) - z^{odom}_t||^2_{\Sigma^{odom}_t} + \sum_t ||w2r(m^a, x_t) - z^{m^a}_t||^2_{\Sigma^{m^a}_t} + \sum_t ||w2r(m^r, x_t) - z^{m^r}_t||^2_{\Sigma^{m^r}_t} + \sum_t ||w2r(x_{t+1}, x_t) - z^{icp}_t||^2_{\Sigma^{icp}_t}}$

- The terms inside the norms are called *residuals*. These are the main point of interest to us, together with their derivatives.
- The main goal is to estimate $m^r$ as precisely as possible, and to localize the robot well enough that it can return to its starting position (the control part of this problem will be the goal of the following homeworks).
- The problem can be formulated as Non-Linear Least Squares, for which there are plenty of solvers available. We have chosen the Scikit Sparse Cholmod solver, as it is fast and reliable when working with sparse matrices.

- You should already have the ARO ROS environment installed from Lab 01. All the necessary files are already there.
- To fully test this homework, you will also need a working implementation of Homework 3 (ICP SLAM).
- Find the places in `aro_exploration/src/aro_localization/factor_graph.py` with comment `TODO HW 4:` or `FIXME:` and fill in the missing pieces so that the Factorgraph SLAM works.
- Test your implementation both using the tests described in Visualization and Testing and interactively in simulation or on the real robots. The Brute testing dataset contains several difficult instances which will really test the quality of your implementation.
- Also make sure that the optimization is fast enough when using the simulator, usually under 100 ms (you can see the optimization duration printed to the console when `debug_localization` is true).

The following tests will be run to evaluate your submission:

- **Implementation of `res()`** (1 point, required)
- **Implementation of `res_jac()`** tested directly (2 points, required)
- **Localization of relative marker on a set of static trajectories** (2 points)
  - Your `factor_graph.py` will be used in a manner very similar to the example usage in its `factor_graph_demo()` function.

Run the script `aro_exploration/create_hw_04_zip.sh` to create an archive for Brute, and upload the ZIP to Brute. Your `factor_graph.py` file will be run on a set of static datasets to verify that it works correctly. Only `factor_graph.py` and the YAML configs will be used from your codebase; all other files will be ignored (your `icp.py` implementation from HW03 is also a part of the archive, but it is not tested in HW04).

Up to 5 bonus points will be awarded to students who verify and evaluate the accuracy of their localization and mapping algorithms on the real robots (HW03 and HW04).

**Deadline:** Before the start of Lab 9.

**What to do:**

- deploy your workspace on the robot
- run the localization on the robot, e.g. `roslaunch aro_exploration aro_localization_real.launch`; add `run_driver:=false` if you run the robot driver in a separate window
- record data using `roslaunch --screen aro_exploration/launch/localization/record.launch record_output:=true`
- test that the recording works correctly: use `rosbag info` to check whether the output bag has some messages on topic `fused_odom`
- try to create some difficult situations for localization when recording the data
- take a few photos or screenshots during your try
- write a 1-2 page PDF report analyzing the performance of your algorithm

**What to submit:** A single ZIP archive containing the BAG file(s), PDF and photos. File upload limit is 350 MB. Submit the archive to task hw04-bonus in Brute.

These files are relevant for the localization. It should be okay to edit just `factor_graph.py`, but you might also want to change some of the YAML configs. Other files should not be modified.

- `aro_exploration/config/localization` contains various configurations of the package.
- `aro_exploration/launch/localization` provides convenient launchers for the nodes below:
  - `aro_localization.launch` launches the localization nodes.
  - `aro_localization_sim.launch` launches everything needed for interactive testing with Gazebo and ROS.
- `aro_exploration/nodes/localization/aro_localization` contains the ROS node that uses the `factor_graph.py` module and feeds it with data coming from ROS.
- `aro_exploration/src/aro_localization/factor_graph.py` contains the implementation of factorgraph-based localization.

You can directly run the Python module using `python3 aro_exploration/src/aro_localization/factor_graph.py`. There are some static data your code will be tested on. Passing these tests is mandatory for having a chance that the whole localization algorithm will work. The script will tell you the achieved marker localization accuracy on a randomly generated trajectory. Try to achieve accuracy better than 0.2 m in most cases.

You can also run `python3 -m unittest aro_exploration/src/aro_localization/factor_graph.py` (or instruct your IDE to run the unit tests). There are some unit tests your code will be tested with. Passing these tests is mandatory (but not sufficient) for getting points for the tested functionalities.

You can also run `roslaunch aro_exploration aro_localization_sim.launch world:=aro_maze_1` to test your code interactively in various simulated worlds.

To use your random walking algorithm from HW 02, call the roslaunch like this: `roslaunch aro_exploration aro_localization_sim.launch world:=aro_maze_1 reactive_control:=true`

The launch file also has a parameter `fuse_icp_slam`, which controls whether your ICP SLAM implementation generates additional factors for the localization. This parameter is turned on by default.

The launch file also has an argument `debug_localization`, which enables debug (green) messages in the console from the `aro_localization` node. This argument is set to true when launching `aro_localization_sim.launch` and defaults to false when the localization is launched by later homeworks or the semestral work. However, you are free to pass `debug_localization:=true` even to these later homeworks to see how the factorgraph is doing. If, in your code, you use `loginfo()` or `logwarn()` to log some information, it will be shown regardless of the setting of this argument. Only `logdebug()` prints are affected.

**Difficult worlds**

To test the localization on a world where ICP odometry may fail horribly, test with the world `aro_hallway_1` and `point_to_point` ICP alignment.

**Gamepad teleoperation**

If you have a gamepad that appears as a `/dev/js0` device in Linux, you should be able to use it to control the simulated robot. Keep button `A` pressed all the time and use the left thumbstick to control the robot. XBox gamepads can be used with the xow or xone drivers (do not mix both!).

It can happen that your gamepad model is not recognized. In that case, the console gets spammed by a lot of error messages. To get rid of them, disable gamepad control by adding the roslaunch argument `joy_teleop:=false`.

**Keyboard teleoperation**

If you add the roslaunch argument `keyboard_teleop:=true`, a window opens in which you can control the robot using the WXAD keys to test your algorithm.

**RViz visualization**

The launch file `aro_exploration/launch/localization/aro_localization_sim.launch` also starts a preconfigured RViz window that shows several odometry estimates, together with a visualization of the marker pose estimates and ground truth. The color coding of the visualized entities is the same as in the Factorgraph visualization section. The displayed image shows a superposition of the camera image and the detected Apriltags. The small pink ellipses visualize the covariance of the ICP odometry (it is enlarged for visualization purposes by a factor).

**Factorgraph visualization**

The launch file `aro_exploration/launch/localization/aro_localization_sim.launch` also opens a Matplotlib window that shows some interesting parts of the factorgraph internals. Use this window to examine the values of residuals, distances from ground truth etc.

You can see multiple odometries. The blue one is the output of the factor graph localization. The red odometry is the ground truth from the simulator. The green one is ICP odometry, and the yellow one is wheel odometry. You can also see a visualization of the poses of the markers (and the estimated pose of the relative marker).

Second from the top, there is a visualization of the yaw (heading) of the robot. The color coding is the same as for the first subplot.

The third and fourth subplots are visualizations of the residuals (errors in individual factors). The first of these two shows unscaled residuals, i.e. residuals before applying cost. The latter shows the actual residual values including the costs that were used for the optimization.

The last two graphs show the error of the odometries compared to ground truth (you can also observe the errors in each of the 3 estimated dimensions x, y, yaw).

There are a few differences between the simulator and the real robot:

- In simulation, marker detection is done from the RGB camera, while on the real robot, it is done from the near-infrared camera. Try to figure out why!
- In simulation, the bumper is the whole body of the robot. On the real robot, the bumper only covers the front part of the robot.
- Quality of the wheel-IMU odometry differs between the simulation and the real robot.
- In simulation, ground truth position is available for free. To get something similar for the real robot, a complicated setup of the room would be required.

- The expressions $||X - Z||^2_{\Sigma}$ are the so-called squared Mahalanobis distances of the residuals and are equal to $(X-Z)^T\Sigma^{-1}(X-Z)$. Mathematically, you can convert them to the squared L2 norm of whitened residuals that look like $||\Sigma^{-\frac{1}{2}}(X-Z)||^2$.
- Please note that $\Sigma^{-\frac{1}{2}}$ is the inverse of the matrix square root of $\Sigma$, and not a pointwise inverse square root. In Python, you can use `scipy.linalg.sqrtm()` for the matrix square root.
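As a sanity check of the equivalence above, here is a minimal NumPy sketch; the covariance and residual values are made up purely for illustration:

```python
import numpy as np
from scipy.linalg import sqrtm

# Made-up 3x3 measurement covariance and residual for one factor.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.00],
                  [0.00, 0.00, 0.01]])
res = np.array([0.2, -0.1, 0.05])

# Squared Mahalanobis distance: (X-Z)^T Sigma^{-1} (X-Z).
d2 = res @ np.linalg.inv(Sigma) @ res

# Equivalent squared L2 norm of the whitened residual Sigma^{-1/2} (X-Z).
# Matrix square root, NOT an element-wise sqrt of the entries!
W = np.real(np.linalg.inv(sqrtm(Sigma)))  # np.real drops numerical noise
d2_whitened = float(np.sum((W @ res) ** 2))

print(np.isclose(d2, d2_whitened))  # True
```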

- The factorgraph state and measurements are expanded by one "time slice" with incoming wheeled odometry measurements (topic `/odom`) or marker detections (topic `/apriltag`). However, the odometry normally comes at 10 Hz, which would cause the FG to grow too quickly, so the ROS node "subsamples" the odometry by taking one measurement every second. This means that every second, a new $x_t$ is added to the FG, with $z^{odom}_t$ and other measurements filled in if they exist for the given time instant. To better represent marker observations, if a marker is observed, a new state is created immediately and the odometry measurements are interpolated. The marker-generated states are limited to one every 0.25 s.
- The factor graph is internally represented as the following table:

| 0 | 1 | 2 | … | N | N+1 | Meaning |
|---|---|---|---|---|---|---|
| mr | — | — | … | — | — | Relative marker wcf pose |
| x[0] | x[1] | x[2] | … | x[N] | x[N+1] | Robot trajectory in wcf |
| z_odom[0] | z_odom[1] | … | z_odom[N-1] | z_odom[N] | — | Wheel-odometry measurements |
| z_ma[0] | NaN | NaN | … | NaN | z_ma[N+1] | Absolute marker observations |
| NaN | NaN | z_mr[2] | … | z_mr[N] | NaN | Relative marker observations |
| NaN | z_icp[1] | … | z_icp[N-1] | z_icp[N] | — | ICP odometry measurements |
| t_odom[0] | t_odom[1] | … | t_odom[N-1] | t_odom[N] | — | Timestamps of x and z_odom |
| t_markers[0] | 0 | t_markers[2] | … | t_markers[N-1] | t_markers[N] | Timestamps of z_ma and z_mr |
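The odometry interpolation at marker times mentioned above can be sketched as follows. `interp_pose` is a hypothetical helper (the actual node has its own implementation); the key point is that yaw must be interpolated along the shortest angular path, not averaged naively:

```python
import numpy as np

def interp_pose(p0, p1, alpha):
    """Hypothetical helper: interpolate between (x, y, yaw) poses p0 and p1
    at fraction alpha in [0, 1]; yaw goes along the shortest angular path."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    p = (1 - alpha) * p0 + alpha * p1             # fine for x and y
    dyaw = (p1[2] - p0[2] + np.pi) % (2 * np.pi) - np.pi
    p[2] = p0[2] + alpha * dyaw                   # avoids the 2*pi jump
    return p

# Midpoint between yaw 3.0 and -3.0 is ~pi, not 0 as naive averaging would give.
print(interp_pose([0, 0, 3.0], [2, 2, -3.0], 0.5))
```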

- Put the correct formula in the function `res()` in `factor_graph.py`. Further in the text, we will use the following residuals induced by measurements at time $t$:
  - $res^{odom}_t$ is the odometry residual that is a function of $x_t$, $x_{t+1}$ and $z^{odom}_t$ (denoted `res_odom` in code).
  - $res^{mr}_t$ is the relative marker residual that is a function of $x_{t1}$, $x_{t2}$, $mr$ and $z^{mr}_t$ (denoted `res_mr` in code).
  - $res^{ma}_t$ is the absolute marker residual that is a function of $x_{t1}$, $x_{t2}$ and $z^{ma}_t$ (denoted `res_ma` in code).
  - $res^{icp}_t$ is the ICP odometry residual that is a function of $x_t$, $x_{t+1}$ and $z^{icp}_t$ (denoted `res_icp` in code).
- Each residual is the error between the observation $z$ and the estimated robot state $x$. All residuals we use are relative poses. Odometry and ICP odometry measurements give the relative motion of the robot between two time instants (in the robot frame). Marker measurements give the relative pose of the marker in the robot frame.
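For intuition, a minimal sketch of what such a relative-pose residual can look like in 2D. The `w2r` convention and the `res_odom` signature below are illustrative stand-ins, not the exact code expected in `factor_graph.py`:

```python
import numpy as np

def w2r(target, origin):
    """Pose of `target` expressed in the frame of `origin` (both are
    (x, y, yaw) 3-tuples in the world frame), i.e. a wcf -> rcf conversion."""
    dx, dy = target[0] - origin[0], target[1] - origin[1]
    c, s = np.cos(origin[2]), np.sin(origin[2])
    return np.array([c * dx + s * dy,         # x in origin's frame
                     -s * dx + c * dy,        # y in origin's frame
                     target[2] - origin[2]])  # relative yaw

def res_odom(x_t, x_t1, z_odom):
    """Odometry residual: estimated relative motion minus the measurement."""
    r = w2r(x_t1, x_t) - np.asarray(z_odom, float)
    r[2] = (r[2] + np.pi) % (2 * np.pi) - np.pi  # wrap yaw error to [-pi, pi)
    return r

# A perfect measurement yields a zero residual:
print(res_odom([0, 0, 0], [1, 0, 0], [1, 0, 0]))  # [0. 0. 0.]
```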

- Put the correct formulas in the definitions of `J` and `J1` in the `res_jac()` function. Each of these is a 3×3 submatrix of the whole factorgraph Jacobian. `J` is $\frac{\partial res^{?}_t}{\partial x_t}$. `J1` is $\frac{\partial res^{?}_t}{\partial x_{t+1}}$ or $\frac{\partial res^{?}_t}{\partial mr}$.
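As a worked example under one common `w2r` convention (illustrative only, not necessarily the exact expressions expected in `res_jac()`), the two Jacobian blocks of an odometry residual $r = w2r(x_{t+1}, x_t) - z$ can be written in closed form and checked against finite differences:

```python
import numpy as np

def w2r(target, origin):
    """Pose of `target` in the frame of `origin`; both are (x, y, yaw)."""
    dx, dy = target[0] - origin[0], target[1] - origin[1]
    c, s = np.cos(origin[2]), np.sin(origin[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, target[2] - origin[2]])

def jac_odom(x_t, x_t1):
    """Closed-form 3x3 Jacobians of w2r(x_t1, x_t) w.r.t. x_t (J) and x_t1 (J1)."""
    dx, dy = x_t1[0] - x_t[0], x_t1[1] - x_t[1]
    c, s = np.cos(x_t[2]), np.sin(x_t[2])
    J = np.array([[-c, -s, -s * dx + c * dy],   # d r / d x_t
                  [ s, -c, -c * dx - s * dy],
                  [ 0,  0, -1]])
    J1 = np.array([[ c,  s, 0],                 # d r / d x_{t+1}
                   [-s,  c, 0],
                   [ 0,  0, 1]])
    return J, J1

# Finite-difference check of the analytic Jacobian at an arbitrary point.
x_t, x_t1 = np.array([0.3, -0.2, 0.5]), np.array([1.0, 0.4, 1.1])
J, J1 = jac_odom(x_t, x_t1)
eps = 1e-6
num = np.column_stack([(w2r(x_t1, x_t + e) - w2r(x_t1, x_t - e)) / (2 * eps)
                       for e in np.eye(3) * eps])
print(np.allclose(J, num, atol=1e-5))  # True
```

Checking analytic Jacobians against finite differences like this is a cheap way to catch sign errors before running the whole optimizer.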

- You can play with the `solver_options` defaults, which define the behavior of the nonlinear least squares solver when running `factor_graph.py` directly, or you can edit `aro_exploration/config/localization/solver.yaml` and `aro_exploration/config/localization/costs_icp.yaml` to change the behavior of the solver when testing interactively in ROS. But it should work okay even with the default settings.
- To efficiently solve the Non-Linear Least Squares problem, it is essential to represent the Jacobian as a sparse matrix.
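To illustrate why sparsity matters, here is a toy sketch (a made-up chain of poses with placeholder Jacobian blocks) of one Gauss-Newton step with a block-sparse Jacobian. The homework itself delegates this solve to the Scikit Sparse Cholmod solver; plain `scipy.sparse.linalg.spsolve` stands in here to avoid an extra dependency:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy chain of T poses, 3 variables (x, y, yaw) each. Every odometry-style
# residual touches only two poses, so its Jacobian rows have just two
# nonzero 3x3 blocks - the matrix stays sparse however long the run gets.
T = 4
J = sp.lil_matrix((3 * (T - 1), 3 * T))
for t in range(T - 1):
    J[3*t:3*t+3, 3*t:3*t+3] = -np.eye(3)   # placeholder for d res / d x_t
    J[3*t:3*t+3, 3*t+3:3*t+6] = np.eye(3)  # placeholder for d res / d x_{t+1}
J = J.tocsc()

r = np.ones(3 * (T - 1))  # made-up residual vector

# One Gauss-Newton step: solve the normal equations J^T J dx = -J^T r.
# The first pose is held fixed (gauge freedom), so solve only for the rest.
A = (J.T @ J).tocsc()[3:, 3:]
b = (-J.T @ r)[3:]
dx = spla.spsolve(A, b)
print(dx)  # each later pose shifts one unit further than the previous one
```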

In this homework, you are not required to touch the ROS node at all. It is quite complicated, so we did all the heavy lifting for you. However, it might come in handy to know how the node behaves.

This is the definition of the API of the `aro_localization` node.

Only the important parameters are listed here. For the full view, refer to the `aro_exploration/nodes/localization/aro_localization` source code.

- `map_frame` (string, default `fused_odom`): Name of the frame in which the fused localization will be published.
- `odom_frame` (string): This is not an actual parameter, but we refer to it in this document as if it were. It is the name of the frame in which the wheel-IMU odometry is published. The "parameter" is automatically set to the parent frame of the first odometry message on topic `odom`.
- `body_frame` (string): This is not an actual parameter, but we refer to it in this document as if it were. It is the name of the robot body frame. The "parameter" is automatically set to the child frame of the first odometry message on topic `odom`.
- `publish_tf` (bool, default True): Whether to publish the `map_frame` → `odom_frame` transform.
- `invert_tf` (bool, default True): Whether to publish the transform `map_frame` → `odom_frame` (False) or `odom_frame` → `map_frame` (True).
- `max_iters` (int, default 50): Maximum number of optimizer iterations.
- Parameters starting with `c_` configure the scale of the costs of the individual loss terms. You can play with these to tune the algorithm behavior.

**Subscribed topics**

- `apriltag` (type apriltag_ros/AprilTagDetectionArray): Detected apriltags.
- `odom` (type nav_msgs/Odometry): Wheel-IMU odometry provided by the platform (short-term accurate, long-term inconsistent, no discrete jumps, regular frequency ~10 Hz). The position estimate is used, the velocity estimate is not.
- `icp_odom` (type nav_msgs/Odometry): Odometry from the SLAM node (if available; short-term accurate, long-term consistent, discrete jumps, mostly regular frequency ~10 Hz). The position estimate is used, the velocity estimate is not.
- `ground_truth_odom` (type nav_msgs/Odometry): Ground truth odometry (only in the simulator and in testing mode; short-term accurate, long-term consistent, no discrete jumps, regular frequency ~10 Hz). The position estimate is used, the velocity estimate is not.
- `mobile_base/sensors/core` (type kobuki_msgs/SensorState): State of the Turtlebot2 real robot sensors. Only the bumper state is read.
- `bumper` (type gazebo_msgs/ContactsState): Bumper state from the simulator.

**Published topics**

- `fused_odom` (type nav_msgs/Odometry): The main output of the localization. The best estimate of the robot position in frame `map_frame` (parameter of the launch file).
- `fused_odom_viz` (type visualization_msgs/MarkerArray): Visualization helper for displaying the estimated (and ground truth) poses of the markers.
- `fused_odom_path` (type nav_msgs/Path): Helper output of the localization algorithm. It represents the whole path in `map_frame` from the instant the localization node was started. Publishing the whole path is usually inefficient, but it is convenient for debugging and evaluation.
- `relative_marker_pose` (type geometry_msgs/PoseStamped): The estimated pose of the relative marker.

**Published transforms**

- `map_frame` (node parameter) → `odom_frame` (node parameter): Best estimate of the robot pose from the fused localization algorithm. Short-term accurate, long-term consistent (if markers are observed regularly), discrete jumps, irregular frequency 10 Hz to 0.01 Hz.
- `map_frame_fast` → `body_frame`: Debug/helper frame that is a composition of the latest `map_frame` → `odom_frame` and the latest `odom_frame` → `body_frame`. Short-term accurate, long-term inconsistent, discrete jumps, regular frequency ~20 Hz. This frame is published by the separate node `fused_map_fast_publisher`.

**Required transforms**

- marker camera frame → `body_frame` (node parameter): Calibrated static transform between the marker detection camera's optical frame and the body frame of the robot.
- `odom_frame` (node parameter) → `body_frame` (node parameter): Wheel-IMU odometry.

courses/aro/tutorials/homework04.txt · Last modified: 2024/04/11 13:39 by peckama2