Using OpenCV with Gazebo in Robot Operating System (ROS) — Part 1 — Display a real-time video feed from a 2D camera in Gazebo
I hope you have everything set up. If you haven’t, follow Part — 0 of this series to set it up, then come back here.
Link to Part — 0 of this series
Link to GitHub repo
It contains all the scripts, models and world files that are used in this tutorial series.
Launching the Gazebo world
Before running the next command, make sure you have run git pull and sourced the setup.bash file in your {your_workspace_name}/devel/ folder.
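For reference, those two steps usually look something like this (the exact paths depend on where you cloned the repo and what you named your workspace):

$ git pull    # run inside the cloned repository
$ source ~/{your_workspace_name}/devel/setup.bash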
Then, to launch the Gazebo world, run this command in a terminal.
$ roslaunch pkg_cv_ros_tutorial_by_dhanuzch 1_world.launch
This should launch a Gazebo world that has a pine tree, a camera, and a TurtleBot. It should look something like this…
If you can see this, congrats! You have set everything up correctly so far.
Alternatively, if you want to launch the Gazebo world and run the script out of the box, run
$ roslaunch pkg_cv_ros_tutorial_by_dhanuzch 1_world_and_script.launch
If you want to know how the code works, continue reading…
The Code — Explained.
This code is available as a part of a package in my GitHub repo
To run this script, run this command in a terminal…
$ rosrun pkg_cv_ros_tutorial_by_dhanuzch camera_read.py
Importing the libraries
line 1
It’s called a shebang (or hashbang), and it tells your machine which interpreter to use. This is required in ROS because our script is run as an executable; without it, the script will not be executed properly.
lines 3 & 4
rospy allows us to interface with ROS from Python, and cv2 imports the OpenCV library, which we’ll use for image processing.
line 6
sensor_msgs is a ROS package that defines messages for the most commonly used sensors. Visit this link to check out the message definition.
line 7
cv_bridge is a package that converts ROS image messages to OpenCV images and vice versa. Keep reading to see how we use it.
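Putting those pieces together, the top of the script looks roughly like this (a sketch based on the descriptions above; whether the shebang points to python or python3 depends on your ROS distro, and the exact lines may differ slightly in the repo):

#!/usr/bin/env python3

import rospy
import cv2

from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError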
main() function and __init__ function
line 42
initializes a ROS node with the name camera_read. When anonymous = True, a random number is appended to the node name to make it unique. Since we only ever run one node with this name, anonymous is set to False.
line 43
calls the main function.
When the main function is called…
line 32
invokes the __init__ function of the class camera_1. This function creates an instance variable called image_sub, which subscribes to the /camera_1/image_raw topic. That topic carries messages of type Image, and every time a new message arrives, the callback() function is called.
line 35
keeps the ROS node from shutting down until it is stopped, and yields activity to other threads in the meantime.
line 39
destroys all the OpenCV windows that were created.
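Based on the descriptions above, the node setup looks roughly like this (a sketch, not a copy of the repo’s file; the class name camera_1 and the variable image_sub follow the text above, and the callback body is shown in the next section):

class camera_1:

    def __init__(self):
        # Subscribe to the camera topic; every new Image message triggers callback()
        self.image_sub = rospy.Subscriber("/camera_1/image_raw", Image, self.callback)

    def callback(self, data):
        pass  # explained in the next section

def main():
    camera_1()
    try:
        rospy.spin()          # keep the node alive and yield to other threads
    except KeyboardInterrupt:
        pass
    cv2.destroyAllWindows()   # destroy any OpenCV windows that were created

if __name__ == '__main__':
    rospy.init_node('camera_read', anonymous=False)
    main()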
callback() function
Finally we will look at the function that gets the job done.
line 18
converts the ROS image message to a cv2 image with bgr8 encoding.
For example, if you want to convert an RGB image to a grayscale image, you may use mono8 instead of bgr8. If you’re unsure, you may set the value to desired_encoding="passthrough", which keeps the encoding of the incoming message.
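For instance, a hypothetical grayscale conversion (not what the tutorial script does) would look like this:

gray_image = bridge.imgmsg_to_cv2(data, desired_encoding="mono8")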
line 24
the image output is resized to 360 x 640 pixels. If we try to display the camera output without downscaling, chances are it will be too big for the display and you won’t be able to see the full window.
As you can see in the image above, the bottom part of the output is cut off, so it is advisable to downscale the image.
line 27
the resized image is displayed, and the name of the window is set to "Camera output resized".
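Putting it all together, the callback looks roughly like this (a sketch assuming the CvBridge instance is created inside the callback; the repo’s version may create it in __init__ instead, and I’ve added the cv2.waitKey call that imshow needs in order to actually refresh the window):

def callback(self, data):
    bridge = CvBridge()
    try:
        # Convert the incoming ROS Image message to an OpenCV image
        cv_image = bridge.imgmsg_to_cv2(data, desired_encoding="bgr8")
    except CvBridgeError as e:
        rospy.logerr(e)
        return

    # Downscale to 360 x 640 so the whole frame fits on screen;
    # note that cv2.resize expects (width, height)
    resized_image = cv2.resize(cv_image, (640, 360))

    # Show the frame; waitKey gives OpenCV time to draw the window
    cv2.imshow("Camera output resized", resized_image)
    cv2.waitKey(3)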
In the final part of this series, we will scan a QR code placed in a Gazebo world.
Here’s the link to Part — 2 of this tutorial series…
Shoot your queries in the comments section! Bye :)