
About SLAM with LiDAR and ROS

What is ROS?

ROS (Robot Operating System) is a framework that facilitates the development of robot systems by making use of a large body of existing software. It was developed to promote software reuse: it standardizes how hardware is controlled and how data is represented, so that devices from different manufacturers and different methods can be handled in the same way.


Although it is called an operating system, ROS is not an OS in the usual sense of Windows or Linux; it is a collection of software that runs on top of an OS (mainly Ubuntu).


ROS composition

ROS has the following structure to facilitate software reuse. These elements work together to realize robot software.


Element | Description | Example
Node | Software that performs a specific process, such as acquiring sensor data, moving the robot, or planning a route | urg_node, hokuyo3d, move_base, etc.
Message | Data exchanged between nodes | LaserScan, PointCloud, etc.
Topic | A named channel that mediates the exchange of messages between nodes | scan, hokuyo_cloud, cmd_vel, etc.
Master | The mediator that lets nodes find each other | roscore
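
To make these elements concrete, here is a minimal sketch of a ROS1 node written in Python with rospy. The node name "talker" and the topic name "chatter" are illustrative examples only, not part of any software described in this article.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS1 node (rospy).
# The node name "talker" and topic name "chatter" are illustrative only.
import rospy
from std_msgs.msg import String

def talker():
    # Register this process with the ROS master (roscore) as a node.
    rospy.init_node('talker')
    # Advertise a topic; other nodes can subscribe to it to receive the messages.
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from a ROS node'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```

A second node can receive these messages simply by subscribing to the same topic; the master only brokers the connection between the two nodes.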

Nodes that can be used with LiDAR

The following nodes can be used with our LiDAR sensors. With these nodes, the communication with the sensor and the interface to the application, which customers previously had to implement themselves, no longer need to be designed from scratch, so the LiDAR is easier to use and you can concentrate on developing the functions that really matter.


Sensor type | Available node | Output messages
2D sensor (SCIP) | urg_node (http://wiki.ros.org/urg_node) | LaserScan
3D sensor (VSSP) | hokuyo3d (http://wiki.ros.org/hokuyo3d) | PointCloud, PointCloud2, Imu
* These nodes were not developed by our company. It is also a strength of ROS that nodes developed by third parties are published so that many users can use them.
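
As an illustration of how an application consumes the data these nodes provide, the following sketch subscribes to the LaserScan messages that urg_node publishes (assumed here to be on the default scan topic) and prints the closest measured distance. It is an example of using the node's output, not part of the node itself.

```python
#!/usr/bin/env python
# Sketch: consume LaserScan data from urg_node.
# Assumes urg_node is running and publishing on the default "scan" topic.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # msg.ranges holds one distance [m] per measured angle step.
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo('closest obstacle: %.2f m', min(valid))

if __name__ == '__main__':
    rospy.init_node('scan_listener')
    rospy.Subscriber('scan', LaserScan, on_scan)
    rospy.spin()
```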

Robot developed for Nakanoshima Challenge

The Nakanoshima Challenge is an event that tests whether mobile robots can operate without problems in an environment where people actually come and go; it has been held in Nakanoshima, Osaka, since 2018.


Volunteer members from our company participated in the Nakanoshima Challenge 2019, so we will explain the software developed at that time as a concrete application example of ROS. The robot in the photo is the one that took part in the Nakanoshima Challenge 2019; it is called arno.


The robot itself is based on the i-Cart mini developed by the Intelligent Robot Laboratory at the University of Tsukuba.


The robot is equipped with two 2D sensors (URM-40LC-EW). It also carries two 3D sensors (YVT-35LX), but these were not used for control this time.


Software configuration

The nodes running on arno and the topics they exchange are configured as follows.
* In reality there are a few more topics, but the description here is simplified.


(1) First, the two urg_node instances (urg_node1 and urg_node2) acquire data from the two URM-40LC-EW sensors and publish LaserScan messages on the /urg_node1/scan and /urg_node2/scan topics. laserscan_multi_merger combines /urg_node1/scan and /urg_node2/scan into a single 360-degree scan, /scan, which is passed to amcl.
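
The actual merging is done by laserscan_multi_merger; the sketch below only illustrates the idea under a strong simplifying assumption, namely that the two sensors cover complementary halves of a full circle with the same angular resolution, so their range arrays can simply be concatenated. The topic names match the ones above.

```python
#!/usr/bin/env python
# Greatly simplified sketch of merging two LaserScan topics into one.
# Assumption for illustration only: the two sensors cover complementary halves
# of a full circle with the same angular resolution, so the range arrays can be
# concatenated. laserscan_multi_merger handles the general case (arbitrary
# mounting poses) properly.
import rospy
from sensor_msgs.msg import LaserScan
from message_filters import Subscriber, ApproximateTimeSynchronizer

pub = None

def merge(front, rear):
    merged = LaserScan()
    merged.header = front.header
    merged.angle_min = front.angle_min
    merged.angle_increment = front.angle_increment
    merged.range_min = min(front.range_min, rear.range_min)
    merged.range_max = max(front.range_max, rear.range_max)
    merged.ranges = list(front.ranges) + list(rear.ranges)
    merged.angle_max = merged.angle_min + merged.angle_increment * (len(merged.ranges) - 1)
    pub.publish(merged)

if __name__ == '__main__':
    rospy.init_node('simple_scan_merger')
    pub = rospy.Publisher('scan', LaserScan, queue_size=10)
    subs = [Subscriber('/urg_node1/scan', LaserScan),
            Subscriber('/urg_node2/scan', LaserScan)]
    # Pair up scans whose timestamps are within 0.1 s of each other.
    ApproximateTimeSynchronizer(subs, queue_size=10, slop=0.1).registerCallback(merge)
    rospy.spin()
```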


(2) amcl is a node that estimates the robot's own position using a particle filter. From the map obtained from map_server and the 360-degree scan data /scan obtained from laserscan_multi_merger, it estimates where the robot is on the map and publishes the coordinate information on the /tf topic.
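
As an example of how another node can read the pose that amcl publishes via /tf, the sketch below uses the standard ROS1 tf listener. The frame names "map" and "base_link" are the conventional defaults and are an assumption here; a particular robot may use different names.

```python
#!/usr/bin/env python
# Sketch: read the robot pose estimated by amcl from the tf tree.
# Frame names "map" and "base_link" are assumed conventional defaults.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('pose_reader')
    listener = tf.TransformListener()
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        try:
            # Translation is (x, y, z) in the map frame; rotation is a quaternion.
            trans, rot = listener.lookupTransform('map', 'base_link', rospy.Time(0))
            yaw = tf.transformations.euler_from_quaternion(rot)[2]
            rospy.loginfo('pose: x=%.2f y=%.2f yaw=%.2f rad', trans[0], trans[1], yaw)
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            pass  # transform not available yet
        rate.sleep()
```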


(3) move_base is the node that issues motion commands. It determines the motion from the self-position /tf estimated by amcl and the destination /goal obtained from arno_navigation, and publishes the motion command on the /cmd_vel topic.
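
A destination is typically handed to move_base as a navigation goal through actionlib; the sketch below shows the usual way of doing this from Python. The goal coordinates are placeholders, not an actual route used at the event.

```python
#!/usr/bin/env python
# Sketch: send a single navigation goal to move_base via actionlib.
# The goal coordinates below are placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == '__main__':
    rospy.init_node('send_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 2.0    # placeholder destination [m]
    goal.target_pose.pose.position.y = 1.0
    goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo('move_base finished with state %d', client.get_state())
```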


(4) icart_mini_driver_node is the node that actually drives the motors of the i-Cart mini; it drives them according to the /cmd_vel motion commands from move_base.
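
What a driver node does with /cmd_vel can be illustrated by the usual differential-drive conversion from a Twist message to left and right wheel speeds. The tread width below is an assumed illustrative value, and the "send to motors" step is only logged; the real icart_mini_driver_node talks to the actual motor controller.

```python
#!/usr/bin/env python
# Sketch: convert a /cmd_vel Twist into left/right wheel speeds for a
# differential-drive base. TREAD is an assumed value; the real driver node
# sends the results to the motor controller instead of logging them.
import rospy
from geometry_msgs.msg import Twist

TREAD = 0.36  # distance between left and right wheels [m] (assumed value)

def on_cmd_vel(msg):
    v = msg.linear.x   # forward velocity [m/s]
    w = msg.angular.z  # rotational velocity [rad/s]
    v_left = v - w * TREAD / 2.0
    v_right = v + w * TREAD / 2.0
    rospy.loginfo('wheel speeds: left=%.2f m/s right=%.2f m/s', v_left, v_right)

if __name__ == '__main__':
    rospy.init_node('cmd_vel_listener')
    rospy.Subscriber('cmd_vel', Twist, on_cmd_vel)
    rospy.spin()
```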


The robot is controlled in this way.


We also used gmapping to perform SLAM when generating the map. Of these nodes, only arno_navigation was created from scratch; the others required nothing more than parameter tuning.


In this way, by using ROS, many parts of robot software can be created by combining existing nodes, and the development period can be expected to be significantly shortened.

About ROS2

Using ROS has various advantages, but in practice there are issues with security, hardware requirements, and real-time performance, so it has mainly been used for research, and its application in products has been limited.
To solve these problems, ROS2 is being developed so that it can be applied not only to research but also to products.


Below is a comparison table of current ROS and ROS2 features.


 | Current ROS | ROS2
Simultaneous use of robots | Supports only a single robot | Supports multiple robots
Computing resources | Only high-performance computers are supported | Embedded platforms are also supported
Real-time control | Must follow special practices | General intra- and inter-process communication
Network quality | Supports only high quality | Accepts losses and delays
Programming format | Maximum user freedom | Fixed format while retaining flexibility
Application | Only for research and academic purposes | Also for commercialization
Quoted from: Yutaka Kondo (2019), "Let's Start ROS2: Next-Generation Robot Programming", Gijutsu-Hyoronsha.


Hokuyo also provides ROS2 nodes for its LiDAR sensors.
For more information, see the ROS2 article on the URG Network site:
URG Network / Wiki / urg_node2 (sourceforge.net)
urg_node2 for ROS2 is available here:
https://github.com/UrgNetworks/urg_node2
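
For comparison with the ROS1 examples above, a minimal ROS2 node written with rclpy that subscribes to the LaserScan data published by urg_node2 might look like the sketch below. The topic name "scan" follows the usual default and is an assumption here.

```python
#!/usr/bin/env python3
# Minimal ROS2 (rclpy) sketch: subscribe to LaserScan data as published by
# urg_node2. The topic name "scan" is an assumed default.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanListener(Node):
    def __init__(self):
        super().__init__('scan_listener')
        self.create_subscription(LaserScan, 'scan', self.on_scan, 10)

    def on_scan(self, msg):
        valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        if valid:
            self.get_logger().info('closest obstacle: %.2f m' % min(valid))

def main():
    rclpy.init()
    rclpy.spin(ScanListener())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```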

Summary

By using ROS and ROS2, you can easily incorporate the latest robot technology. The widespread use of these tools is also expected to expand the use of robots and LiDAR in various fields. As a LiDAR manufacturer, we would like to continue making proposals in this area.