This tutorial gives insight into micro-ROS benchmarking on several topics:
Memory profiling: Static, dynamic and stack.
Data throughput on different transports.
Latency between micro-ROS Client and Agent.
6.4.1. Memory Profiling¶
This section covers micro-ROS memory usage on the most basic entity types. The tests in this section have been performed using the provided UDP transport with FreeRTOS + TCP as the network stack.
6.4.1.1. Profiling methodology¶
The memory profile has been performed with the following configuration:
Reliable entities with a fixed topic size.
UDP transport (FreeRTOS + TCP).
Transport MTU: 512 B.
Micro XRCE-DDS Client history: 4 slots.
RMW History: 4 slots (Except for RMW History section).
For more information on the middleware configuration, check the Memory management tutorial.
There are no differences in memory usage between different topic sizes or reliability kinds: the topic plus the reliability and/or middleware overhead must fit in the static buffers pre-allocated by the program at compile time, whose size is defined by the history configuration.
In general, the topic size will only affect data throughput as it is directly related to the size of the messages exchanged by the middleware.
The different types of memory have been measured as follows:
Static memory: The static memory has been calculated as the difference between the memory occupied by the .bss and .data sections with a non-zero number of entities, and the memory occupied by the same sections when no micro-ROS application is running, that is, the memory occupied by the rest of the components of the RTOS and libraries.
Stack memory: The stack consumed during program execution is measured by means of a FreeRTOS-specific function involved in the memory management capabilities offered by this RTOS, uxTaskGetStackHighWaterMark(). This function returns the amount of stack that remains unused when stack consumption is at its greatest. By subtracting this figure from the total available stack, which is known, one can obtain the stack peak used by the app.
Dynamic memory: This is the memory dynamically allocated by the program through calls to malloc() in the C language. These calls have been overridden with custom memory allocators to measure the total requested memory.
6.4.1.2. Pub/Sub applications¶
Publishers and subscribers have been tested varying the RMW_UXRCE_MAX_PUBLISHERS configuration between 1, 5 and 10. The entities are then initialized and used as usual in a publisher/subscriber application.
Notice that each of these entities has its own associated topic; from this it is concluded that the number of topics used does not impact memory usage.
The total memory (static plus stack plus dynamic) occupied is summarized in the plots below:
From this data, it is concluded that a publisher takes a total of ~ 550 B while a subscriber uses ~ 600 B. There is virtually no difference between these two entities, as the memory pools of the micro-ROS RMW are shared among all the entities participating in a given application.
To get a better understanding of the memory usage, the same data is provided but broken down into the different memory types used:
This shows that both the static and the dynamic memories change with the entity number, while the stack usage stays constant.
6.4.1.3. Service/Client applications¶
The same approach is used to measure service and client applications for an example_interfaces/srv/AddTwoInts service kind.
Notice that this time the total memory is shown alongside its individual types:
As in the previous section, the memory used is almost identical for both entity kinds, at ~ 500 B each. Note that it is also virtually identical to the memory used by a publisher or subscriber application.
6.4.1.4. RMW History¶
As explained before, the topic memory comes from the RMW history, which is formed by static memory pools defined at compile time.
For a varying RMW_UXRCE_MAX_HISTORY between 1 and 10:
As expected, the static memory used by each history slot equals MTU * RMW_UXRCE_STREAM_HISTORY, which for this scenario is 512 * 4 = 2048 B. For more details on the middleware memory usage, check the Memory management tutorial.
6.4.2. Data throughput¶
In this section, data throughput is measured for different transports and topic sizes. To perform this test, a simple best-effort publisher micro-ROS application sends variable-size std_msgs/msg/String messages for 5 seconds.
The transports are divided based on their framing configuration. More details can be found in the Custom Transports tutorial.
6.4.2.1. Stream-oriented transports¶
The tested stream-oriented transports and their configurations are:
USB-CDC: 115200 baud.
Serial UART: 115200 baud.
TCP (AWS Secure Sockets) based on the Wi-Fi PMOD expansion board.
PMOD: 460800 baud.
As expected, USB shows the highest throughput, as it has the highest bandwidth, followed by TCP over Wi-Fi and Serial. Throughput also improves notably as the payload increases, since the relative overhead added by the HDLC framing protocol decreases.
6.4.2.2. Packet-oriented transports¶
As for packet-oriented transports, the following have been tested:
CAN-FD using a PCAN-USB FD adapter.
Nominal rate: 0.5 Mbps
Data rate: 2 Mbps
UDP (FreeRTOS + TCP) over cable.
UDP (ThreadX + NetX) over cable.
This data shows how much micro-ROS data throughput depends on the RTOS and network stack used, as there is a clear difference between UDP on FreeRTOS + TCP and on ThreadX + NetX. It is also clear that throughput in this case scales linearly with the topic size, avoiding the framing-related performance differences seen in the previous section.
As the CAN-FD protocol has a maximum payload of 64 bytes, the topic size used has been adjusted accordingly to the available RMW History parameter.
6.4.3. Latency¶
Latency and round trip time (RTT) have been measured with a pub/sub application where timestamps are exchanged between the Client and the Agent.
To calculate the results, the timestamp of the board is synchronized with the Agent using the time synchronization API.
The measured timestamps are:
Client publish time
Agent publish time
The tested transports are:
UDP (ThreadX + NetX)
UDP (FreeRTOS + TCP)
TCP (PMOD WiFi)
As expected, the latency and RTT are directly related to the latency and throughput of the transport.