3.5. Vulcanexus Cloud and Kubernetes
3.5.1. Background
This walk-through tutorial sets up both a Kubernetes (K8s) network and a local environment in order to establish communication between a pair of ROS nodes, one sending messages from a LAN (talker) and another receiving them in the Cloud (listener). Cloud environments such as container-oriented platforms can be connected using eProsima DDS Router; thus, by launching a DDS Router instance on each side, communication can be established.
3.5.2. Prerequisites
Ensure that the Vulcanexus installation includes the Cloud and ROS 2 demo nodes packages (the vulcanexus-iron-desktop distribution is suggested).
Also, remember to source the environment in every terminal in this tutorial.
source /opt/vulcanexus/iron/setup.bash
Warning
A basic understanding of Kubernetes is required to fully follow this tutorial.
3.5.3. Local setup
The local instance of DDS Router (local router) only requires a Simple Participant and a WAN Participant that plays the client role in the discovery process of remote participants (see the Initial Peers discovery mechanism).
After the participants have acknowledged each other's existence through the Simple DDS discovery mechanism (multicast communication), the local participant starts receiving messages published by the ROS 2 talker node and forwards them to the WAN participant. These messages are then sent to another participant hosted on a K8s cluster, to which it connects via WAN communication over UDP/IP. The following figure represents the scenario described above:
3.5.3.1. Local router
The configuration file used by the local router will be the following:
# local-ddsrouter.yaml
version: v3.1
allowlist:
  - name: "rt/chatter"
    type: "std_msgs::msg::dds_::String_"
participants:
  - name: SimpleParticipant
    kind: local
    domain: 0
  - name: LocalWAN
    kind: wan
    listening-addresses: # Needed for UDP communication
      - ip: 3.3.3.3 # LAN public IP
        port: 30003
        transport: udp
    connection-addresses:
      - ip: 2.2.2.2 # Public IP exposed by the k8s cluster to reach the cloud DDS-Router
        port: 30002
        transport: udp
Please copy the previous configuration snippet and save it to a file named local-ddsrouter.yaml in your current working directory.
Note that the simple participant will be receiving messages sent in DDS domain 0.
Also note that, due to the choice of UDP as the transport protocol, a listening address with the LAN public IP address must be specified for the local WAN participant, even though it behaves as a client in the participant discovery process.
Make sure that the given port is reachable from outside this local network by properly configuring port forwarding in your Internet router device.
The connection address points to the remote WAN participant deployed in the K8s cluster.
For further details on how to configure WAN communication, please have a look at WAN Configuration.
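Note that the allowlist uses DDS-level names rather than ROS 2 names: the ROS 2 topic /chatter of type std_msgs/msg/String appears at the DDS level as rt/chatter with type std_msgs::msg::dds_::String_. The helper functions below are a hypothetical sketch of that mapping (they are not part of DDS Router or Vulcanexus):

```shell
#!/bin/bash
# Hypothetical helpers sketching how ROS 2 names map to the DDS-level
# names used in the DDS Router allowlist.

# ROS 2 topics are prefixed with 'rt' at the DDS level.
ros2_to_dds_topic() { echo "rt${1}"; }

# ROS 2 types gain a 'dds_' namespace segment and a trailing underscore,
# e.g. std_msgs/msg/String -> std_msgs::msg::dds_::String_
ros2_to_dds_type() {
  local pkg kind name
  IFS='/' read -r pkg kind name <<< "$1"
  echo "${pkg}::${kind}::dds_::${name}_"
}

ros2_to_dds_topic "/chatter"            # rt/chatter
ros2_to_dds_type "std_msgs/msg/String"  # std_msgs::msg::dds_::String_
```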
Note
As an alternative, TCP may be used as the transport protocol instead of UDP. This has the advantage of not requiring a listening address in the local router's WAN participant (TCP client), so there is no need to fiddle with the port-forwarding configuration of your Internet router device.
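As a hedged sketch of what the note above suggests, the TCP variant of the local router configuration could look as follows (same placeholder IPs and ports as before; the remote router would need a matching TCP listening address):

```yaml
# Hypothetical TCP variant of local-ddsrouter.yaml: the WAN participant acts
# as a pure TCP client, so no listening address (and no port forwarding) is needed.
version: v3.1
allowlist:
  - name: "rt/chatter"
    type: "std_msgs::msg::dds_::String_"
participants:
  - name: SimpleParticipant
    kind: local
    domain: 0
  - name: LocalWAN
    kind: wan
    connection-addresses:
      - ip: 2.2.2.2   # Public IP exposed by the k8s cluster
        port: 30002
        transport: tcp
```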
To launch the local router, execute the following command (remember to source the Vulcanexus environment):
ddsrouter --config-path local-ddsrouter.yaml
3.5.3.2. Talker
In another terminal, run the following command in order to start the ROS 2 node that publishes messages in DDS domain 0 (remember to source the Vulcanexus environment):
ros2 run demo_nodes_cpp talker
3.5.4. Kubernetes setup
Two different deployments are required to receive the talker messages in the Cloud, each in a different K8s pod. The first one is a DDS Router cloud instance (cloud router), which consists of two participants:
A WAN Participant that receives the messages coming from our LAN through the aforementioned UDP communication channel.
A Local Discovery Server (local DS) that propagates them to a ROS 2 listener node hosted in a different K8s pod.
Note
The choice of a Local Discovery Server instead of a Simple Participant to communicate with the listener has to do with the difficulty of enabling multicast routing in cloud environments.
The other deployment is the ROS 2 listener node. This node has to be launched as a Client to the local DS running on the first deployment.
The described scheme is represented in the following figure:
In addition to the two deployments mentioned above, two K8s services are required in order to direct dataflow to each of the pods: a LoadBalancer forwards messages reaching the cluster to the WAN participant of the cloud router, and a ClusterIP service delivers messages from the local DS to the listener pod. The following settings are needed to launch these services in K8s:
kind: Service
apiVersion: v1
metadata:
  name: ddsrouter
  labels:
    app: ddsrouter
spec:
  ports:
    - name: UDP-30002
      protocol: UDP
      port: 30002
      targetPort: 30002
  selector:
    app: ddsrouter
  type: LoadBalancer
kind: Service
apiVersion: v1
metadata:
  name: local-ddsrouter
spec:
  ports:
    - name: UDP-30001
      protocol: UDP
      port: 30001
      targetPort: 30001
  selector:
    app: ddsrouter
  clusterIP: 192.168.1.11 # Private IP only reachable within the k8s cluster to communicate with the ddsrouter application
  type: ClusterIP
Note
An Ingress needs to be configured for the LoadBalancer service to make it externally reachable. In this example we consider the assigned public IP address to be 2.2.2.2.
The configuration file used for the cloud router will be provided by setting up a ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: ddsrouter-config
data:
  ddsrouter.config.file: |-
    version: v3.1
    allowlist:
      - name: rt/chatter
        type: std_msgs::msg::dds_::String_
    participants:
      - name: LocalDiscoveryServer
        kind: local-discovery-server
        discovery-server-guid:
          ros-discovery-server: true
          id: 1
        listening-addresses:
          - ip: 192.168.1.11 # Private IP only reachable within the k8s cluster to communicate with the ddsrouter application
            port: 30001
            transport: udp
      - name: CloudWAN
        kind: wan
        listening-addresses:
          - ip: 2.2.2.2 # Public IP exposed by the k8s cluster to reach the cloud DDS-Router
            port: 30002
            transport: udp
The following figure represents the overall K8s cluster configuration:
3.5.4.1. DDS-Router deployment
The cloud router is launched from within a Vulcanexus Cloud Docker image (which can be downloaded from the Vulcanexus webpage), using as its configuration file the one hosted in the previously set up ConfigMap.
Assuming the name of the generated Docker image is ubuntu-vulcanexus-cloud:iron, the cloud router can then be deployed with the following settings:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ddsrouter
  labels:
    app: ddsrouter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ddsrouter
  template:
    metadata:
      labels:
        app: ddsrouter
    spec:
      volumes:
        - name: config
          configMap:
            name: ddsrouter-config
            items:
              - key: ddsrouter.config.file
                path: DDSROUTER_CONFIGURATION.yaml
      containers:
        - name: ubuntu-vulcanexus-cloud
          image: ubuntu-vulcanexus-cloud:iron
          ports:
            - containerPort: 30001
              protocol: UDP
            - containerPort: 30002
              protocol: UDP
          volumeMounts:
            - name: config
              mountPath: /tmp
          args: ["-r", "ddsrouter -r 10 -c /tmp/DDSROUTER_CONFIGURATION.yaml"]
      restartPolicy: Always
3.5.4.2. Listener deployment
Since the ROS 2 demo nodes package is not installed by default in Vulcanexus Cloud, a new Docker image adding this functionality must be generated. In addition, the IP address and port of the local Discovery Server must be specified, so a custom entrypoint is also provided.
Copy the following snippet and save it to the current directory as Dockerfile:
FROM ubuntu-vulcanexus-cloud:iron

# Install demo-nodes-cpp
RUN source /opt/vulcanexus/iron/setup.bash && \
    apt update && \
    apt install -y ros-iron-demo-nodes-cpp

COPY ./run.bash /
RUN chmod +x /run.bash

# Setup entrypoint
ENTRYPOINT ["/run.bash"]
Copy the following snippet and save it to the current directory as run.bash:
#!/bin/bash

if [[ $1 == "listener" ]]
then
    NODE="listener"
else
    NODE="talker"
fi

SERVER_IP=$2
SERVER_PORT=$3

# Setup environment
source "/opt/vulcanexus/iron/setup.bash"

echo "Starting ${NODE} as client of Discovery Server ${SERVER_IP}:${SERVER_PORT}"
ROS_DISCOVERY_SERVER=";${SERVER_IP}:${SERVER_PORT}" ros2 run demo_nodes_cpp ${NODE}
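The leading semicolon in ROS_DISCOVERY_SERVER is easy to miss: the variable holds a semicolon-separated list indexed by Discovery Server id, so an empty first field skips id 0 and registers the address under id 1, matching the id: 1 configured for the cloud router's Discovery Server. A quick sketch of the resulting value:

```shell
#!/bin/bash
# Sketch: build the ROS_DISCOVERY_SERVER value used by run.bash.
# The list is ';'-separated and indexed by server id; the empty first
# field means "no server with id 0", so this entry is taken as id 1.
SERVER_IP=192.168.1.11
SERVER_PORT=30001
export ROS_DISCOVERY_SERVER=";${SERVER_IP}:${SERVER_PORT}"
echo "${ROS_DISCOVERY_SERVER}"  # ;192.168.1.11:30001
```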
Build the Docker image by running the following command (note the trailing dot, which sets the build context to the current directory):
docker build -t vulcanexus-cloud-demo-nodes:iron -f Dockerfile .
Now, the listener pod can be deployed by providing the following configuration:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ros2-iron-listener
  labels:
    app: ros2-iron-listener
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ros2-iron-listener
  template:
    metadata:
      labels:
        app: ros2-iron-listener
    spec:
      containers:
        - name: vulcanexus-cloud-demo-nodes
          image: vulcanexus-cloud-demo-nodes:iron
          args:
            - listener
            - 192.168.1.11
            - '30001'
      restartPolicy: Always
Once all these components are up and running, communication between the talker and listener nodes is established, and the messages finally reach the listener pod and get printed to its STDOUT (viewable, for example, with kubectl logs).
Feel free to interchange the locations of the ROS nodes by slightly modifying the provided configuration files, hosting the talker in the K8s cluster while the listener runs in the LAN.
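For instance, swapping the roles might look as follows (hedged sketch: since DDS Router forwards traffic in both directions, only the container args of the cloud deployment and the locally run node need to change):

```yaml
# Hypothetical variation of the listener deployment: run the talker in the cluster.
# Only the container args change; the rest of the Deployment stays the same.
args:
  - talker
  - 192.168.1.11
  - '30001'
```

Locally, you would then run ros2 run demo_nodes_cpp listener instead of the talker.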