How VMs get access to the metadata in Neutron
The metadata for a VM is delivered by the Nova metadata service. Nova needs to know the instance ID of a VM to be able to deliver its metadata. Neutron has a special agent (the metadata agent) whose job is to add this information to the HTTP headers of the VM's metadata request. Let's see in more detail how it works…
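To give an idea of where we're heading: by the time the request reaches Nova, the Neutron metadata agent has decorated it with the headers Nova needs. The header names below match the upstream code; the values here are made up for illustration (the tenant ID is borrowed from the port shown later in this post).

GET /latest/meta-data/ HTTP/1.1
Host: 169.254.169.254
X-Forwarded-For: 11.10.10.5
X-Tenant-ID: df3187034bcd49a18659c30584d8767a
X-Instance-ID: <instance UUID looked up by the metadata agent>
X-Instance-ID-Signature: <HMAC of the instance ID, keyed with metadata_proxy_shared_secret>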
There are two possible configurations:
- Routed networks
- Isolated networks
Routed networks
In this case the VM is on a network that is connected to a router. A router is implemented in Neutron as a network namespace. A specific agent handles routers in Neutron: the L3 agent. In routed network mode the L3 agent is also in charge of spawning the metadata proxy. As the name says, the metadata proxy is just a proxy that forwards requests to the metadata agent over a Unix domain socket. When a VM sends a metadata request, the request reaches the router, since it's the VM's default gateway. In the router namespace there's an iptables rule that redirects traffic destined for the metadata server to local port 9697.
vagrant@vagrant:~$ sudo ip netns exec qrouter-5c41b22f-a874-4689-8b93-e82640541929 iptables -t nat -L | grep redir
REDIRECT tcp -- anywhere 169.254.169.254 tcp dpt:http redir ports 9697
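The L3 agent installs this rule when it spawns the proxy; it is roughly equivalent to running the following inside the router namespace (a sketch reconstructed from the output above, not the agent's literal code):

sudo ip netns exec qrouter-5c41b22f-a874-4689-8b93-e82640541929 \
    iptables -t nat -A PREROUTING -d 169.254.169.254/32 \
    -p tcp --dport 80 -j REDIRECT --to-ports 9697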
The metadata proxy is listening on this port.
vagrant@vagrant:~$ sudo ip netns exec qrouter-5c41b22f-a874-4689-8b93-e82640541929 netstat -atp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State   PID/Program name
tcp        0      0 *:9697        *:*             LISTEN  7753/python
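To double-check which router this proxy serves, you can inspect the process's command line; in routed mode the proxy is started with a --router_id flag matching the router (the isolated-network variant uses --network_id instead, as we'll see later):

vagrant@vagrant:~$ ps -fp 7753 | grep -o 'router_id=[^ ]*'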
When the proxy receives a packet it knows 1) the IP of the VM that is sending the request and 2) the ID of the router connected to the network the VM is on, since there's one proxy for every router. It adds this information (the IP of the VM and the router ID) to the HTTP headers and forwards the request to the metadata agent. The metadata agent uses the router ID to list all the networks connected to that router and identifies the one the VM belongs to. Then it queries the Neutron server to get the instance ID of the VM, using the IP and the network ID as filters. It adds the instance ID to the HTTP request and forwards the request to Nova. Yippee!
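Here is a minimal sketch of what the proxy does, in plain Python (not Neutron's actual code; the socket path is the devstack default, and the header names follow the upstream proxy):

import socket

# Devstack default; see the proxy command line in the debugging section below.
METADATA_SOCKET = '/opt/stack/data/neutron/metadata_proxy'

def forward_to_metadata_agent(vm_ip, router_id, path='/latest/meta-data/'):
    # Build a plain HTTP request carrying the two extra headers the proxy adds.
    request = ('GET %s HTTP/1.0\r\n'
               'Host: 169.254.169.254\r\n'
               'X-Forwarded-For: %s\r\n'       # IP of the VM making the request
               'X-Neutron-Router-ID: %s\r\n'   # one proxy per router, so this is known
               '\r\n') % (path, vm_ip, router_id)
    # Forward it to the metadata agent over the Unix domain socket.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(METADATA_SOCKET)
    sock.sendall(request.encode())
    chunks = []
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)
    sock.close()
    return b''.join(chunks)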
Isolated networks
When a network is not connected to a router, how can a VM get its metadata? Well, there's a flag that you can set in the dhcp agent config file: enable_isolated_metadata. If it's set to True, the dhcp agent will do some magic. Let's see the details.
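For reference, the flag goes in the dhcp agent configuration file (the path below is the usual default):

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
enable_isolated_metadata = True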
The dhcp agent is the one in charge of DHCP. DHCP is not only about assigning IP addresses; there are other options. For example, option 121 sets classless static routes. That's exactly what the dhcp agent uses to tell the VM that the next hop to reach the metadata server is the IP of the dhcp port (we'll peek at the resulting dnsmasq configuration after the port details below). If you set enable_isolated_metadata to True and you ssh into the VM you'll see:
$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 11.10.10.1 0.0.0.0 UG 0 0 0 eth0
11.10.10.0 * 255.255.255.0 U 0 0 0 eth0
169.254.169.254 11.10.10.3 255.255.255.255 UGH 0 0 0 eth0
where 11.10.10.3 in my case is the IP of the dhcp port:
vagrant@vagrant:~$ neutron port-show eb0cd637-0a3a-40a2-90ac-064ef2bca05d
+-----------------------+-----------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-----------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:host_id | vagrant-ubuntu-trusty-64.localdomain |
| binding:profile | {} |
| binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} |
| binding:vif_type | ovs |
| binding:vnic_type | normal |
| device_id | dhcpd439385c-2745-50dd-91dd-8a252bf35915-7fb0c0f4-7ed1-4e4f-8683-ec187a396c51 |
| device_owner | network:dhcp |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "6bad6b4a-23fc-4864-a6d0-668aab7d9486", "ip_address": "11.10.10.3"} |
| id | eb0cd637-0a3a-40a2-90ac-064ef2bca05d |
| mac_address | fa:16:3e:c1:62:ab |
| name | |
| network_id | 7fb0c0f4-7ed1-4e4f-8683-ec187a396c51 |
| security_groups | |
| status | ACTIVE |
| tenant_id | df3187034bcd49a18659c30584d8767a |
+-----------------------+-----------------------------------------------------------------------------------+
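Under the hood the dhcp agent drives dnsmasq, and it pushes option 121 through dnsmasq's opts file. On a devstack host the file lives under the Neutron state path and contains a line along these lines (the exact tag and path may differ per deployment; the network ID is the one from the port above):

$ cat /opt/stack/data/neutron/dhcp/7fb0c0f4-7ed1-4e4f-8683-ec187a396c51/opts
tag:tag0,option:classless-static-route,169.254.169.254/32,11.10.10.3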
So the VM sends the packet with the metadata request to the dhcp namespace (that's where the dhcp port is). In this namespace the dhcp agent has spawned a metadata proxy that is listening on port 80.
vagrant@vagrant:~$ sudo ip netns exec qdhcp-7fb0c0f4-7ed1-4e4f-8683-ec187a396c51 netstat -atp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 *:http *:* LISTEN 18968/python
The proxy knows the network ID and the IP of the VM (it's in the request). It adds this info to the HTTP request and forwards it to the metadata agent. As before, the metadata agent gets the instance ID of the VM from the Neutron server using the network ID and the IP as filters. It adds the instance ID to the request and forwards it to Nova. Yippee!
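The lookup the metadata agent performs boils down to a filtered port list. Here is a simplified sketch using python-neutronclient (the real logic lives in neutron/agent/metadata/agent.py and also covers the router-ID case; client setup and error handling are omitted):

from neutronclient.v2_0 import client  # noqa: used to build `neutron` below

def get_instance_id(neutron, network_id, vm_ip):
    # For a compute port, device_id holds the instance UUID, just like the
    # device_owner=network:dhcp port above holds the dhcp device id.
    ports = neutron.list_ports(
        network_id=network_id,
        fixed_ips='ip_address=%s' % vm_ip)['ports']
    return ports[0]['device_id'] if ports else None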
Debugging!
If you use devstack, you can just join the screen session
screen -x
and you will see the metadata server in the window named q-meta. You can set breakpoints directly in the code using pdb.
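For example, drop this line wherever you want execution to stop; the next metadata request will then give you an interactive debugger in the q-meta window:

import pdb; pdb.set_trace()  # execution pauses here on the next request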
It’s a bit trickier to debug the metadata proxy, since it’s not in the screen session. Here is what I do. In _get_metadata_proxy_callback, add the '--nodaemonize' flag to the command line. You can also specify the log directory if you want to access the logs: '--log-dir=/opt/stack/logs'. To make the dhcp agent restart the proxy do:
neutron dhcp-agent-network-remove <dhcp_agent_id> <net_id>
neutron dhcp-agent-network-add <dhcp_agent_id> <net_id>
Now you can do 'ps aux | grep metadata' and copy the command line used to spawn the proxy:
sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec qdhcp-284eaa7e-082b-4e60-9e4f-3150647d4fdd neutron-ns-metadata-proxy --pid_file=/opt/stack/data/neutron/external/pids/284eaa7e-082b-4e60-9e4f-3150647d4fdd.pid --metadata_proxy_socket=/opt/stack/data/neutron/metadata_proxy --network_id=284eaa7e-082b-4e60-9e4f-3150647d4fdd --state_path=/opt/stack/data/neutron --metadata_port=80 --log-dir=/opt/stack/logs --nodaemonize --debug --verbose
Kill the current proxy process. Create a new window in the screen session that you can name q-meta-proxy. Just paste the command line you copied there to start the proxy. Now you can modify the source code and debug the metadata proxy directly. Don't forget that you need to restart the proxy every time you modify the code. To make the VM send a metadata request you can just ssh into the VM and do:
curl http://169.254.169.254/latest/meta-data/