DevOops

Reputation: 955

How can I configure OpenStack Packstack Juno to work with an external network on CentOS 7

I first disabled both NetworkManager and SELinux on a CentOS 7 x86_64 minimal install.

I have followed the Red Hat instructions for deploying OpenStack with Packstack here: https://openstack.redhat.com/Running_an_instance_with_Neutron

After spinning up a Cirros instance, my floating ip is one matching the DHCP pool I set up; however, it's not assigned to eth0 by default.

I logged into the VM and configured eth0 to match the floating ip, but it is still unreachable, even after I set the default gateway with route.

The security group has ingress rules for TCP and ICMP on 0.0.0.0/0, so my understanding is that I should be able to reach the instance if it were configured correctly.
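For reference, the rules were created along these lines (the "default" security group here is an assumption; the group name may differ):

# Allow all ICMP and all inbound TCP from anywhere
# (assumes the instance uses the "default" security group)
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0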

I have also launched a CentOS 7 image, but I suspect it's hitting the same issue, because I can't connect to it either.

Can someone please let me know how I might debug this? I am using Neutron on this server and followed the instructions to a T.

My network is 192.168.1.0/24

# neutron net-show public
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | cfe5a8cc-1ece-4d63-85ea-6bd8803f2997 |
| name                      | public                               |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 10                                   |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 9b14aa61-eea9-43e0-b03c-7767adc4cd62 |
| tenant_id                 | 75505125ed474a3a8e904f6ea8638cf0     |
+---------------------------+--------------------------------------+
# neutron subnet-show public_subnet
+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "192.168.1.100", "end": "192.168.1.220"} |
| cidr              | 192.168.1.0/24                                     |
| dns_nameservers   |                                                    |
| enable_dhcp       | False                                              |
| gateway_ip        | 192.168.1.1                                        |
| host_routes       |                                                    |
| id                | 9b14aa61-eea9-43e0-b03c-7767adc4cd62               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | public_subnet                                      |
| network_id        | cfe5a8cc-1ece-4d63-85ea-6bd8803f2997               |
| tenant_id         | 75505125ed474a3a8e904f6ea8638cf0                   |
+-------------------+----------------------------------------------------+
# neutron router-show router1
+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                                                                                                     |
+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                                                                                                                      |
| distributed           | False                                                                                                                                                                                     |
| external_gateway_info | {"network_id": "cfe5a8cc-1ece-4d63-85ea-6bd8803f2997", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.100"}]} |
| ha                    | False                                                                                                                                                                                     |
| id                    | ce896a71-3d7a-4849-bf67-0e61f96740d9                                                                                                                                                      |
| name                  | router1                                                                                                                                                                                   |
| routes                |                                                                                                                                                                                           |
| status                | ACTIVE                                                                                                                                                                                    |
| tenant_id             |                                                                                                                                                                                           |
+-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
# neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 5dddaf6c-7aa3-4b59-943c-65c7f05f8597 |      | fa:16:3e:b0:8b:29 | {"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.101"} |
| 6ce2580c-4967-488b-a803-a0f9289fe096 |      | fa:16:3e:50:2f:de | {"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.100"} |
| 920a7b64-76c0-48a0-a682-5a0051271252 |      | fa:16:3e:85:33:9a | {"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.102"} |
| 9636c04a-c3b0-4dde-936d-4a9470c9fd53 |      | fa:16:3e:8b:f2:0b | {"subnet_id": "892beeef-0e1c-4b61-94ac-e8e94943d485", "ip_address": "10.0.0.2"}      |
| 982f6394-c188-4eab-87ea-954345ede0a3 |      | fa:16:3e:de:7e:dd | {"subnet_id": "9b14aa61-eea9-43e0-b03c-7767adc4cd62", "ip_address": "192.168.1.103"} |
| d88af8b3-bf39-4304-aeae-59cc39589ed9 |      | fa:16:3e:23:b8:c5 | {"subnet_id": "892beeef-0e1c-4b61-94ac-e8e94943d485", "ip_address": "10.0.0.1"}      |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+

I can ping the gateway created by Neutron (192.168.1.100) from my local network:

# ping 192.168.1.100
PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=0.437 ms
64 bytes from 192.168.1.100: icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from 192.168.1.100: icmp_seq=3 ttl=64 time=0.063 ms

However, I can't ping this gateway from inside a guest VM after configuring its network manually.

Using ovs-vsctl, I see that the bridges are there and that br-ex has its external port set correctly on my second NIC:

[root@server neutron]# ovs-vsctl list-br
br-ex
br-int
br-tun
[root@server neutron]# ovs-vsctl list-ports br-ex
enp6s0f1

Upvotes: 0

Views: 1629

Answers (1)

larsks

Reputation: 311606

First, you may have a misconception about how this is supposed to work:

After spinning up a Cirros instance, my floating ip is one matching the DHCP pool I set up; however, it's not assigned to eth0 by default.

Floating ip addresses are never assigned directly to your instances. Your instance receives an address from an internal network (one that you have created with neutron net-create, neutron subnet-create, etc.).
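For example, an internal network of this sort could be created like this (the names and 10.0.0.0/24 CIDR are illustrative; a default packstack run creates an equivalent private network for you):

# Create an internal network and subnet, then attach it to a router
$ neutron net-create private
$ neutron subnet-create --name private_subnet private 10.0.0.0/24
$ neutron router-interface-add router1 private_subnet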

When you associate a floating ip with an instance:

nova floating-ip-create <external_network_name>
nova floating-ip-associate <instance_name_or_id> <ip_address>

This address is configured inside a neutron router, and neutron creates NAT rules that will map this address to the internal address of your instance.
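You can see these NAT rules by inspecting the iptables nat table inside the router's namespace. A sketch, using the router id from your neutron router-show output and one of the floating addresses from your port list:

# List the DNAT/SNAT rules that map a floating ip to an instance address
# ip netns exec qrouter-ce896a71-3d7a-4849-bf67-0e61f96740d9 iptables -t nat -S | grep 192.168.1.102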

With this in mind:

A default packstack-mediated setup will result in two networks configured in neutron, named public and private. Instances should be attached to the private network; if you are using credentials for the admin user, you will need to make this explicit by passing --nic net-id=nnnnn to the nova boot command, as in the sketch below.
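For example (the image and flavor names are placeholders; substitute the id of your private network from neutron net-list):

# Boot an instance explicitly attached to the private network
$ neutron net-list
$ nova boot --flavor m1.tiny --image cirros \
    --nic net-id=nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn test0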

If you launch an instance as the demo user, this will happen automatically, because the private network is owned by the demo tenant and is the only non-external network visible to that tenant.

Your instance should receive an ip address from the private network, which for a default packstack configuration will be the 10.0.0.0/24 network.

If your instance DOES NOT receive an ip address

There is a configuration problem somewhere that is preventing dhcp requests originating from your instance from reaching the dhcp server for your private network, which is running on your controller in a network namespace named qdhcp-nnnn, where nnnn is the UUID of the private network. Applying tcpdump at various points along the path from the instance to the dhcp namespace is a good way to diagnose things at this point.
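For example, to check whether requests are reaching the dhcp server at all (a sketch; the tap device name inside the namespace comes from running ip addr there):

# Find the dhcp namespace, then watch for DHCP traffic (ports 67/68)
# ip netns
# ip netns exec qdhcp-nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn ip addr
# ip netns exec qdhcp-nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn tcpdump -n -i tapXXXXXXXX port 67 or port 68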

This article (disclaimer: I am the author) goes into detail about how the various components are connected in a Neutron environment. It's a little long in the tooth (e.g., it does not cover newer features like DVR or HA routers), but it's still a good overview of what connects to what.

If your instance DOES receive an ip address

If your instance does receive an ip address from the private network, then you'll need to focus your attention on the configuration of your neutron router and your external network.

A neutron router is realized as a network namespace named qrouter-nnnn, where nnnn is the UUID of the associated neutron router. You can inspect this namespace by using the ip netns command. For example, given:

$ neutron router-list
+--------------------------------------+------------+...
| id                                   | name       |...
+--------------------------------------+------------+...
| 92a5e69a-8dcf-400a-a2c2-46c775aee06b | router-nat |...
+--------------------------------------+------------+...

You can run:

# ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b ip addr

And see the interface configuration for the router:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
10: qr-416ca0b2-c8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether fa:16:3e:54:51:50 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-416ca0b2-c8
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe54:5150/64 scope link 
       valid_lft forever preferred_lft forever
13: qg-2cad0370-bb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether fa:16:3e:f8:f4:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.10/24 brd 192.168.200.255 scope global qg-2cad0370-bb
       valid_lft forever preferred_lft forever
    inet 192.168.200.2/32 brd 192.168.200.2 scope global qg-2cad0370-bb
       valid_lft forever preferred_lft forever
    inet 192.168.200.202/32 brd 192.168.200.202 scope global qg-2cad0370-bb
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef8:f4c4/64 scope link 
       valid_lft forever preferred_lft forever

You can also use ip netns to do things like ping from inside the router namespace to verify connectivity to external addresses. This is a good place to start, actually -- ensure that you have functional outbound connectivity from within the router namespace before you start trying to test things from your Nova instances.
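For example (using the router id from above; 8.8.8.8 is just a convenient external address):

# Verify outbound connectivity from inside the router namespace
# ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b ping -c 3 8.8.8.8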

You should see one or more addresses on the qg-nnnn interface that are within the CIDR range of your floating ip network. If you run ip route inside the namespace:

# ip netns exec qrouter-92a5e69a-8dcf-400a-a2c2-46c775aee06b ip route
default via 192.168.200.1 dev qg-2cad0370-bb 
10.0.0.0/24 dev qr-416ca0b2-c8  proto kernel  scope link  src 10.0.0.1 
192.168.200.0/24 dev qg-2cad0370-bb  proto kernel  scope link  src 192.168.200.10 

You should see a default route using the gateway address appropriate for your floating ip network.
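If the qg- interface or the default route is missing, the router may not have an external gateway set. One thing to try (a sketch using the router and network names from your environment):

# (Re)attach the router to the external network; this allocates the
# qg- port and installs the default route inside the router namespace
# neutron router-gateway-set router1 public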

I'm going to pause here. If you run through some of these diagnostics and spot problems or have questions, please let me know and I'll try to update this appropriately.

Upvotes: 2
