Sep 07, 2015
 


This is the third part of configuring Neutron (Networking) on Ubuntu 14.04. You can go through the previous articles, Configure Neutron #1 and Configure Neutron #2, in which we installed and configured the Networking components on the controller node and the network node.

Here, we will configure the compute node to use Neutron.

Prerequisites:

Configure kernel parameters on the compute node by editing the /etc/sysctl.conf file.

# nano /etc/sysctl.conf

Add the following parameters to the file.

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Apply the changes.

# sysctl -p
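
You can verify that the new values are active before moving on. If sysctl reports the net.bridge.* keys as unknown, the bridge kernel module is probably not loaded yet; running modprobe bridge and then sysctl -p again usually resolves that. A quick check:

# sysctl net.ipv4.conf.all.rp_filter net.bridge.bridge-nf-call-iptables
net.ipv4.conf.all.rp_filter = 0
net.bridge.bridge-nf-call-iptables = 1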

Install and configure Networking components:

Install the following packages on every compute node in your OpenStack environment.

# apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent
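
To confirm the packages and their dependencies installed cleanly, you can list them with dpkg (package versions will vary with your release):

# dpkg -l | grep neutron-plugin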

Edit the /etc/neutron/neutron.conf file.

# nano /etc/neutron/neutron.conf

Modify the settings below, making sure to place each entry in the proper section. In the [database] section, comment out any connection options, as the compute node does not directly access the database.

[DEFAULT]
...
rpc_backend = rabbit
verbose = True
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
auth_strategy = keystone

[database]
...
#connection = sqlite:////var/lib/neutron/neutron.sqlite

## Comment out the above line.

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = password

## Replace "password" with the password you chose for the openstack account in RabbitMQ

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = password

## Replace "password" with the password you chose for neutron user in the identity service

Configure Modular Layer 2 (ML2) plug-in:

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file.

# nano /etc/neutron/plugins/ml2/ml2_conf.ini

Modify the sections below.

[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
...
tunnel_id_ranges = 1:1000

[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
local_ip = 192.168.12.23

## IP address of the tunnel network interface on your compute node.

[agent]
tunnel_types = gre

## The [ovs] and [agent] stanzas need to be added at the bottom of the file.
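
If you are unsure which address to use for local_ip, it should be the compute node's address on the instance tunnels network. You can look it up with ip (eth1 here is only an example; substitute the interface that carries your tunnel network):

# ip addr show eth1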

Restart the Open vSwitch service.

# service openvswitch-switch restart
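
You can confirm that Open vSwitch came back up cleanly; ovs-vsctl show prints the current bridge layout, and once the neutron agent starts later it should also list an integration bridge (br-int):

# service openvswitch-switch status
# ovs-vsctl show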

Configure Compute node to use Networking:

By default, Compute uses legacy networking (nova-network). We must reconfigure it to manage networks through Neutron.

Edit the /etc/nova/nova.conf file.

# nano /etc/nova/nova.conf

Modify the settings below, making sure to place each entry in the proper section. If a section does not exist, create it.

[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[neutron]

url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = password

## Replace "password" with the password you chose for neutron user in the identity service

Restart the Compute service and the Open vSwitch agent on the compute node.

# service nova-compute restart
# service neutron-plugin-openvswitch-agent restart
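
If either service fails to start, the logs are the first place to look (these are the default locations on Ubuntu; adjust if you changed them):

# tail -n 20 /var/log/nova/nova-compute.log
# tail -n 20 /var/log/neutron/openvswitch-agent.log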

Verify operation:

Load admin credentials on the controller node.

# source admin-openrc.sh
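
If you followed the earlier parts of this series, admin-openrc.sh simply exports the admin credentials. It looks roughly like this (the values are placeholders; use the ones from your own setup):

export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://controller:35357/v2.0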

List the agents.

# neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id                                   | agent_type         | host    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 23da3f95-b81b-4426-9d7a-d5cbfc5241c0 | Metadata agent     | network | :-)   | True           | neutron-metadata-agent    |
| 4217b0c0-fbd4-47d9-bc22-5187f09d958a | DHCP agent         | network | :-)   | True           | neutron-dhcp-agent        |
| a4eaabf8-8cf0-4d72-817d-d80921b4f915 | Open vSwitch agent | compute | :-)   | True           | neutron-openvswitch-agent |
| b4cf95cd-2eba-4c69-baa6-ae8832384e40 | Open vSwitch agent | network | :-)   | True           | neutron-openvswitch-agent |
| d9e174be-e719-4f05-ad05-bc444eb97df5 | L3 agent           | network | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+

The output should show four agents alive on the network node and one agent alive on the compute node.
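
If the compute node's Open vSwitch agent is missing, or shows xxx instead of :-), you can inspect it directly once it registers, using its id from the listing above:

# neutron agent-show a4eaabf8-8cf0-4d72-817d-d80921b4f915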