Netbox is a fantastic tool for storing network-related data such as IP addressing, device information, interface connectivity, and rack elevations. Its goal isn't necessarily to provide a data store for configuration-specific information, but it is frequently used that way because it houses so much network device data natively. Generating a basic configuration from Netbox data is pretty straightforward -- pull the data out via the REST API and run it through a configuration template. This can even be accomplished without any formal logic by using, say, Ansible to both pull the data and render the template. However, that approach doesn't account for the additional logic required for complex configurations, or for data that isn't stored natively in Netbox.
Here at Vector we use an approach that generates complete network device configurations from Netbox as part of a workflow that also pushes the configurations via Ansible:
In this post we focus on the data required in Netbox to generate those configurations, and on the logic and templates that create them (steps 1-4 illustrated above). Ansible does nothing other than push a configuration after it has been rendered, so its portion of the workflow is very basic. If you are interested in trying a demo of the full workflow, we have created a GitHub repository here that helps you build out a complete demo environment containing Netbox data, a sample configuration generator script, Ansible, and a containerized Arista leaf/spine environment.
Storing custom configuration data in Netbox
Out-of-the-box data fields in Netbox can provide a lot of data that is required to generate a device configuration, but in order to store more complex or unique data some more work is required. Our goal in storing configuration information in Netbox was to keep the data vendor agnostic. This allows the same data to generate different configurations depending on the NOS syntax defined in the template. We use two primary methods for storing custom configuration data in Netbox:
Custom Fields. Custom fields can be defined on many objects inside of Netbox and provide an easy way to define custom configuration data. For example, one custom field we almost always define is the BGP ASN at the device level. This permits the configuration of "router bgp <BGP_ASN>", amongst other things. Custom fields are the logical first choice for storing custom configuration data in Netbox because they are easy to define and query. However, they have a couple of drawbacks. The first and most obvious is that they aren't available on all Netbox objects. For the purpose of generating configuration data, this limitation is most acutely felt on the "interfaces" object, because that is frequently where a lot of configuration-specific data is required. Another drawback is that custom fields appear on all objects of a given type, which can cause a lot of unnecessary clutter and sometimes lead to confusion.
Tags. Tags are not inherently a means of storing custom configuration data, but they can be used to imply configuration data: the logic that queries Netbox can look for a tag and translate it into a set of configuration commands. This is frequently used to overcome the custom field limitation on interfaces detailed above. For example, let's say a core uplink interface on a distribution switch is defined with a tag named "interface-type-core-uplink". That tag could imply a lot about the configuration, including things like being a routed interface, running PIM, enabling LLDP, QoS policy, etc. Instead of storing the actual configuration commands, the template logic simply renders them when it sees an interface carrying the "interface-type-core-uplink" tag. In this example:
interface X/Y
no switchport
ip pim sparse-mode
lldp transmit
lldp receive
service-policy output CORE-UPLINK
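One way to implement this tag-to-configuration translation is a simple lookup table. The following is only a sketch: the tag name and config lines come from the example above, while the mapping table and function names are hypothetical.

```python
# A minimal sketch of tag-implied configuration. The tag name comes from
# the example above; the mapping table and function names are hypothetical.
TAG_CONFIG = {
    "interface-type-core-uplink": [
        "no switchport",
        "ip pim sparse-mode",
        "lldp transmit",
        "lldp receive",
        "service-policy output CORE-UPLINK",
    ],
}

def render_interface(name, tags):
    """Expand any recognized tags on an interface into config lines."""
    lines = ["interface " + name]
    for tag in tags:
        lines.extend(TAG_CONFIG.get(tag, []))
    return "\n".join(lines)

print(render_interface("Ethernet48", ["interface-type-core-uplink"]))
```

Adding a new implied-configuration tag then only requires a new entry in the table, not new rendering logic.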
Tags can also be used to provide mappings between objects inside of Netbox that don't exist natively. For example, if you wanted to map a VRF into specific Device Roles, you could do that by tagging the VRF with the Device Role names.
Generating Configurations: Python and Jinja2
In order to generate custom configurations using Netbox data we employ Python to pull the data out of Netbox, generate the template variables, and render the templates using Jinja2. To better understand the process, we will walk you through four examples that utilize custom data as laid out in the previous section. All code used in these examples can be found here as part of a full demo environment in which you can test out the workflow yourself.
Scenario
For the sake of the examples provided below, we've contrived the following scenario (this isn't necessarily something you would want/need in production; it exists to demonstrate concepts). Here is the baseline topology:
This is a (super) pared down Arista EOS leaf/spine topology that employs BGP (not EVPN) and a "VRF-lite" type of methodology. Between the leafs and the spine there exists Layer 3 routed /31 transit interfaces on dot1q subinterfaces, per VRF. Here is an example of leaf-02's Ethernet1 interfaces (from Netbox):
A device's role in Netbox determines which VRFs it will have. In this case, leaf-01 has the role "leaf-border" so it contains the "default" and "dev" VRFs. Leaf-02 has the role "leaf-access" so it contains the "default", "dev", and "dmz" VRFs (more on this VRF mapping concept in Example #2).
Example #1: Leaf-01 BGP ASN
This example is the easiest and most straightforward because it doesn't use any additional logic. It queries Netbox for the data and then passes it directly to the template.
Netbox Data
Here the Netbox data exists simply as a custom attribute on the device:
Python Snippet
In this case the device object is the response from querying a Device in Netbox with pynetbox. The template variable is passed on directly by the script in the form of a Python dictionary that makes all of the device object's attributes (including BGP ASN) accessible to the templating engine:
template_vars['device'] = vars(device)
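For context, here is a hedged sketch of how that device object might be obtained and converted. The pynetbox calls are shown only as comments (with a hypothetical URL and token), and a SimpleNamespace stands in for the real pynetbox Record so the vars() step can be demonstrated self-contained:

```python
import types

# In the real workflow, `device` would come from pynetbox, e.g.:
#   nb = pynetbox.api("https://netbox.example.com", token="...")  # hypothetical URL/token
#   device = nb.dcim.devices.get(name="leaf-01")
# Here a SimpleNamespace stands in for the pynetbox Record so the
# vars() conversion is shown self-contained.
device = types.SimpleNamespace(
    name="leaf-01",
    custom_fields={"bgp_asn": 65021},
)

template_vars = {}
template_vars["device"] = vars(device)  # attribute names become dict keys
print(template_vars["device"]["custom_fields"]["bgp_asn"])  # → 65021
```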
Template Snippet
router bgp {{ device.custom_fields.bgp_asn }}
Rendered Configuration
router bgp 65021
Example #2: Leaf-02 VRFs
In this example we are using a leaf's role -- 'leaf-access' in this case -- to determine which VRFs get mapped to it. We accomplish this by creating a tag for the role name, and then associating that tag with the VRF object. When we are rendering the template for the device, we determine which VRFs are going to be created on the device based on the role tags associated with the VRF.
Netbox Data
leaf-02 has the role 'leaf-access':
The 'role-leaf-access' tag is associated with all the VRFs:
Python Snippet
The basic logic here is "iterate over the VRFs in Netbox. If there is a tag that matches the device's role, add that VRF to the device." In this case, vrfs is a pynetbox response that contains all of the VRFs (device_vrfs will eventually land in the template variables).
device_vrfs = list()
for vrf in vrfs:
    for tag in vrf.tags:
        if tag.name == 'role-' + role:
            device_vrfs.append(vrf.name)
return device_vrfs
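The same logic can be exercised end-to-end with mocked data. Below, SimpleNamespace objects stand in for pynetbox VRF records; the VRF and tag names match the scenario, while the helper functions are our own:

```python
import types

# Stand-ins for pynetbox VRF records (each has .name and .tags); the
# VRF and tag names match the scenario, the helpers are our own.
def make_vrf(name, tags):
    return types.SimpleNamespace(
        name=name, tags=[types.SimpleNamespace(name=t) for t in tags]
    )

vrfs = [
    make_vrf("default", ["role-leaf-access", "role-leaf-border"]),
    make_vrf("dev", ["role-leaf-access", "role-leaf-border"]),
    make_vrf("dmz", ["role-leaf-access"]),
]

def vrfs_for_role(vrfs, role):
    """Return the names of VRFs tagged with 'role-<role>'."""
    device_vrfs = list()
    for vrf in vrfs:
        for tag in vrf.tags:
            if tag.name == 'role-' + role:
                device_vrfs.append(vrf.name)
    return device_vrfs

print(vrfs_for_role(vrfs, "leaf-border"))  # → ['default', 'dev']
```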
Template Snippet
Here we render the VRFs on the device and enable routing, but this can also be used for VRF-aware services such as multicast and routing protocols (e.g., BGP):
{% for vrf in vrfs %}
{% if vrf != 'default' %}
vrf instance {{ vrf }}
ip routing vrf {{ vrf }}
{% else %}
ip routing
{% endif %}
{% endfor %}
Rendered Configuration
ip routing
vrf instance dev
ip routing vrf dev
vrf instance dmz
ip routing vrf dmz
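To tie the snippet and template together, here is one way to render the template above with Jinja2 against a sample VRF list. The trim_blocks/lstrip_blocks settings are our assumption about how the environment is configured, not necessarily what the demo repo uses:

```python
from jinja2 import Template

# The VRF template from above; trim_blocks/lstrip_blocks are our
# assumption about the Jinja2 environment settings.
TEMPLATE = """\
{% for vrf in vrfs %}
{% if vrf != 'default' %}
vrf instance {{ vrf }}
ip routing vrf {{ vrf }}
{% else %}
ip routing
{% endif %}
{% endfor %}
"""

output = Template(TEMPLATE, trim_blocks=True, lstrip_blocks=True).render(
    vrfs=["default", "dev", "dmz"]
)
print(output)
```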
Example #3: Spine-01 Interface
In this example we use a tag on an interface to generate a specific configuration (it simply enables PIM in this case but as noted above could be used to imply significantly more configuration). The interface in this example, Ethernet1.200, is a fabric transit interface to leaf-01 in the DEV VRF.
Netbox Data
The spine-01 interface below is tagged with the 'interface-type-fabric-transit' tag:
Python Snippet
The code iterates over spine-01's interfaces, and if it sees the 'interface-type-fabric-transit' tag it sets the appropriate template variable (intf_dict will eventually make its way into the template variables):
for tag in intf.tags:
    if tag.name == 'interface-type-fabric-base':
        intf_dict['base_routed_int'] = True
    elif tag.name == 'interface-type-fabric-transit':
        intf_dict['fabric_transit'] = True
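The same tag-matching loop, runnable against a mocked interface record. The tag names come from the example; the mock itself is ours:

```python
import types

# A mocked interface record; the tag name comes from the example,
# the mock itself is ours.
intf = types.SimpleNamespace(
    name="Ethernet1.200",
    tags=[types.SimpleNamespace(name="interface-type-fabric-transit")],
)

intf_dict = {"name": intf.name}
for tag in intf.tags:
    if tag.name == 'interface-type-fabric-base':
        intf_dict['base_routed_int'] = True
    elif tag.name == 'interface-type-fabric-transit':
        intf_dict['fabric_transit'] = True

print(intf_dict)  # → {'name': 'Ethernet1.200', 'fabric_transit': True}
```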
Template Snippet
Here we see the entire interface rendering block, but what we are concerned about is what our 'interface-type-fabric-transit' tag accomplished:
{% for intf in intfs %}
interface {{ intf.name }}
no shutdown
{% if intf.base_routed_int %}
no switchport
mtu 9214
{% endif %}
{% if intf.fabric_transit %}
pim ipv4 sparse-mode
{% endif %}
{% if intf.description %}
description {{ intf.description }}
{% endif %}
{% if intf.dot1q_encap %}
encapsulation dot1q vlan {{ intf.dot1q_encap }}
{% endif %}
{% endfor %}
Rendered Configuration
interface Ethernet1.200
no shutdown
pim ipv4 sparse-mode
encapsulation dot1q vlan 200
(Note that IP addresses aren't shown here; because they are their own objects in Netbox, they are rendered separately later.)
Example #4: Leaf-01 BGP Networks
In this example we will determine which networks leaf-01 should originate into BGP. This is accomplished by looking for leaf-01's IP addresses that contain the 'bgp-originate' tag.
Netbox Data
If an IP address is to be originated into BGP, it is tagged with the 'bgp-originate' tag. In this case it's simply leaf-01's loopbacks and transits, but it could certainly include SVI IP addresses, etc.
Python Snippet
While iterating through a device's IP addresses, the code finds the ones tagged with 'bgp-originate', determines their base network, and adds them to a data structure for the template variables (bgp_networks will land in the template variables):
for ip in device_ips:
    if ip.vrf.name not in bgp_networks:
        bgp_networks[ip.vrf.name] = list()
    bgp_networks[ip.vrf.name].extend(
        [get_ip_network(ip.address) for tag in ip.tags if tag.name == 'bgp-originate']
    )
return bgp_networks
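The get_ip_network() helper isn't shown above. A plausible implementation using the standard-library ipaddress module (an assumption on our part, not necessarily the repo's version) would be:

```python
import ipaddress

def get_ip_network(address):
    """Return the base network of an interface address (assumed helper),
    e.g. '10.128.14.1/31' -> '10.128.14.0/31'."""
    return str(ipaddress.ip_interface(address).network)

print(get_ip_network("10.128.15.2/32"))  # → 10.128.15.2/32
```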
Template Snippet
router bgp {{ device.custom_fields.bgp_asn }}
maximum-paths 6 ecmp 6
{% for vrf in vrfs %}
neighbor SPINES-VRF-{{ vrf|upper }} remote-as 65000
neighbor SPINES-VRF-{{ vrf|upper }} peer group
neighbor SPINES-VRF-{{ vrf|upper }} send-community
neighbor SPINES-VRF-{{ vrf|upper }} maximum-routes 12000
neighbor SPINES-VRF-{{ vrf|upper }} bfd
neighbor SPINES-VRF-{{ vrf|upper }} route-map SPINE-OUT-VRF-{{ vrf|upper }} out
neighbor SPINES-VRF-{{ vrf|upper }} route-map SPINE-IN-VRF-{{ vrf|upper }} in
vrf {{ vrf }}
router-id {{ router_ids[vrf] }}
{% for neighbor in spine_neighbors[vrf] %}
neighbor {{ neighbor }} peer group SPINES-VRF-{{ vrf|upper }}
{% endfor %}
{% for net in bgp_networks[vrf] %}
network {{ net }}
{% endfor %}
{% endfor %}
Rendered Configuration
router bgp 65021
maximum-paths 6 ecmp 6
neighbor SPINES-VRF-DEFAULT remote-as 65000
neighbor SPINES-VRF-DEFAULT peer group
neighbor SPINES-VRF-DEFAULT send-community
neighbor SPINES-VRF-DEFAULT maximum-routes 12000
neighbor SPINES-VRF-DEFAULT bfd
neighbor SPINES-VRF-DEFAULT route-map SPINE-OUT-VRF-DEFAULT out
neighbor SPINES-VRF-DEFAULT route-map SPINE-IN-VRF-DEFAULT in
vrf default
router-id 10.128.15.2
neighbor 10.128.14.0 peer group SPINES-VRF-DEFAULT
network 10.128.14.0/31
network 10.128.15.2/32
neighbor SPINES-VRF-DEV remote-as 65000
neighbor SPINES-VRF-DEV peer group
neighbor SPINES-VRF-DEV send-community
neighbor SPINES-VRF-DEV maximum-routes 12000
neighbor SPINES-VRF-DEV bfd
neighbor SPINES-VRF-DEV route-map SPINE-OUT-VRF-DEV out
neighbor SPINES-VRF-DEV route-map SPINE-IN-VRF-DEV in
vrf dev
router-id 10.128.31.2
neighbor 10.128.30.0 peer group SPINES-VRF-DEV
network 10.128.30.0/31
network 10.128.31.2/32
Final Thoughts
Hopefully we've given you some ideas on how to generate network device configurations using Netbox as the data source. There are many possibilities here, and if you are interested in the code and a working example please check out the demo GitHub repo. If you have any questions or thoughts, feel free to leave a comment or email us at info@vectornetworksllc.com.