IP Addresses for Network Devices: Best Practices and Recommendations

Assigning IP Addresses to Network Devices

Assigning Internet Protocol (IP) addresses is an essential task when configuring devices on computer networks and internetworks. Careful planning and strategic assignment of IP addresses provides networks with critical benefits such as optimized traffic flow, simplified management, and room for future expansion.

Benefits of Proper IP Addressing

Implementing an intentional and well-organized IP addressing scheme for routers, switches, servers, printers, computer workstations, mobile devices, and other networked equipment delivers major advantages for network manageability and performance.

  • Optimizes traffic flow patterns by logically grouping devices into subnetworks
  • Simplifies administration tasks such as managing access control lists and security policies
  • Reduces subnetting complexity by leaving room to flexibly expand the number of hosts
  • Enables location-based identification and management of devices
  • Facilitates network segmentation for security zones, departments, etc.

IP Addressing Schemes

Public vs. Private IP Addresses

Public IP addresses uniquely identify networked devices across the Internet, while private IP addresses identify devices only within internal networks behind Network Address Translation (NAT). Most home networks use private addressing internally, with the router translating those addresses to a public address whenever traffic goes out to the Internet.


Static vs Dynamic IP Addresses

Network hosts may use static IP addressing, in which IP address assignments do not change, or dynamic IP addressing, such as the Dynamic Host Configuration Protocol (DHCP), in which devices are automatically assigned IP addresses from a pool and assignments can change over time.

Best Practices

Plan Your IP Addressing Scheme

Prior to assigning IP addresses, an IP scheme should be carefully planned and documented. Important planning factors include:

  • Number and locations of subnets needed
  • Number of required host IP addresses per subnet
  • Which private address ranges to utilize
  • Grouping of devices by usage or department

Leave Room for Growth

Carefully determine current and future IP addressing needs and assign address ranges that allow room for growth. Strategies include:

  • Assigning address blocks in sizes that are powers of two (4, 8, 16, and so on) to simplify binary math for future subnetting
  • Selecting private address ranges with ample room for expansion such as 172.16.0.0/12
  • Implementing subnet sizes larger than the number of hosts currently needed (a sizing sketch follows this list)
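
As a rough illustration of sizing with headroom, the sketch below uses Python's standard ipaddress module. The prefix_for_hosts helper and the doubling growth factor are assumptions made for this example, not a prescribed rule.

```python
import math
import ipaddress

def prefix_for_hosts(current_hosts: int, growth_factor: float = 2.0) -> int:
    """Smallest IPv4 prefix length whose subnet holds the current host
    count times a growth allowance, plus network and broadcast addresses."""
    needed = math.ceil(current_hosts * growth_factor) + 2
    host_bits = max(2, math.ceil(math.log2(needed)))
    return 32 - host_bits

# A department with 100 hosts today, planned to double:
prefix = prefix_for_hosts(100)   # -> 24
subnet = next(ipaddress.ip_network("172.16.0.0/12").subnets(new_prefix=prefix))
print(prefix, subnet, subnet.num_addresses - 2)   # 24 172.16.0.0/24 254
```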

Use Subnetting to Organize your Network

Leverage subnetting and Classless Inter-Domain Routing (CIDR) notation to segment networks in a hierarchical fashion for better traffic management and easier administration.
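
The same idea can be sketched with Python's ipaddress module. The 10.20.0.0/16 block and the building/floor tiers below are illustrative assumptions, not values from this article.

```python
import ipaddress

# Carve a /16 campus block into per-building /20s, then one building's /20
# into per-floor /24s -- a two-level hierarchy that summarizes cleanly.
campus = ipaddress.ip_network("10.20.0.0/16")
buildings = list(campus.subnets(new_prefix=20))      # 16 blocks, 4094 hosts each
floors = list(buildings[0].subnets(new_prefix=24))   # 16 blocks, 254 hosts each

print(buildings[0], "->", floors[0], "...", floors[-1])
# 10.20.0.0/20 -> 10.20.0.0/24 ... 10.20.15.0/24
```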

Assign Static IPs to Servers and Printers

Assign static IP addresses to servers, printers, and other infrastructure devices for reliability and consistency. Use memorable numbering patterns related to device function or location.

Use Dynamic IPs for Client Devices

Use DHCP to assign user workstations, laptops, and mobile devices addresses dynamically from a predefined numbering pool. This eases administration if devices are added, moved, reassigned, or removed.

Enable DHCP

Dynamic Host Configuration Protocol (DHCP) enables the automatic distribution of IP settings such as IP addresses, DNS servers, and default gateways. This reduces manual administration.

Document your IP Addressing Scheme

Fully document and regularly update details of the IP plan including subnet masks, address ranges, static assignments, and DHCP pools. This aids management and troubleshooting.

Common Mistakes to Avoid

Not Future-Proofing Your IP Scheme

Failure to properly anticipate growth frequently results in address shortages, subnetting complexities, and redesigns. Assign address blocks appropriately sized for foreseeable expansion.

Using Random, Unorganized IP Address Assignments

Lack of planning leads to chaotic, hard-to-manage IP schemes prone to errors and hampering troubleshooting. Organize logically from the beginning.

Forgetting to Document your Scheme

Undocumented IP assignments complicate management, maintenance, and integration with other systems. Documentation is critical.

Example IP Addressing Schemes

Class C Network with Two Subnets

A class C private network using subnetting to create two segments, accommodating up to 62 hosts per subnet.
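
A minimal sketch of such a scheme, assuming a hypothetical 192.168.10.0/24 network and Python's ipaddress module:

```python
import ipaddress

# Split the /24 into /26 segments; each /26 has 64 addresses,
# of which 62 are usable hosts (network and broadcast excluded).
lan = ipaddress.ip_network("192.168.10.0/24")
segment_a, segment_b = list(lan.subnets(new_prefix=26))[:2]

for segment in (segment_a, segment_b):
    hosts = list(segment.hosts())
    print(segment, len(hosts), hosts[0], "-", hosts[-1])
# 192.168.10.0/26  62 192.168.10.1  - 192.168.10.62
# 192.168.10.64/26 62 192.168.10.65 - 192.168.10.126
```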

Small Business Network with VLANs

A small business using VLANs and a class B private network to logically segment departments, servers, WiFi, and IP cameras across multiple switches totaling up to 65534 host addresses.

Tools and Utilities for IP Address Management

Specialized network management software and IP address management (IPAM) tools can automate and simplify administering more advanced, dynamic networks with many subnets. Example utilities include SolarWinds IP Address Manager, Infoblox BloxOne, and phpIPAM.

Summary and Conclusion

Strategically assigning IP addresses to networked devices provides major benefits for enterprises and small networks alike when properly planned and documented. Leveraging logical organization, segmentation, and other best practices allows room for expansion while delivering simpler, more resilient IP infrastructure. Avoiding common errors like failing to plan for growth or document your scheme leads to more maintainable networks. As networks scale in complexity, purpose-built IPAM solutions can further aid administration.


Understanding IP Address Assignment: A Complete Guide


Introduction

In today's interconnected world, where almost every aspect of our lives relies on the internet, understanding IP address assignment is crucial for ensuring online security and efficient network management. An IP address serves as a unique identifier for devices connected to a network, allowing them to communicate with each other and access the vast resources available on the internet. Whether you're a technical professional, a network administrator, or simply an internet user, having a solid grasp of how IP addresses are assigned within the same network can greatly enhance your ability to troubleshoot connectivity issues and protect your data.

The Basics of IP Addresses

Before delving into the intricacies of IP address assignment in the same network, it's important to have a basic understanding of what an IP address is. In simple terms, an IP address is a numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. In its IPv4 form it consists of four numbers separated by periods (e.g., 192.168.0.1); addresses can be in either IPv4 or IPv6 format.

IP Address Allocation Methods

There are several methods used for allocating IP addresses within a network. One commonly used method is Dynamic Host Configuration Protocol (DHCP). DHCP allows devices to obtain an IP address automatically from a central server, simplifying the process of managing large networks. Another method is static IP address assignment, where an administrator manually assigns specific addresses to devices within the network. This method provides more control but requires careful planning and documentation.

Considerations for Efficient IP Address Allocation

Efficient allocation of IP addresses is essential for optimizing network performance and avoiding conflicts. When assigning IP addresses, administrators need to consider factors such as subnetting, addressing schemes, and future scalability requirements. By carefully planning the allocation process and implementing best practices such as using private IP ranges and avoiding overlapping subnets, administrators can ensure smooth operation of their networks without running out of available addresses.

IP Address Assignment in the Same Network

When two routers are connected within the same network, they need to obtain unique IP addresses to communicate effectively. This can be achieved through various methods, such as using different subnets or configuring one router as a DHCP server and the other as a client. Understanding how IP address assignment works in this scenario is crucial for maintaining proper network functionality and avoiding conflicts.

Basics of IP Addresses

IP addresses are a fundamental aspect of computer networking that allows devices to communicate with each other over the internet. An IP address, short for Internet Protocol address, is a unique numerical label assigned to each device connected to a network. It serves as an identifier for both the source and destination of data packets transmitted across the network.

The structure of an IP address consists of four sets of numbers separated by periods (e.g., 192.168.0.1). Each set can range from 0 to 255, resulting in a total of approximately 4.3 billion possible unique combinations for IPv4 addresses. However, with the increasing number of devices connected to the internet, IPv6 addresses were introduced to provide a significantly larger pool of available addresses.

IPv4 addresses are still predominantly used today and are divided into different classes based on their range and purpose. Class A addresses have the first octet reserved for network identification, allowing for a large number of hosts within each network. Class B addresses reserve the first two octets for network identification and provide a balance between network size and number of hosts per network. Class C addresses allocate the first three octets for network identification and are commonly used in small networks.

With the depletion of available IPv4 addresses, IPv6 was developed to overcome this limitation by utilizing a 128-bit addressing scheme, providing an enormous pool of potential IP addresses - approximately 3.4 x 10^38 unique combinations.

IPv6 addresses are represented in hexadecimal format separated by colons (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334). The longer length allows for more efficient routing and eliminates the need for Network Address Translation (NAT) due to its vast address space.
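
The compressed and expanded forms can be checked with Python's ipaddress module, using the sample address quoted above:

```python
import ipaddress

addr = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(str(addr))       # compressed: 2001:db8:85a3::8a2e:370:7334
print(addr.exploded)   # expanded:   2001:0db8:85a3:0000:0000:8a2e:0370:7334
print(addr.version)    # 6
```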

Understanding these basics is essential when it comes to assigning IP addresses in a network. Network administrators must consider various factors such as the number of devices, network topology, and security requirements when deciding on the IP address allocation method.

In the next section, we will explore different methods of IP address assignment, including Dynamic Host Configuration Protocol (DHCP) and static IP address assignment. These methods play a crucial role in efficiently managing IP addresses within a network and ensuring seamless communication between devices.

Methods of IP Address Assignment

IP address assignment is a crucial aspect of network management and plays a vital role in ensuring seamless connectivity and efficient data transfer. There are primarily two methods of assigning IP addresses in a network: dynamic IP address assignment using the Dynamic Host Configuration Protocol (DHCP) and static IP address assignment.

Dynamic IP Address Assignment using DHCP

Dynamic IP address assignment is the most commonly used method in modern networks. It involves the use of DHCP servers, which dynamically allocate IP addresses to devices on the network. When a device connects to the network, it sends a DHCP request to the DHCP server, which responds by assigning an available IP address from its pool.

One of the key benefits of dynamic IP address assignment is its simplicity and scalability. With dynamic allocation, network administrators don't have to manually configure each device's IP address. Instead, they can rely on the DHCP server to handle this task automatically. This significantly reduces administrative overhead and makes it easier to manage large networks with numerous devices.

Another advantage of dynamic allocation is that it allows for efficient utilization of available IP addresses. Since addresses are assigned on-demand, there is no wastage of unused addresses. This is particularly beneficial in scenarios where devices frequently connect and disconnect from the network, such as in public Wi-Fi hotspots or corporate environments with a high turnover rate.

However, dynamic allocation does have some drawbacks as well. One potential issue is that devices may receive different IP addresses each time they connect to the network. While this might not be an issue for most users, it can cause problems for certain applications or services that rely on consistent addressing.

Additionally, dynamic allocation introduces a dependency on the DHCP server. If the server goes down or becomes unreachable, devices will not be able to obtain an IP address and will be unable to connect to the network. To mitigate this risk, redundant DHCP servers can be deployed for high availability.
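
To make the on-demand idea concrete, here is a toy in-memory allocator. It models only the lease and release bookkeeping, not the actual DHCP message exchange, and the pool range and MAC addresses are illustrative.

```python
import ipaddress

class AddressPool:
    """Toy model of DHCP-style allocation: hand out the lowest free
    address on request and reclaim it when the lease is released."""

    def __init__(self, cidr: str):
        self.free = list(ipaddress.ip_network(cidr).hosts())
        self.leases = {}                 # MAC address -> IPv4Address

    def request(self, mac: str):
        if mac in self.leases:           # renew an existing lease
            return self.leases[mac]
        if not self.free:
            raise RuntimeError("address pool exhausted")
        self.leases[mac] = self.free.pop(0)
        return self.leases[mac]

    def release(self, mac: str):
        addr = self.leases.pop(mac, None)
        if addr is not None:
            self.free.append(addr)       # address returns to the pool

pool = AddressPool("192.168.1.0/28")
print(pool.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.1
pool.release("aa:bb:cc:dd:ee:01")
```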

Static IP Address Assignment

Static IP address assignment involves manually configuring each device's IP address within the network. Unlike dynamic allocation, where addresses are assigned on-demand, static assignment requires administrators to assign a specific IP address to each device.

One of the main advantages of static IP address assignment is stability. Since devices have fixed addresses, there is no risk of them receiving different addresses each time they connect to the network. This can be beneficial for applications or services that require consistent addressing, such as servers hosting websites or databases.

Static assignment also provides greater control over network resources. Administrators can allocate specific IP addresses to devices based on their requirements or security considerations. For example, critical servers or network infrastructure devices can be assigned static addresses to ensure their availability and ease of management.

However, static IP address assignment has its limitations as well. It can be time-consuming and error-prone, especially in large networks with numerous devices. Any changes to the network topology or addition/removal of devices may require manual reconfiguration of IP addresses, which can be a tedious task.

Furthermore, static allocation can lead to inefficient utilization of available IP addresses. Each device is assigned a fixed address regardless of whether it is actively using the network or not. This can result in wastage of unused addresses and may pose challenges in scenarios where addressing space is limited.

In order to efficiently allocate IP addresses within a network, there are several important considerations that need to be taken into account. By carefully planning and managing the allocation process, network administrators can optimize their IP address usage and ensure smooth operation of their network.

One of the key factors to consider when assigning IP addresses is the size of the network. The number of devices that will be connected to the network determines the range of IP addresses that will be required. It is essential to accurately estimate the number of devices that will need an IP address in order to avoid running out of available addresses or wasting them unnecessarily.

Another consideration is the type of devices that will be connected to the network. Different devices have different requirements in terms of IP address assignment. For example, servers and other critical infrastructure typically require static IP addresses for stability and ease of access. On the other hand, client devices such as laptops and smartphones can often use dynamic IP addresses assigned by a DHCP server.

The physical layout of the network is also an important factor to consider. In larger networks with multiple subnets or VLANs, it may be necessary to segment IP address ranges accordingly. This allows for better organization and management of IP addresses, making it easier to troubleshoot issues and implement security measures.

Security is another crucial consideration when allocating IP addresses. Network administrators should implement measures such as firewalls and intrusion detection systems to protect against unauthorized access or malicious activities. Additionally, assigning unique IP addresses to each device enables better tracking and monitoring, facilitating quick identification and response in case of any security incidents.

Efficient utilization of IP address ranges can also be achieved through proper documentation and record-keeping. Maintaining an up-to-date inventory of all assigned IP addresses helps prevent conflicts or duplicate assignments. It also aids in identifying unused or underutilized portions of the address space, allowing for more efficient allocation in the future.

Furthermore, considering future growth and scalability is essential when allocating IP addresses. Network administrators should plan for potential expansion and allocate IP address ranges accordingly. This foresight ensures that there will be sufficient addresses available to accommodate new devices or additional network segments without disrupting the existing infrastructure.

In any network, the assignment of IP addresses is a crucial aspect that allows devices to communicate with each other effectively. When it comes to IP address assignment in the same network, there are specific considerations and methods to ensure efficient allocation. In this section, we will delve into how two routers in the same network obtain IP addresses and discuss subnetting and IP address range distribution.

To understand how two routers in the same network obtain IP addresses, it's essential to grasp the concept of subnetting. Subnetting involves dividing a larger network into smaller subnetworks or subnets. Each subnet has its own unique range of IP addresses that can be assigned to devices within that particular subnet. This division helps manage and organize large networks efficiently.

When it comes to assigning IP addresses within a subnet, there are various methods available. One common method is manual or static IP address assignment. In this approach, network administrators manually assign a specific IP address to each device within the network. Static IP addresses are typically used for devices that require consistent connectivity and need to be easily identifiable on the network.

Another widely used method for IP address assignment is Dynamic Host Configuration Protocol (DHCP). DHCP is a networking protocol that enables automatic allocation of IP addresses within a network. With DHCP, a server is responsible for assigning IP addresses dynamically as devices connect to the network. This dynamic allocation ensures efficient utilization of available IP addresses by temporarily assigning them to connected devices when needed.

When considering efficient allocation of IP addresses in the same network, several factors come into play. One important consideration is proper planning and design of subnets based on anticipated device count and future growth projections. By carefully analyzing these factors, administrators can allocate appropriate ranges of IP addresses for each subnet, minimizing wastage and ensuring scalability.

Additionally, implementing proper security measures is crucial when assigning IP addresses in the same network. Network administrators should consider implementing firewalls, access control lists (ACLs), and other security mechanisms to protect against unauthorized access and potential IP address conflicts.

Furthermore, monitoring and managing IP address usage is essential for efficient allocation. Regular audits can help identify any unused or underutilized IP addresses that can be reclaimed and allocated to devices as needed. This proactive approach ensures that IP addresses are utilized optimally within the network.

The proper assignment of IP addresses is crucial for maintaining network security and efficiency. Throughout this guide, we have covered the basics of IP addresses, explored different methods of IP address assignment, and discussed considerations for efficient allocation.

In conclusion, understanding IP address assignment in the same network is essential for network administrators and technical professionals. By following proper allocation methods such as DHCP or static IP assignment, organizations can ensure that each device on their network has a unique identifier. This not only enables effective communication and data transfer but also enhances network security by preventing unauthorized access.

Moreover, considering factors like subnetting, scalability, and future growth can help optimize IP address allocation within a network. Network administrators should carefully plan and allocate IP addresses to avoid conflicts or wastage of resources.

Overall, a well-managed IP address assignment process is vital for the smooth functioning of any network. It allows devices to connect seamlessly while ensuring security measures are in place. By adhering to best practices and staying updated with advancements in networking technology, organizations can effectively manage their IP address assignments.

In conclusion, this guide has provided a comprehensive overview of IP address assignment in the same network. We hope it has equipped you with the knowledge needed to make informed decisions regarding your network's IP address allocation. Remember that proper IP address assignment is not only important for connectivity but also plays a significant role in maintaining online security and optimizing network performance.


Best Practices for Efficient IP Address Management

As the digital landscape continues to expand exponentially, we’re seeing an unprecedented explosion in IP addresses. This surge is fuelled by the ever-increasing number of connected devices, applications, and the growing demand for mobility. Consequently, we find ourselves navigating an increasingly complex network environment. The management of these networks, in particular IP Address Management (IPAM), now faces a chief challenge and priority: simplification.

What is IP Address Management?

IP Address Management, often abbreviated as IPAM, is a method that allows network administrators to manage and track IP addresses within their network. When properly implemented, it can reduce the complexity and time spent on administrative tasks.

IP Address Allocation

IP Address Allocation refers to the assignment of IP addresses to devices within a network. The process should be well-planned, factoring in the current and future needs of the network. An effective IP plan ensures efficient utilization of address space, helping to optimize IP management.

IP Address Tracking

Keeping track of each allocated IP address is vital for maintaining network integrity. IP Address Tracking allows administrators to monitor the status of each IP address within their network, making it easier to identify any issues or conflicts. It contributes to the optimization of IP addresses allocated, preventing wastage of resources.

IP Address Management Solutions

To simplify this process, businesses are turning to robust IPAM solutions. These systems integrate with DNS and DHCP servers to manage, monitor, and record IP address configurations automatically. Adopting an IPAM solution is one of the IPAM best practices that aids in simplifying and streamlining, improving overall network management.

Best Practices

Centralizing IP Address Management

Centralizing your IP address management is a fundamental best practice. By consolidating all your IP address data in one location, you can easily keep tabs on your network’s health and promptly identify any potential conflicts or security risks.

Planning for Growth

Keeping an eye on the future is crucial. As your network expands, you’ll need to accommodate more devices and applications. Therefore, your IP address management should be scalable and flexible, ensuring it can handle future growth with ease.

Documenting IP Address Usage

Maintaining up-to-date documentation of your IP addresses is key to effective management. This aids in tracking which addresses are in use and which are available, facilitating efficient allocation and preventing conflicts.

Using DHCP for Dynamic IP Address Assignment

The Dynamic Host Configuration Protocol (DHCP) is a valuable tool for managing IP addresses. It automates the assignment of IP addresses, freeing up administrators to focus on other tasks and ensuring optimal use of address spaces.

Employing Subnetting

Subnetting is the practice of dividing a network into smaller, more manageable parts. This technique enhances network performance and security, and it’s an essential component of effective IP address management.

Implement IP Address Security

Securing IP addresses is paramount to protect your network from attacks. This includes practices like setting up firewalls, restricting access to certain IP addresses, and regularly updating security patches.

Regularly Audit IP Address Usage

Regular audits of IP address usage help ensure the network is running optimally. They provide crucial insights into your IP address allocation, allowing you to identify patterns, track usage, and spot potential issues before they escalate.

IPAM Automation for Enhanced Efficiency


IPAM can save organizations time and money by automating the potentially tedious process of assigning and tracking IP addresses throughout their network. This means that organizations can spend their valuable resources on other important initiatives, allowing them to optimize the performance of their infrastructure in a timely manner. With the help of IPAM, managing large networks has become much simpler, making it more cost-effective in the long run.

Integrating IPAM with Other Management Services

Many businesses use various network management services such as Virtual Private Networks (VPN), Network Access Control (NAC), or Remote Authentication Dial-In User Service (RADIUS). Integrating your IPAM solution with these services can streamline network management, reducing the potential for errors and conflicts.

Wrapping up

Efficient IP Address Management is critical for maintaining a well-functioning network. By following these best practices and adopting an automated IPAM solution, businesses can optimize their IP addresses, streamline administrative tasks, and improve overall network management. With the ever-increasing demand for connectivity and mobility, now is the time to prioritize efficient IP address management within your organization. 


IP Addressing Best Practices

Table of Contents

  • Addressing Design Considerations
  • Devices That Require Addressing
  • Static versus Dynamic Addressing
  • Subnet Allocation
  • Common Addressing Standards
  • Guidelines for VLSM

Addressing Design Considerations

  • Number of locations
  • Number of devices per location
  • Number of devices to be supported in each communication closet
  • Site requirements: data networks, wireless LANs, IP Telephony (IPT) networks, CCTV networks (security cameras), video conference systems, access control systems, network management, server farms, point-to-point links, and router/switch loopback addresses.
  • Subnet size

Reference documents:

  • Guidelines for Management of IP Address Space
  • Internet Registry IP Allocation Guidelines

Devices That Require Addressing

The end devices requiring an IP address include these:

  • Network Hosts
  • Peripherals
  • Administrator computers
  • Other end devices such as printers, IP phones, and IP cameras

Network devices requiring an IP address include these:

  • Router LAN interfaces
  • Router WAN (serial) interfaces

Network devices requiring an IP address for management include these:

  • Wireless access points

Static versus Dynamic Addressing

IP addresses can either be assigned statically or dynamically:

  • Static IP addresses are assigned in the network infrastructure, in data centre modules, and in modules of the enterprise edge and WAN.
  • Systems that must be managed and monitored should be reachable via a stable, statically assigned IP address.
  • Dynamic addressing reduces the configuration tasks required to connect end systems to the network. Cisco IP phones and mobile devices are assigned an IP address dynamically, and wireless access points learn their own IP address and the IP address of the wireless controller via DHCP.
  • When client workstation settings are assigned dynamically, the host automatically learns which network segment it is assigned to and how to reach its default gateway as the network is discovered.

Subnet Allocation

For the allocation of IPv4 subnets, stick to the following best practices:

  • Private addresses are used for internal networks.
  • Allocate /24 subnets for user devices (laptops, PCs).
  • Allocate a parallel /24 subnet for VoIP devices (IP phones).
  • Allocate subnets for access control systems and video conference systems.
  • Reserve subnets for future use.
  • Use /30 subnets for point-to-point links (a sketch after this list shows carving /30 and /32 allocations).
  • Use /32 for loopback addresses.
  • Allocate subnets for remote access and network management.
  • Use public addresses for the public facing network.
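
As a sketch of the /30 and /32 guidance above, the example below carves point-to-point and loopback allocations out of two reserved blocks. The 10.255.0.0/24 and 10.255.255.0/24 ranges are arbitrary choices for illustration.

```python
import ipaddress

p2p_block = ipaddress.ip_network("10.255.0.0/24")         # reserved for links
loopback_block = ipaddress.ip_network("10.255.255.0/24")  # reserved for loopbacks

p2p_links = list(p2p_block.subnets(new_prefix=30))        # 64 two-host links
loopbacks = list(loopback_block.subnets(new_prefix=32))   # 256 host routes

print(p2p_links[0], list(p2p_links[0].hosts()))  # 10.255.0.0/30 [10.255.0.1, 10.255.0.2]
print(loopbacks[0])                              # 10.255.255.0/32
```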

Common Addressing Standards

Defining, using, and documenting addressing standards for your network will make it easier for operations to troubleshoot any network issues. Examples of standards are as follows:

  • Use .1 or .254 (in the last octet) as the default gateway of the subnet.
  • Match the VLAN ID number with the third octet of an IP address (e.g. the IP subnet 10.10.150.0/25 is assigned to VLAN 150); a sketch of this convention follows the list.
  • Reserve .1 to .15 of a subnet for static assignments and .16 to .239 for the DHCP address pool.
  • Employ router and switch naming using international two-digit country codes, city airport codes, device codes, and numeric codes (e.g. ausydrtr61 for a router located in Sydney, Australia).
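
A small sketch of the VLAN-ID-to-third-octet convention, assuming a 10.10.0.0/16 base block and /24 subnets (the example above uses a /25, so adjust the prefix to taste):

```python
import ipaddress

def subnet_for_vlan(vlan_id: int, base: str = "10.10.0.0/16") -> ipaddress.IPv4Network:
    """Return the /24 whose third octet matches the VLAN ID."""
    subnets = list(ipaddress.ip_network(base).subnets(new_prefix=24))
    return subnets[vlan_id]

vlan150 = subnet_for_vlan(150)
gateway = list(vlan150.hosts())[0]   # .1 as the default gateway
print(vlan150, gateway)              # 10.10.150.0/24 10.10.150.1
```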

Guidelines for VLSM

When designing the use of VLSM within your network, consider the following guidelines:

  • Optimal summarisation occurs with contiguous blocks of addresses.
  • If small subnets are grouped, routing information can be summarised.
  • Group VLSM subnets so that routing information can be consolidated.
  • Allocate VLSM by taking one regular subnet and subnetting it further.
  • Avoid using two different classful subnet masks inside a given network address.


10 IP Addressing Scheme Best Practices

IP addresses are a necessary part of any network, but there are best practices to follow to make sure they are used in the most effective way.


IP Addressing is a fundamental networking concept. It’s the process of assigning numerical labels to devices connected to a network. The purpose of IP Addressing is to uniquely identify devices on a network so that they can communicate with each other.

There are a few different IP Addressing schemes in use today, but the most common is the IPv4 scheme. This scheme uses a 32-bit address space, which allows for a total of 4,294,967,296 unique addresses.

While this may seem like a lot, the IPv4 address space is actually running out. This is due to the fact that the world population is growing and more and more devices are being connected to the internet. As a result, a new IP Addressing scheme, known as IPv6, is being slowly adopted.

IPv6 uses a 128-bit address space, which allows for a total of 340,282,366,920,938,463,463,374,607,431,768,211,456 unique addresses. This is a vast improvement over the IPv4 address space and will be able to accommodate the needs of the internet for many years to come.

When designing an IP Addressing scheme, there are a few best practices that should be followed in order to ensure that the scheme is effective and efficient.

1. Use a private IP address range

If you use a public IP address range, then your devices will be reachable from the Internet. This is not necessarily a bad thing, but it does open up your devices to potential attacks.

If you use a private IP address range, then your devices will only be reachable from within your own network. This is much more secure, as it means that anyone on the Internet will not be able to directly access your devices.

There are a few different private IP address ranges that you can choose from, but the most common one is the 192.168.0.0/16 range. This is the range that most home routers use, and it’s a good choice for small networks.

For larger networks, you may want to use a different private IP address range. The 10.0.0.0/8 range is often used for this purpose.
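
The three RFC 1918 private ranges can also be checked programmatically. The sketch below uses Python's ipaddress module with a hypothetical sample address.

```python
import ipaddress

# The RFC 1918 private ranges referred to above.
private_ranges = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

addr = ipaddress.ip_address("192.168.1.50")
print(addr.is_private)                              # True
print(any(addr in net for net in private_ranges))   # True
print(ipaddress.ip_address("8.8.8.8").is_private)   # False
```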

2. Assign static IP addresses to servers and network devices

If you don’t assign static IP addresses to your devices, then every time the device reboots it will be assigned a new IP address by the DHCP server. This can cause all sorts of problems, such as breaking firewall rules, disrupting network connectivity, and so on.

It’s much easier to manage a network when all of the devices have static IP addresses. You can easily add these devices to your inventory management system, and you’ll always know what IP address they are using.

There are some exceptions to this rule, such as when you’re using mobile devices that connect to the network via WiFi. In these cases, it’s usually best to use DHCP so that the devices will be automatically assigned an IP address when they connect to the network.

3. Use DHCP reservations for other hosts

When you use DHCP reservations, the IP address of a host is permanently assigned to that host, which means that the same IP address will always be assigned to that host as long as it’s on the network. This is useful for hosts that need to be accessible by other devices on the network using their IP address (e.g. servers, printers, etc.).

Using DHCP reservations also has the added benefit of making it easier to manage your IP addresses, since you don’t have to worry about manually assigning IP addresses to hosts or keeping track of which IP addresses are assigned to which hosts.

4. Avoid using the last IP in each subnet

When a device sends a broadcast, it is delivered to every device on the same subnet. The last IP in each subnet is reserved as the broadcast address, so it must not be assigned to a host; doing so causes address conflicts and misdirects broadcast traffic to that device.

It’s much better to end your host assignments at the second-to-last IP in each subnet. That way, you’ll still be able to communicate with all devices on the subnet without colliding with the broadcast address.

5. Don’t use broadcast or multicast addresses

Broadcast addresses are used to send data packets to all devices on a network. However, this can be a security risk because it means that any malicious actor on the network can intercept and read the data packets meant for other devices.

Multicast addresses are similar to broadcast addresses, but they’re used to send data packets to a group of devices rather than all devices on a network. While this is less of a security risk than using broadcast addresses, it can still be problematic because it can lead to network congestion if too many devices are receiving the multicast data packets.

6. Use VLANs to separate different types of traffic

VLANs are virtual LANs that can be used to segment traffic on a network. By separating different types of traffic onto different VLANs, you can improve security and performance while making it easier to manage your network.

For example, you might put all of your user traffic on one VLAN and all of your server traffic on another VLAN. This would make it much harder for an attacker to sniff traffic or launch a man-in-the-middle attack, and it would also make it easier to troubleshoot problems since you wouldn’t have to worry about cross-traffic affecting your results.

Additionally, using VLANs can help improve performance by reducing congestion on your network. For example, if you have a lot of video streaming traffic, you might want to put that traffic on its own VLAN so it doesn’t slow down other types of traffic.

Overall, using VLANs is a great way to improve the security and performance of your network while making it easier to manage.

7. Use NAT when connecting multiple networks

When you have multiple networks that need to communicate with each other, it’s important to use NAT (Network Address Translation) so that each network can have its own unique IP address range. This way, there won’t be any conflicts between the addresses of the different networks.

NAT also allows you to hide the internal IP addresses of your devices from the outside world. This is important for security because it makes it more difficult for hackers to target specific devices on your network.

Finally, NAT sits at a routed boundary between networks, and broadcasts are not forwarded across that boundary. Broadcasts are packets that are sent to all devices on a network, and they can cause problems if too many of them are sent. Keeping each network’s broadcast traffic contained behind that boundary can improve the performance of your network.

8. Use DNS names instead of IP addresses wherever possible

DNS names are much easier for humans to remember than IP addresses. They’re also less likely to change, which means that you won’t have to go through the hassle of updating your configuration files every time there’s a change in the IP address scheme.

What’s more, using DNS names instead of IP addresses can help improve security. That’s because it’s harder for attackers to guess DNS names than IP addresses.

Finally, using DNS names instead of IP addresses can help improve performance. That’s because DNS names are cached by DNS servers, which means that they don’t have to be resolved every time they’re used.

9. Document your IP addressing scheme

If you don’t document your IP addressing scheme, then when something goes wrong (and something always goes wrong), it will be very difficult for someone else to understand what you did and why you did it. This is especially true if the person who designed the scheme is no longer with the company.

Documenting your IP addressing scheme doesn’t have to be complicated. A simple spreadsheet that lists each subnet, the network address, the broadcast address, the netmask, and a description of what each subnet is used for is usually sufficient.

If you want to get really fancy, you can create a Visio diagram that shows how all of the subnets are interconnected. But even a simple spreadsheet will go a long way towards making sure that your IP addressing scheme is understandable and maintainable.
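
Such a spreadsheet can even be generated from the plan itself. The short sketch below writes a CSV with exactly those columns; the subnets and descriptions are placeholders.

```python
import csv
import ipaddress

# Hypothetical subnet plan; the descriptions are placeholders.
plan = {
    "192.168.10.0/26": "Workstations - Floor 1",
    "192.168.10.64/26": "VoIP phones - Floor 1",
    "192.168.10.128/27": "Servers",
    "192.168.10.192/30": "Uplink to firewall",
}

with open("ip_plan.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Subnet", "Network", "Broadcast", "Netmask", "Usable hosts", "Description"])
    for cidr, description in plan.items():
        net = ipaddress.ip_network(cidr)
        writer.writerow([cidr, net.network_address, net.broadcast_address,
                         net.netmask, net.num_addresses - 2, description])
```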

10. Keep it simple!

A simple IP addressing scheme is much easier to understand and manage than a complex one. When you have a large network with hundreds or even thousands of devices, it can be very difficult to keep track of all the different IP addresses and subnets.

It’s also important to keep your IP addressing scheme consistent across all your devices. This will make it much easier to configure and troubleshoot networking issues.

Finally, you should avoid using private IP addresses for public-facing services. Private IP addresses are not routable on the public Internet, so anyone trying to access your website or email server will not be able to reach it.


Network Design – Designing Advanced IP Addressing

October 19, 2021 by Paul Browning

This blog post covers the following network design topics:

  • Summarizable and structured addressing designs
  • IPv6 for Enterprise Campus design considerations

When designing IP addressing at a professional level, several issues must be taken into consideration. This blog post will cover generic IP addressing designs, including subnets and summarizable blocks design recommendations, address planning, and advanced addressing concepts, in addition to IPv6 design considerations, which will be covered in the last section of the post.

Importance of IP Addressing for Network Design

One of the major concerns in the network design phase is ensuring that the IP addressing scheme is properly designed. This aspect should be carefully planned, and an implementation strategy should exist for the structured, hierarchical, and contiguous allocation of IP address blocks. A solid addressing scheme should be hierarchical, structured, and modular.

These features will add value to the continually improving concept of the Enterprise Campus design. This is also important in scaling any of the dynamic routing protocols. A solid IP addressing scheme helps routing protocols function optimally, whether the network runs RIPv2, EIGRP, OSPF, or BGP. The ability to summarize addresses provides several advantages for the network:

  • Shorter Access Control Lists (ACLs)
  • Reduces the overhead on routers (the performance difference is noticeable, especially on older routers)
  • Faster convergence of routing protocols
  • Addresses can be summarized to help isolate trouble domains
  • Overall improvement of network stability

Address summarization is also important when there is a need to distribute addresses from one routing domain into another, as it impacts both the configuration effort and the overhead in the routing processing. In addition, having a solid IP addressing scheme not only makes ACLs easier to implement and more efficient for security, policy, and QoS purposes, but it also facilitates advanced routing policies and techniques (such as zone-based policy firewalls), where modular components and object groupings based on the defined IP addressing schemes can be created.

Solid IP address planning supports several features in an organization:

  • Route summarization
  • A more scalable network
  • A more stable network
  • Faster convergence

Subnet Network Design Recommendations

The importance of IP addressing is reflected in the new requirements that demand greater consideration of IP addressing, as the following examples illustrate:

  • The transition to VoIP telephony and the additional subnet ranges required to support voice services. Data and voice VLANs are usually segregated, and in some scenarios twice as many subnets may be needed when implementing telephony in the network.
  • Layer 3 switching at the edge, replacing Layer 2 switching with multi-layer switches. This requires more subnets at the Enterprise Edge, so the number of smaller subnets will increase. There should be as little re-addressing as necessary, and making efficient use of the address space should be a priority. Sometimes, moving Layer 3 switching to the edge will involve a redesign of the IP addressing hierarchy.
  • The company’s needs are changing, and sometimes servers will be isolated by function or role (also called segmentation). For example, the accounting server, the development subnets, and the call-center subnets can be separated from an addressing standpoint. Identifying the subnets and ACLs based on corporate requirements can also add complexity to the environment.
  • Many organizations use technologies like Network Admission Control (NAC), Cisco 802.1x (IBNS), or Microsoft NAP. These deployments dynamically assign VLANs based on the user login or port-based authentication. In this situation, an ACL can manage connectivity to different servers and network resources based on the source subnet (which is based on the user role). Using NAC over a wired or wireless network adds more complexity to the IP addressing scheme.
  • Many network topologies involve separate VLANs (i.e., data, voice, and wireless). Using 802.1x may also involve a guest VLAN or a restricted VLAN, and authorization policies can be assigned based on VLAN membership from an Authentication, Authorization, and Accounting (AAA) server.
  • Using role-based security techniques might require different sets of VPN clients, such as administrators, customers, vendors, guests, or extranets, so different groups can be implemented for different VPN client pools. This role-based access can be managed through a group password technique for each Cisco VPN client; every group can be assigned a VPN endpoint address from a different pool of addresses. If the pools are subnets of a summarizable block, then routing traffic back to the client can also be accomplished in a simplified fashion.
  • Network designers should also consider that Network Address Translation (NAT) and Port Address Translation (PAT) can be applied on customer edge routers (often on the PIX firewall or on the ASA devices). NAT and PAT should not be used internally on the LAN or within the Enterprise Network to simplify the troubleshooting process. NAT can be used in a data center to support the Out-of-Band (OOB) management of the VLAN (i.e., on devices that cannot route or cannot find a default gateway for the OOB management of the VLAN).

You can read Cisco's network design guide for more detail.

Summarization

After planning the network design for an IPv4 addressing scheme and determining the number and types of necessary addresses, a hierarchical design might be necessary. This design is useful when finding a scalable solution for a large organization and this involves address summarization. Summarization reduces the number of routes in the routing table and involves taking a series of network prefixes and representing them as a single summary address. It also involves reducing the CPU load and the memory utilization on network devices. In addition, this technique reduces processing overhead because routers advertise a single prefix instead of many smaller ones.

A summarizable address is one that contains blocks with sequential numbers in one of the octets. The sequential patterns must fit a binary bit pattern, with X numbers in a row, where X is a power of 2. The first number in this sequence must be a multiple of X. For example, 128 numbers in a row could be summarized with multiples starting at 0 or 128. If there are 64 numbers in a row (2^6), these will be represented in multiples of 64, such as 0, 64, 128, or 192, and 32 numbers in a row can be summarized with the multiples 0, 32, 64, and so on. This process can be easily accomplished using software subnet calculators.

Another planning aspect of summarizable blocks involves medium or large blocks of server farms or data centers. Servers can be grouped based on their functions and on their level of mission criticality, and they can all be in different subnets. In addition, with servers that are attached to different Access Layer switches, it is easier to assign subnets that will provide a perfect pattern for wildcarding in the ACLs. Simple wildcard rules and efficient ACLs are desired, as complex ACLs are very difficult to deal with, especially for new engineers who must take over an existing project.

When implementing the hierarchical addressing scheme for network design, it is important to have a good understanding of the math behind it and how route summarization works. Below is an example of combining a group of Class C addresses into an aggregate address. Summarization is a way to represent several networks in a single summarized route. In a real-world scenario, a subnet calculator can be used to automatically generate the most appropriate aggregate route from a group of addresses.

In this example, the Enterprise Campus backbone (Core Layer) submodule is connected to several other buildings. In a single building, there are several networks in use:

  • A network for the server farm
  • A network for the management area
  • A few networks for the Access Layer submodule (that serve several departments)

The goal is to take all of these networks and aggregate them into one single address that can be stored at the edge distribution submodule or at the Core Layer of the network. The first thing to understand when implementing a hierarchical addressing structure is the use of contiguous blocks of IP addresses. In this example, the addresses 192.100.168.0 through 192.100.175.0 are used:

192.100.168.0    11000000.01100100.10101000.00000000
192.100.169.0    11000000.01100100.10101001.00000000
192.100.170.0    11000000.01100100.10101010.00000000
192.100.171.0    11000000.01100100.10101011.00000000
192.100.172.0    11000000.01100100.10101100.00000000
192.100.173.0    11000000.01100100.10101101.00000000
192.100.174.0    11000000.01100100.10101110.00000000
192.100.175.0    11000000.01100100.10101111.00000000

In this scenario, network design summarization will be based on a location where all of the uppermost bits are identical. Looking at the first address above, the first 8 bits equal the decimal 192, the next 8 bits equal the decimal 100, and the last 8 bits are represented by 0. The only octet that changes is the third one; to be more specific, only the last 3 bits in that octet change when going through the address range.

The summarization process requires writing the third octet in binary format and then looking for the common bits on the left side. In the example above, all of the bits are identical up to the last three bits in the third octet. With 21 identical bits, all of the addresses will be summarized to 192.100.168.0/21.
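
The result can be verified programmatically: Python's ipaddress.collapse_addresses collapses the eight contiguous /24s from the table above into the same /21.

```python
import ipaddress

# The eight contiguous /24 networks 192.100.168.0 through 192.100.175.0.
networks = [ipaddress.ip_network(f"192.100.{octet}.0/24") for octet in range(168, 176)]

summary = list(ipaddress.collapse_addresses(networks))
print(summary)   # [IPv4Network('192.100.168.0/21')]
```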

After deciding on a hierarchical addressing design and understanding the math involved in this process, the next approach will be a modular and scalable design, which will involve deciding how to divide the organization (i.e., Enterprise Network modules, submodules, and remote locations) in terms of addressing. This includes deciding whether to apply a hierarchical address to each module/submodule or to the entire Enterprise Network.

Another aspect to consider is the way summarization may affect the routing protocols used. Summarization usually benefits routing because it reduces the size of the routing tables, lowers processor and memory utilization, and offers much faster convergence of the routed network. The following are the most important advantages of using route aggregation:

  • Lower device processing overhead
  • Improved network stability
  • Ease of future growth

Figure 1 below offers another example of a large organization network design using a campus with multiple buildings:


Figure 1 – Network Design Addressing for a Large Organization with Multiple Buildings

The internal private addressing will use the popular 10.0.0.0/8 range. Within the organization’s domain, two separate building infrastructures (on the same campus or in remote buildings) will be aggregated using the 10.128.0.0/16 and 10.129.0.0/16 ranges.

     The 10.128.0.0 and 10.129.0.0 ranges are used instead of 10.1.0.0 or another lower second octet because many organizations already use those lower octet ranges, and there would be problems if the company decided to buy another company that uses one of those ranges. This minimizes the chances of overlap when merging other infrastructures with the network.

Going deeper within each building, the addressing scheme can be broken down within different departments, using the 10.128.1.0 and 10.128.2.0 or the 10.129.1.0 and 10.129.2.0 networks with a 24-bit mask. Because of the scalable design, another tier could be included above the departmental addresses, for example within the 10.129.0.0/21 range. Moving beyond that point leads to the Enterprise Edge module and its various submodules (e.g., e-commerce, Internet connectivity, etc.) that can have point-to-point connections to different ISPs. Variable Length Subnet Masking (VLSM) can be used to break down the addressing scheme further.
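
A brief sketch of these tiers with Python's ipaddress module; the department /24 breakdown mirrors the ranges above, and the /21 tier uses the example value mentioned in the text.

```python
import ipaddress

building_a = ipaddress.ip_network("10.128.0.0/16")
building_b = ipaddress.ip_network("10.129.0.0/16")

departments_a = list(building_a.subnets(new_prefix=24))   # 10.128.0.0/24 ... 10.128.255.0/24
print(departments_a[1], departments_a[2])                 # 10.128.1.0/24 10.128.2.0/24

# A coarser tier that still summarizes into its building block:
tier = ipaddress.ip_network("10.129.0.0/21")
print(tier.subnet_of(building_b))                         # True
```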

To summarize, from a network designer standpoint, it is very important to tie the addressing scheme to the modular Enterprise Network design. The advantages of using route summarization and aggregation are numerous, but the most important ones are as follows:

  • Isolates changes to the topology to a particular module
  • Isolates routing updates to a particular module
  • Fewer updates need to be sent up the hierarchy (preventing all of the updates from going through the entire network infrastructure)
  • Lower overall recalculation of the entire network when links fail (e.g., a change in a routing table does not converge to the entire network); for example, route flapping in a particular department is constrained within the specific department and does not have a cascading effect on other modules (considering the example above)
  • Narrow scope of route advertisement propagation
  • Summarized module is easier to troubleshoot
     The ultimate route summary is the default route, which summarizes everything. This can be created automatically by routing protocols or configured manually as a static default route.

Routing Protocols and Summarization for Network Design

Different routing protocols handle summarization in different manners. Routing Information Protocol (RIP) version 2 (RIPv2) has classful origins (it summarizes by default), although it can act in a classless manner because it sends subnet mask information in the routing updates.

Because of its classful origins, RIPv2 performs automatic summarization on classful boundaries, so any time RIPv2 is advertising a network across a different major network boundary, it summarizes to the classful mask without asking for permission. This can lead to big problems in the routing infrastructure of discontiguous networks. RIPv2’s automatic summarization behavior should be disabled in many situations to gain full control of the network routing operations.

In addition to the automatic summarization feature, RIPv2 allows manual route summarization to be configured at the interface level. The recommendation is to disable automatic summarization and configure manual summarization where necessary. RIPv2 does not, however, allow summarization to a prefix shorter than the classful network mask (supernetting). The next example involves the following prefixes:

210.22.10.0/24

210.22.9.0/24

210.22.8.0/24

RIPv2 will not allow these networks to be summarized into a /22 because they are Class C addresses, and a /22 is shorter than the classful /24 mask, which would mean summarizing beyond the class boundary. This is a limitation due to the classful origin of RIP.

EIGRP functions similarly to RIPv2 regarding summarization, as EIGRP also has classful origins because it is an enhanced version of the Interior Gateway Routing Protocol (IGRP). EIGRP automatically summarizes on classful boundaries and, just like with RIPv2, this feature can be disabled and manual summarization can be configured on specific interfaces. The biggest issue with this behavior is that there might be discontiguous networks, and this could cause problems with any of the automatic summarization mechanisms described.


Figure 2 – Discontiguous Network Issue

An example of a discontiguous network issue is illustrated in Figure 2 above. The 172.16.10.0/24 subnet is on the left side and the 172.16.12.0/24 subnet is on the right side. These networks are divided by a different major network in the middle (172.20.60.0/24), which causes a problem. Applying EIGRP in this scenario, automatic summarization will be enabled by default, with summarization toward the middle of the topology (172.16.0.0) from both sides, and this will cause great confusion to that device. As a result of this confusion, the device might send one packet to the left side and one packet to the right side, so there will be packets going in the wrong direction to get to a particular destination. To solve this issue, the automatic summarization feature should be disabled in discontiguous networks. Another possible fix to this problem is designing the addressing infrastructure better so that no discontiguous subnets are present.

     RIPv1 and IGRP are classful protocols and cannot disable automatic summarization; if they cannot be replaced with a modern classless routing protocol, discontiguous network issues can be worked around using static routes.

OSPF does not have an automatic summarization feature but two different forms of summarization can be designed:

  • Summarization between the internal areas
  • Summarization from another separate domain

Two separate commands are used to handle these different summarization types. Summarizing from one area to another involves a Type 3 Link-State Advertisement (LSA). Summarizing from another domain involves two types of LSAs in the summarization process: a Type 4 LSA, which advertises the existence of the summarizing device (e.g. the OSPF Autonomous System Border Router – ASBR), and the actual summary of information, carried in a Type 5 LSA.

Border Gateway Protocol (BGP) uses a single type of summarization called aggregation, which is performed within the routing process. BGP used to summarize automatically on classful boundaries, just like RIPv2 and EIGRP, but this behavior is disabled by default beginning with the 12.2(8)T IOS code.

Variable Length Subnet Masking and Structured Addressing

A structured addressing plan builds on Variable Length Subnet Masking (VLSM), a technique that all modern routing protocols can easily handle. VLSM provides efficiency because it supports an addressing plan that does not waste address space (i.e., it assigns only the number of addresses needed for a certain subnetwork). VLSM also accommodates efficient summarization. The most important benefits of VLSM and summarization include the following:

  • Less CPU utilization on network devices
  • Less memory utilization on network devices
  • Smaller convergence domains


Figure 3 – VLSM Example (Part 1)

VLSM works by taking unused subnets of the address space and subnetting them further. Figure 3 above starts with the major network 172.16.0.0/16 (not shown in the example), which is initially subnetted using a 24-bit mask, resulting in two large subnets on the two router interfaces (Fa0/0 and Fa0/1): 172.16.1.0/24 and 172.16.2.0/24, respectively. Two key formulas can be used when calculating the number of subnets and hosts with VLSM. An example of the subnet and host split in the address is shown below in Figure 4:


Figure 4 – VLSM Subnet and Host Split

The formula for calculating the number of subnets is 2^s, where "s" is the number of borrowed subnet bits. In the example above, the network was extended from a /16 to a /24 mask by borrowing 8 bits, so 2^8 = 256 subnets can be created with this scheme.

The formula for calculating the number of hosts that can exist in a particular subnet is 2^h − 2, where "h" is the number of host bits. Two addresses are subtracted from 2^h because the all-zeros host portion represents the subnet (network) address and the all-ones host portion represents the broadcast address for the specific segment, as illustrated below (and verified in the short sketch that follows the list):

  • Network addresses (all zeros in the host portion): 172.16.1.0 and 172.16.2.0
  • Broadcast addresses (all ones in the host portion): 172.16.1.255 and 172.16.2.255
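As a quick check of these two formulas, the following sketch (assuming Python's standard ipaddress module) reproduces the numbers from the example above: extending a /16 to /24 borrows 8 bits for 2^8 = 256 subnets, and each /24 leaves 8 host bits for 2^8 − 2 = 254 usable hosts:

```python
import ipaddress

major = ipaddress.ip_network("172.16.0.0/16")

# 2^s subnets: extending the mask from /16 to /24 borrows s = 8 bits
subnets = list(major.subnets(new_prefix=24))
print(len(subnets))                      # 256 == 2**8

# 2^h - 2 usable hosts: a /24 leaves h = 8 host bits
example = ipaddress.ip_network("172.16.1.0/24")
print(example.network_address)           # 172.16.1.0   (all-zeros host portion)
print(example.broadcast_address)         # 172.16.1.255 (all-ones host portion)
print(example.num_addresses - 2)         # 254 == 2**8 - 2
```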

After subnetting the 172.16.0.0/16 address space into 172.16.1.0/24 and 172.16.2.0/24, further subnetting might be needed to accommodate smaller networks. This can be achieved by taking one of the next available subnets, for example 172.16.3.0/24, and subnetting it again, which creates additional subnets such as those below:

172.16.3.32/27

172.16.3.64/27

The /27 subnets are suitable for smaller networks and can accommodate the number of machines in those areas. The number of hosts that can be accommodated is 2^5 − 2 = 30.
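The same module can list the /27 subnets carved out of 172.16.3.0/24 and confirm the 2^5 − 2 = 30 usable hosts per /27 (a small sketch reusing the prefixes from the example above):

```python
import ipaddress

block = ipaddress.ip_network("172.16.3.0/24")
small = list(block.subnets(new_prefix=27))
print(small[1], small[2])          # 172.16.3.32/27 172.16.3.64/27
print(small[1].num_addresses - 2)  # 30 usable hosts per /27
```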


Figure 5 – VLSM Example (Part 2)

A subnet might be needed for the point-to-point link that will connect two network areas, and this can be accomplished by further subnetting one of the available subnets in the /27 scheme, for example 172.16.3.96/27. This can be subnetted with a /30 to obtain 172.16.3.100/30, which offers just two host addresses: 172.16.3.101 and 172.16.3.102. This scheme perfectly suits the needs for the point-to-point connections (one address for each end of the link). By performing VLSM calculations, subnets that can accommodate just the right number of hosts in a particular area can be obtained.
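The /30 carve-out for the point-to-point link can be verified the same way; the sketch below (again assuming Python's ipaddress module and the prefixes from the example) shows that 172.16.3.100/30 leaves exactly the two usable addresses mentioned above:

```python
import ipaddress

spare = ipaddress.ip_network("172.16.3.96/27")
p2p_candidates = list(spare.subnets(new_prefix=30))
print(p2p_candidates[1])               # 172.16.3.100/30
print(list(p2p_candidates[1].hosts()))
# [IPv4Address('172.16.3.101'), IPv4Address('172.16.3.102')]
```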

Private versus Public Addressing

As a network design expert, after determining the number of necessary IP addresses, the next big decision is whether private, public, or a combination of private and public addresses will be used. Private internetwork addresses are defined in RFC 1918 and are used internally within the network. In practice, because of the limited number of public IPv4 addresses, NAT techniques are usually used to translate private internal addresses to external public addresses. Internally, one of the following three ranges of addresses can be used:

  • 10.0.0.0/8 (10.0.0.0 to 10.255.255.255), usually used in large organizations
  • 172.16.0.0/12 (172.16.0.0 to 172.31.255.255), usually used in medium organizations
  • 192.168.0.0/16 (192.168.0.0 to 192.168.255.255), usually used in small organizations

Any address that falls within the three private address ranges cannot be routed on the Internet. Service Provider Edge devices usually have policies and ACLs configured to ensure that any packet containing a private address that arrives at an inbound interface will be dropped.
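When writing such filters or audit scripts, RFC 1918 membership can be checked programmatically; the following is a minimal sketch using Python's ipaddress module (the sample addresses are arbitrary):

```python
import ipaddress

for addr in ("10.45.2.7", "172.20.60.1", "192.168.1.10", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public/global")
# 10.45.2.7 private
# 172.20.60.1 private
# 192.168.1.10 private
# 8.8.8.8 public/global
```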

All of the other addresses are public addresses that are allocated to ISPs or other point of presence nodes on the Internet. ISPs can then assign Class A, B, or C addresses to customers to use on devices that are exposed to the Internet, such as:

  • Web servers
  • DNS servers
  • FTP servers
  • Other servers that run public-accessible services
     Customers can also be assigned IP addresses by one of the following five Regional Internet Registries (RIRs), which operate under the Internet Assigned Numbers Authority (IANA):

  • AFRINIC (Africa)
  • APNIC (Asia-Pacific)
  • ARIN (North America)
  • LACNIC (Latin America and parts of the Caribbean)
  • RIPE NCC (Europe, the Middle East, and Central Asia)

When deciding to use private, public, or a combination of private and public addresses for your network design, one of the following four types of connections will be used:

  • No Internet connectivity
  • Only one public address (or a few) for users to access the Web
  • Web access for users and public-accessible servers
  • Every end-system has a public IP address

No Internet connectivity would imply that all of the connections between the locations are private links and the organization would not be connected to the Internet in any of its nodes. In this case, there is no need for any public IP addresses because the entire address scheme can be from the private address ranges.

Another situation would be the one in which there is Internet connectivity from all of the organization’s locations but there are no servers to run public-accessible services (e.g., Web, FTP, or others). In this case, a public IP address is needed that will allow users to access the Web. NAT can be used to translate traffic from the internal network to the outside network, so the internal networks contain only private IP addresses and the external link can use just one public address.

The third scenario is one of the most common, especially when considering the growth of enterprise networking. This involves having user Internet connectivity (just like in the previous scenario) but also having public-accessible servers. Public IP addresses must be used to connect to the Internet and access specific servers (e.g., Web, FTP, DNS, and others). The internal network should use private IP addresses and NAT to translate them into public addresses.

The least likely scenario is the one in which every end-system has a public IP address and is accessible from the global Internet. This is a dangerous situation because the entire network is exposed to Internet access, which implies high security risks. To mitigate these risks, strong firewall protection policies must be implemented in every location. In addition to the security issues, this scenario is also inefficient because many public IP addresses are wasted, which is very expensive. All of these factors make this a scenario to avoid in modern networks.

The two most common solutions from the scenarios presented above are as follows:

  • One or a few public addresses for users to access the Web
  • A few public addresses that provide Web access for users and public-accessible servers

Both scenarios imply using private internal addresses and NAT to reach outside networks.

For a deeper analysis of these aspects, it is useful to focus on how they map to the Cisco Enterprise Architecture model and where private and public addresses should be used, which is illustrated in Figure 6 below:


Figure 6 – Cisco Enterprise Architecture Model Addressing Scheme

First, in the figure above, assume that there is some kind of Internet presence in the organization that offers services either to internal users in the Access Layer submodule or to different public-accessible servers (e.g., Web, FTP, or others) in the Enterprise Edge module. Regardless of what modules receive Internet access, NAT is run in the edge distribution submodule to translate between the internal addressing structure used in the Enterprise Campus and the external public IP addressing structure. NAT mechanisms can also be used in the Enterprise Edge module.

The 10.0.0.0/8 range is used internally, both in the Enterprise Campus module and in the network management submodule. The Enterprise Campus devices that use private IP addresses include all of its component submodules:

  • Access Layer
  • Distribution Layer
  • Server farm

The edge distribution submodule will use a combination of private and public IP addresses. The Enterprise Edge module will use a combination of private and public addresses, depending on each submodule. The remote access submodule can use a combination of private and public addresses but it will need to support some kind of NAT techniques. The WAN submodule can use either private addresses (when connecting to other remote sites) or public addresses (when connected to outside locations for a backup solution).

     When connecting to the outside world using public addresses, consider implementing efficient security features.

Address Planning

An important issue in the IP addressing design is how the addresses will be assigned. One way would be to use static assigning and the other way would be to use dynamic protocols such as the Dynamic Host Configuration Protocol (DHCP). Deciding on the address allocation method requires answering the following questions:

  • How many end-systems are there?

For a small number of hosts (less than 50), consider using statically/manually assigned addresses; however, if there are several hundred systems, use DHCP to speed up the address allocation process (i.e., avoid manual address allocation).

  • What does the security policy demand?

Some organizations demand the use of static IP addressing for every host or for every node to create a more secure environment. For example, an outsider cannot plug in a station to the network, automatically get an IP address, and have access to internal resources. The organization’s security policy might demand static addressing, regardless of the network size.

  • What is the likelihood of renumbering?

This includes the possibility of acquisitions and mergers in the near future. If the likelihood of renumbering is high, DHCP should be used.

  • Are there any high availability demands?

If the organization has high availability demands, DHCP should be used in a redundant server architecture.

In addition, static addressing should always be used for certain devices in certain modules:

  • Corporate servers
  • Network management workstations
  • Standalone servers in the Access Layer submodule
  • Printers and other peripheral devices in the Access Layer submodule
  • Public-accessible servers in the Enterprise Edge module
  • Remote Access Layer submodule devices
  • WAN submodule devices

Role-Based Addressing

From a Cisco standpoint, the best way to implement role-based addressing is to map it to the corporate structure or to the roles of the servers or end-user stations. Using an example based on the 10.0.0.0/8 network, consider the first octet to be the major network number, the second octet to be the number assigned to the closet (i.e., the server room or wiring closets throughout the organization), the third octet to be the VLAN number, and the last octet to be the host number. An address of 10.X.Y.Z would imply the following octet definitions:

  • X = closet numbers
  • Y = VLAN numbers
  • Z = host numbers

This is an easy mechanism that can be used with Layer 3 closets. Role-based addressing avoids binary arithmetic, so if there are more than 256 closets, for example (more than can be identified in the second octet), some bits can be borrowed from the beginning of the third octet because there will not be 256 VLANs for every switch. Thereafter, advanced binary arithmetic or bit splitting can be used to adapt the addressing structure to specific needs. Bit splitting can be used with routing protocols, as well as route summarization, to help number the necessary summarizable blocks. In this case, addresses will be split into a network part, an area part, a subnet part, and a host part.

Network designers might not always have the luxury of using the summarizable blocks around simple octet boundaries and sometimes this is not even necessary, especially when some bit splitting techniques would better accommodate the organization and the role-based addressing scheme. This usually involves some binary math, such as the example below:

172.16.aaaassss.sshhhhhh

The first octet is 172 and the second octet is 16. The four "a" bits in the third octet identify the area and the six "s" bits identify the subnet or VLAN. Six "h" bits are reserved for hosts in the fourth octet. This offers 2^6 − 2 = 62 hosts per VLAN or subnet (two host addresses are reserved: the all-zeros network address and the all-ones broadcast address).

This logical scheme will result in the following address ranges, based on the network areas:

  • Area 0: 172.16.0.0 to 172.16.15.255
  • Area 1: 172.16.16.0 to 172.16.31.255
  • Area 2: 172.16.32.0 to 172.16.47.255
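A small sketch (assuming Python's ipaddress module) verifies this split: the four area bits turn 172.16.0.0/16 into /20 blocks that match the area ranges listed above, and the six subnet bits plus six host bits make each VLAN a /26 with 2^6 − 2 = 62 usable hosts:

```python
import ipaddress

site = ipaddress.ip_network("172.16.0.0/16")

# 4 "a" bits -> one /20 block per area
areas = list(site.subnets(prefixlen_diff=4))
for i in (0, 1, 2):
    a = areas[i]
    print(f"Area {i}: {a.network_address} to {a.broadcast_address}")
# Area 0: 172.16.0.0 to 172.16.15.255
# Area 1: 172.16.16.0 to 172.16.31.255
# Area 2: 172.16.32.0 to 172.16.47.255

# 6 "s" bits -> /26 subnets, each leaving 6 host bits
vlan = next(areas[1].subnets(new_prefix=26))
print(vlan, vlan.num_addresses - 2)   # 172.16.16.0/26 62
```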

Subnet calculations should be made to ensure that the right type of bit splitting is used to represent the subnets and VLANs. Remember that a good technique is to take the last subnet in every area and divide it into /30 subnets that can be used for WAN or point-to-point links. This maximizes the address space, so each WAN link consumes only two usable addresses with a /30 (255.255.255.252) subnet mask.

     Binary and subnet calculations can be also achieved using subnet calculator software that can be found on a variety of Internet sites.

Most organizations have their addressing schemes mapped out onto spreadsheets or included in different reports and stored as part of their documentation for the network topology. This should be done very systematically and hierarchically, regardless of the addressing scheme used. Always take into consideration the possible growth of the company through mergers or acquisitions.

Network Address Translation Applications

Although the goal with IPv6 is to avoid the need for NAT, NAT for IPv4 will still be used for a while. NAT is one of the mechanisms used in the transition from IPv4 to IPv6, so it will not disappear any time soon. In addition, it is a very functional tool for working with IPv4 addressing. NAT and PAT (or NAT Overload) are usually carried out on ASA devices, which have powerful tools to accomplish these tasks in many forms:

  • Dynamic NAT
  • Identity NAT

A recommended best practice is to try to avoid using NAT on internal networks, except for situations in which NAT is required as a stop-gap measure during mergers or migrations. NAT should not be performed between the Access Layer and the Distribution Layer or between the Distribution Layer and the Core Layer. Following this recommendation will prevent address translation between OSPF areas, for example.

Organizations with a merger in progress usually use the same internal network addressing schemes and these can be managed with NAT overlapping techniques (also referred to as bidirectional NAT), which translates between the two organizations when they have an overlapping internal IP addressing space that uses RFC 1918 addressing.

If there are internal servers or servers in the DMZ that are reached using translated addresses, it is a good practice to isolate these servers into their own address space and VLAN, possibly using private VLANs. NAT is often used to support content load balancing servers, which usually must be isolated by implementing address translation.

NAT can also be used in the data center submodule to support a management VLAN that is Out-of-Band from production traffic. It should also be implemented on devices that cannot route or cannot define a gateway for the management VLAN. This results in smaller management VLANs, not a single large management VLAN that covers the entire data center. In addition, large companies or Internet entities can exchange their summary routes, and then they can translate with NAT blocks into the network. This will offer faster convergence but the downside is an increased troubleshooting process because of the use of NAT or PAT.

PAT is harder to troubleshoot because one or a few IP addresses are used to represent hundreds or even thousands of internal hosts, all using TCP and UDP ports to create logical sockets. This increases the complexity of the troubleshooting process because it is difficult to know what IP address is assigned to a particular host. Each host uses a shared IP address and a port number. If the organization is connected to several different partners or vendors, each partner can be represented by a different NAT block, which can be translated in the organization.

Network Design for IPv6 Addressing

CCDP certification requires a solid understanding of the IP version 6 specifications, addressing, and some of the design issues. The IPv6 protocol is based on RFC 2460. From a network designer standpoint, the most important features offered by IPv6 include the following:

  • A 128-bit address space
  • Supports hierarchical addressing and auto-configuration
  • Every host can have a globally unique IPv6 address; no need for NAT
  • Hosts can have multiple addresses
  • Efficient fixed header size for IPv6 packets
  • Enhanced security and privacy headers
  • Improved multicasting and QoS
  • Dedicated IPv6 routing protocols: RIPng, OSPFv3, Integrated IS-ISv6, BGP4+
  • Every major vendor supports IPv6

IPv6 is a mechanism that was created to overcome the limitations of the current IPv4 standard. One of the major shortcomings of IPv4 is that it uses a 32-bit address space. Because of the classful system and the growth of the Internet, the 32-bit address space has proven to be insufficient. The key factors that led to the evolution of IPv6 were large institutions, enterprise networks, and ISPs that demanded a larger pool of IP addresses for different applications and services.

Address Representation

IPv4 uses a 32-bit address space, so it offers around 4.2 billion possible addresses, including the multicast, experimental, and private ones. The IPv6 address space is 128 bits, so it offers around 3.4×10^38 possible addressable nodes. The address space is so large that there are about 5×10^28 addresses per person in the world. IPv6 also gives every user multiple global addresses that can be used for a wide variety of devices (e.g., PDAs, cell phones, and IP-enabled devices). IPv6 addresses will last a very long time. An IPv6 packet contains the following fields, as depicted in Figure 7 below:


Figure 7 – IPv6 Packet Fields  

  • Version (4 bits) – Identifies the IP version (6 in this case).
  • Traffic Class (8 bits) – Similar to the ToS byte in the IPv4 header; provides QoS marking functionality.
  • Flow Label (20 bits) – Used to identify and classify packet flows.
  • Payload Length (16 bits) – The size of the packet payload.
  • Next Header (8 bits) – Similar to the Protocol field in the IPv4 header; defines the type of traffic contained within the payload and which header to expect.
  • Hop Limit (8 bits) – Similar to the TTL field in the IPv4 header; prevents endless loops.
  • Source IP Address (128 bits) – Source logical IPv6 address.
  • Destination IP Address (128 bits) – Destination logical IPv6 address.
  • Data (variable) – Transport Layer data.

Knowing what is in the IPv4 header is important from a network designer standpoint because many of the fields in the header are used for features such as QoS or protocol type. The IPv6 header offers additional functionality, even though some fields from the IPv4 header have been eliminated, such as the Fragment Offset field and the Flags field.

The Version field, as in the IPv4 header, offers information about the IP protocol version. The Traffic Class field is used to tag the packet with the class of traffic it uses in DiffServ mechanisms. IPv6 also adds a Flow Label field, which can be used for QoS mechanisms by tagging a flow; this can be used for multilayer switching techniques and offers faster packet switching on network devices. The Payload Length field is similar to the Total Length field in IPv4, except that it counts only the payload that follows the fixed IPv6 header.

The Next Header is an important IPv6 field. The value of this field determines the type of information that follows the basic IPv6 header. It can be a Transport Layer packet like TCP or UDP or it can designate an extension header. The Next Header field is the equivalent of the Protocol field in IPv4. The next field is Hop Limit, which designates the maximum number of hops an IP packet can traverse. Each hop/router decrements this field by one, so this is similar to the TTL field in IPv4. There is no Checksum field in the IPv6 header, so the router can decrement the Hop Limit field without recalculating the checksum. Finally, there is the 128-bit source address and the 128-bit destination address.

In addition to these fields there are a number of extension headers. The extension headers and the data portion of the packet will follow the eight fields covered thus far. The total length of an extension header’s chain can be variable because the number of extension headers is not fixed. There are different types of extension headers, such as the following:

  • Routing header
  • Fragmentation header
  • Authentication header
  • IPsec ESP header
  • Hop-by-Hop Options header

An IPv4 address consists of 32 bits represented as four octets in dotted decimal format. An IPv6 address, on the other hand, consists of 128 bits represented as eight groups of 16 bits written in hexadecimal and separated by colons, for example:

2001:43aa:0000:0000:11b4:0031:0000:c110.

Considering the complex format of IPv6 addresses, some rules were developed to shorten them:

  • One or more successive 16-bit groups that consist of all zeros can be omitted and represented by two colons (::).
  • If a 16-bit group begins with one or more zeros, the leading zeros can be omitted.

Considering the IPv6 example above, here are its shortened representations:

2001:43aa::11b4:0031:0000:c110

2001:43aa::11b4:0031:0:c110

2001:43aa::11b4:31:0:c110
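These shortening rules are what libraries such as Python's ipaddress module apply when compressing an address; a minimal sketch using the same example address produces the fully shortened form shown above:

```python
import ipaddress

addr = ipaddress.ip_address("2001:43aa:0000:0000:11b4:0031:0000:c110")
print(addr.compressed)   # 2001:43aa::11b4:31:0:c110
print(addr.exploded)     # 2001:43aa:0000:0000:11b4:0031:0000:c110
```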

     The double colon (::) notation can appear only one time in an IPv6 address.

In a mixed IPv4 and IPv6 environment, the IPv4 address can be embedded in the IPv6 address, specifically in the last 32 bits.

The prefix portion of an IPv6 address is the number of contiguous bits that represent the network. For example, the address 2001:0000:0000:0ABC:0000:0000:0000:0000/60 can be represented as 2001:0:0:ABC::/60.

Several types of IPv6 addresses are required for various applications. When compared to IPv4 address types (i.e., unicast, multicast, and broadcast), IPv6 presents some differences: special multicast addresses are used instead of broadcast addressing, and a new address type was defined called anycast.

  • Aggregatable Global Unicast (2000::/3) – Public addresses used for host-to-host communications; equivalent to IPv4 unicast.
  • Multicast (FF00::/8) – One-to-many and many-to-many communications; equivalent to IPv4 multicast.
  • Anycast (same range as unicast) – The same anycast address can be assigned to interfaces on a group of devices; the device closest to the source responds; used for applications such as load balancing, traffic optimization for a particular service, and redundancy.
  • Link-local Unicast (FE80::/10) – Communications on the connected link; assigned to all device interfaces and used only for local link traffic.
  • Solicited-node Multicast (FF02::1:FF00:0/104) – Used for neighbor solicitation.

Anycast addresses are generally assigned to servers located in different geographical locations. By connecting to the anycast address, users reach the closest server, which is why anycast addresses are also called one-to-nearest addresses. The IPv6 multicast address is a one-to-many address that identifies a set of hosts that will receive the packet; this is similar to an IPv4 Class D multicast address. IPv6 multicast also supersedes the broadcast function of IPv4: IPv6 has no broadcast, and the equivalent functionality is provided by all-nodes multicast. The following well-known multicast addresses should be remembered:

  • FF02::1 = all-nodes multicast address (the IPv6 equivalent of an IPv4 broadcast)
  • FF02::2 = all-routers multicast address (used, for example, by router discovery on the local link)

Another important multicast address is the solicited node multicast address, which is created automatically and placed on the interface. This is used by the IPv6 Neighbor Discovery process to improve upon IPv4 ARP. A special IPv6 address is 0:0:0:0:0:0:0:1, which is the IPv6 loopback address, equivalent to the 127.0.0.1 IPv4 loopback address. This can also be represented as ::1/128.

Link-local addresses are significant only to nodes on a single link; routers do not forward packets with a link-local source or destination address beyond the local link. Link-local addresses can be configured automatically or manually. Global unicast addresses are globally unique and routable and are defined in RFC 2374 and RFC 3587.


Figure 8 – IPv6 Global Unicast Address Format

Based on the IPv6 global unicast address format shown in Figure 8 above, the first 23 bits represent the registry, the first 32 bits represent the ISP prefix, the first 48 bits are the site prefix, and /64 represents the subnet prefix. The remaining bits are allocated to the interface ID.

The global unicast address and the anycast address share the same format; anycast addresses are allocated from the unicast address space. To devices that are not configured for anycast, these addresses appear as ordinary unicast addresses.

IPv6 global unicast addressing allows aggregation upward to the ISP. A single interface may be assigned multiple addresses of any type (i.e., unicast, anycast, and multicast). However, every IPv6-enabled interface must have a loopback address and a link-local address.

The IPv6 global unicast address is structured as presented above in Figure 8 to facilitate aggregation and reduce the number of routes in the global routing tables, just like with IPv4. Global unicast addresses are defined by a global routing prefix, a subnet ID, and an interface ID. Typically, a global unicast address is made up of a 48-bit global routing prefix and a 16-bit subnet identifier.
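As an illustration of this structure, the sketch below (a hedged example; 2001:db8::/32 is the documentation prefix and the site and subnet values are made up) shows how a /64 subnet prefix relates to its /48 site prefix using Python's ipaddress module:

```python
import ipaddress

# Hypothetical subnet assigned to one VLAN: 48-bit site prefix + 16-bit subnet ID
subnet = ipaddress.ip_network("2001:db8:acad:0001::/64")

site = subnet.supernet(new_prefix=48)     # the 48-bit global routing (site) prefix
print(site)                               # 2001:db8:acad::/48
print(subnet.num_addresses == 2 ** 64)    # True: 64 bits remain for the interface ID
```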

IPv6 Mechanisms

As with IPv4, there are different mechanisms available for IPv6, and the most important of these include the following:

  • IPv6 Neighbor Discovery (ND)
  • Name resolution
  • Path Maximum Transmission Unit (MTU) Discovery
  • IPv6 security
  • IPv6 routing protocols

The Internet Control Message Protocol (ICMP) was modified to support IPv6 and is one of the most important mechanisms that support IPv6 functionality. ICMPv6 uses a Next Header number of 58. ICMP provides informational messages (e.g., Echo Request and Echo Reply) and error messages (e.g., Destination Unreachable, Packet Too Big, and Time Exceeded). IPv6 also uses ICMPv6 to determine important parameters, such as neighbor availability, Path MTU Discovery, destination addresses, or port reachability.

IPv6 uses a Neighbor Discovery protocol (RFC 2461), unlike IPv4, which uses the Address Resolution Protocol (ARP). IPv6 hosts use ND to implement “plug and play” functionality and to discover all other nodes on the same link. ND is also used in checking for duplicate addresses and finding the routers on a specific link. ND uses the ICMPv6 message structure in its operations and its type codes are 133 through 137:

  • Router Solicitation
  • Router Advertisement
  • Neighbor Solicitation
  • Neighbor Advertisement
  • Redirect

Neighbor Discovery goes beyond the capabilities of ARP, as it performs many functions:

  • Address Auto-Configuration (a host can find its full address without using DHCP)
  • Duplicate Address Detection (DAD)
  • Prefix Discovery (learns prefixes on local links)
  • Link MTU Discovery
  • Hop Count Discovery
  • Next-Hop Determination
  • Address Resolution
  • Router Discovery (allows routers to find other local routers)
  • Neighbor Reachability Detection
  • Redirection
  • Proxy Behavior
  • Default Router Selection

Many of the features mentioned above have IPv4 equivalencies but some of them are unique to IPv6 and provide additional functionalities.

One of the important features made possible by the ND process is DAD, as defined in RFC 4862. This is accomplished through Neighbor Solicitation messages that are exchanged before the interface is allowed to use a global unicast address on the link, and this can determine whether the particular address is unique. The Target Address field in these specific packets is set to the IPv6 address for which duplication is being detected and the source address is set to unspecified (::).

The IPv6 stateless Auto-Configuration feature avoids using DHCP to maintain the address assignment mapping. This is a very low-overhead way to disseminate addresses, and it also accommodates low-overhead re-addressing. In this process, the router sends a Router Advertisement message to advertise the prefix and its ability to act as a default gateway. The host receives this information and uses the EUI-64 format to generate the host portion of the address. After the host generates the address, it starts the DAD process to ensure that the address is unique on the network.
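The EUI-64 step can be sketched in a few lines; the function below (an illustrative helper, not part of any standard library, and the MAC address and prefix are made up) builds the 64-bit interface ID by splitting the MAC address, inserting FFFE in the middle, and flipping the universal/local bit, then appends it to an advertised /64 prefix:

```python
import ipaddress

def eui64_interface_id(mac: str) -> str:
    """Build the EUI-64 interface identifier from a MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                 # flip the universal/local (U/L) bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]    # insert FF:FE in the middle
    return ":".join(f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2))

# Hypothetical values: prefix learned from a Router Advertisement, local MAC address
prefix = "2001:db8:acad:1"
mac = "00:1a:2b:3c:4d:5e"
address = ipaddress.ip_address(f"{prefix}:{eui64_interface_id(mac)}")
print(address)   # 2001:db8:acad:1:21a:2bff:fe3c:4d5e
```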

IPv4 performs Name Resolution by using A records in the DNS. RFC 3596 offers a new DNS record type to support the transition to IPv6 Name Resolution, which is AAAA (Quad A). The Quad A record will return an IPv6 address based on a given domain name.

IPv6 does not allow packet fragmentation within the network (only the source of the packet may fragment), so the MTU of every link in an IPv6 implementation must be 1280 bytes or greater. The ICMPv6 Packet Too Big error message is used to determine the path MTU, because nodes along the path send this message to the sending host if the packet is larger than the outgoing interface MTU.

DHCPv6 is an updated version of DHCP that offers dynamic address assignment for IPv6 hosts. DHCPv6 is described in RFC 3315 and provides the same functionality as DHCP, but it offers more control over address assignment and simplifies renumbering.

IPv6 also has some security mechanisms. Unlike IPv4, IPv6 natively supports IPsec (an open security framework) with two mechanisms: the Authentication Header (AH) and the Encapsulating Security Payload (ESP).

The support for IPsec in IPv6 is mandatory, unlike with IPv4. Because it is mandatory on all IPv6 nodes, secure communication can be established with any node in the network. An example of IPsec being leveraged in IPv6 is OSPFv3, which carries out its authentication using IPsec. Another example of the IPsec IPv6 mechanism is the IPsec Site-to-Site Virtual Tunnel Interface, which allows easy creation of virtual tunnels between two IPv6 routers to quickly form a site-to-site secured Virtual Private Network (VPN).

The following new routing protocols were developed for IPv6:

  • RIPng (RIP next generation)
  • OSPFv3
  • Integrated Intermediate System-to-Intermediate System (IS-IS) for IPv6
  • EIGRP for IPv6
  • BGP4 multiprotocol extensions for IPv6

Transitioning from IPv4 to IPv6

Because IPv6 almost always comes as an upgrade to the existing IPv4 infrastructure, IPv6 network design and implementation considerations must include different transition mechanisms between these two protocol suites. The IPv4 to IPv6 transition can be very challenging, and during the transition period it is very likely that both protocols will coexist on the network.

The designers of the IPv6 protocol suite have suggested that IPv4 will not go away anytime soon, and it will strongly coexist with IPv6 in combined addressing schemes. The key to all IPv4 to IPv6 transition mechanisms is dual-stack functionality, which allows a device to operate both in IPv4 mode and in IPv6 mode.

One of the most important IPv4 to IPv6 transition mechanisms involves tunneling between dual-stack devices and this can be implemented in different flavors:

  • Generic Routing Encapsulation (GRE) – default tunnel mode
  • IPv6IP (less overhead, no CLNS transport)
  • 6to4 (embeds IPv4 address into IPv6 prefix to provide automatic tunnel endpoint determination); automatically generates tunnels based on the utilized addressing scheme
  • Intra-Site Automatic Tunnel Addressing Protocol (ISATAP) – automatic host-to-router and host-to-host tunneling


Figure 9 – IPv6 over IPv4 Tunneling

Analyzing Figure 9 above, the IPv4 cloud is bordered by two dual-stack routers that run both the IPv4 and the IPv6 protocol stacks, and each of these routers also connects to an IPv6 island. These two routers support the transition mechanism by tunneling IPv6 inside IPv4: to carry IPv6 traffic between the two edge islands, a tunnel is created between the two routers that encapsulates IPv6 packets inside IPv4 packets. These packets are sent through the IPv4 cloud as regular IPv4 packets and are de-encapsulated when they reach the other end. In this way, an IPv6 packet generated in the left-side network can reach a destination in the right-side network; tunneling IPv6 inside IPv4 is straightforward because of the dual-stack routers at the edge of the IPv4 infrastructure. Static tunneling methods are generally used for point-to-point links, while dynamic tunneling methods work best for point-to-multipoint connections.

Network Address Translation Protocol Translation (NAT-PT) is another technology that can be utilized to carry out the transition to an IPv6 network. NAT-PT is often confused with NAT but it is a completely different technology. Simple NAT can also be used in IPv6 but this is very rare because IPv6 offers a very large address space and private addresses are not necessary. NAT-PT is another translation mechanism that will dynamically convert IPv4 addresses to IPv6 addresses, and vice-versa.

Another static tunneling technology is IPv6IP, which encapsulates IPv6 packets directly into IPv4; this is also called manual tunneling. Another type of static tunnel is a GRE tunnel that encapsulates the IPv6 packets within a GRE packet. GRE tunneling might be necessary when using special applications and services, like the IS-IS routing protocol for IPv6.

The dynamic tunnel types include the 6to4 tunnel, which is appropriate when a group of destinations needs to be connected dynamically utilizing IPv6. ISATAP is a unique type of host-to-router dynamic tunnel, unlike the previously mentioned tunneling techniques, which are router-to-router. ISATAP allows hosts to dynamically get to their IPv6 default gateway.

     ISATAP is a protocol that will soon fade away because almost all modern hosts and routers have native IPv6 support.

IPv6 Compared to IPv4

A network designer should have a very clear picture of the advantages IPv6 has over IPv4. The enhancements of IPv6 can be summarized as follows:

  • IPv6 uses hexadecimal notation instead of dotted-decimal notation (IPv4).
  • IPv6 has an expanded address space, from 32 bits to 128 bits.
  • IPv6 addresses are globally unique due to the extended address space, eliminating the need for NAT.
  • IPv6 has a fixed header length (40 bytes), allowing vendors to improve switching efficiency.
  • IPv6 supports enhanced options (that offer new features) by placing extension headers between the IPv6 header and the Transport Layer header.
  • IPv6 offers Address Auto-Configuration, providing for the dynamic assignment of IP addresses even without a DHCP server.
  • IPv6 offers support for labeling traffic flows.
  • IPv6 has security capabilities built in, including authentication and privacy via IPsec.
  • IPv6 offers Path MTU Discovery before sending packets to a destination, eliminating the need for fragmentation.
  • IPv6 supports site multi-homing.
  • IPv6 uses the ND protocol instead of ARP.
  • IPv6 uses AAAA DNS records instead of A records (IPv4).
  • IPv6 uses site-local addressing instead of RFC 1918 (IPv4).
  • IPv4 and IPv6 use different routing protocols.
  • IPv6 provides for anycast addressing.


Good IP addressing for network design uses summarizable blocks of addresses that enable route summarization and provide a number of benefits:

  • Reduced router workload and routing traffic
  • Increased network stability
  • Significantly simplified troubleshooting

Creating and using summary routes depends on the use of summarizable blocks of addresses. Sequential numbers in an octet may denote a block of IP addresses as summarizable. For sequential numbers to be summarizable, the block must be X numbers in a row, where X is a power of 2, and the first number in the sequence must be a multiple of X. The created sequence will then end one before the next multiple of X in all cases.
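This power-of-two rule can be checked mechanically; the sketch below (assuming Python's ipaddress module, with made-up example blocks) tests a run of sequential /24 networks and shows that a valid block collapses into a single summary:

```python
import ipaddress

def run_is_summarizable(start: int, count: int) -> bool:
    """X numbers in a row are summarizable if X is a power of two
    and the first number is a multiple of X."""
    return count > 0 and (count & (count - 1)) == 0 and start % count == 0

# 172.16.16.0/24 .. 172.16.31.0/24: 16 consecutive /24s starting at a multiple of 16
nets = [ipaddress.ip_network(f"172.16.{i}.0/24") for i in range(16, 32)]
print(run_is_summarizable(16, 16))                 # True
print(list(ipaddress.collapse_addresses(nets)))    # [IPv4Network('172.16.16.0/20')]

# 6 networks starting at 8 do not line up on a power-of-two boundary
print(run_is_summarizable(8, 6))                   # False
```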

Efficiently assigning IP addresses to the network is a critical network design decision, impacting the scalability of the network and the routing protocols that can be used. IPv4 addressing has the following characteristics:

  • IPv4 addresses are 32 bits in length.
  • IPv4 addresses are divided into various classes (e.g., Class A networks accommodate more than 16 million unique IP addresses, Class B networks support more than 65 thousand IP addresses, and Class C networks permit 254 usable IP addresses). Originally, organizations applied for an entire network in one of these classes. Today, however, subnetting allows an ISP to give a customer just a portion of a network’s address space, in an attempt to conserve the depleting pool of IP addresses. Conversely, ISPs can use supernetting (also known as Classless Inter-Domain Routing – CIDR) to aggregate the multiple network address spaces that they have. Aggregating multiple network address spaces into one address reduces the amount of route entries a router must maintain.
  • Devices such as PCs can be assigned a static IP address, by hard coding the IP address in the device’s configuration. Alternatively, devices can dynamically obtain an address from a DHCP server , for example.
  • Because names are easier to remember than IP addresses are, most publicly accessible Web resources are reachable by their name. However, routers must determine the IP address with which the name is associated to route traffic to that destination. Therefore, a DNS server can perform the translation between domain names and their corresponding IP addresses.
  • Some IP addresses are routable through the public Internet, whereas other IP addresses are considered private and are intended for use within an organization. Because these private IP addresses might need to communicate outside the LAN, NAT can translate a private IP address into a public IP address. In fact, multiple private IP addresses can be represented by a single public IP address using NAT. This type of NAT is called Port Address Translation (PAT) because the various communication flows are identified by the port numbers they use to communicate with outside resources.

When beginning to design IP addressing for a network, the following aspects must be determined:

  • The number of network locations that need IP addressing
  • The number of devices requiring an IP address at each location
  • Customer-specific IP addressing requirements (e.g., static IP addressing versus dynamic IP addressing)
  • The number of IP addresses that need to be contained in each subnet (e.g., a 48-port switch in a wiring closet might belong to a subnet that supports 64 IP addresses)

A major challenge with IPv4 is the limited number of available addresses. A newer version of IP, specifically IPv6, addresses this concern. An IPv6 address is 128 bits long, compared to the 32-bit length of an IPv4 address.

To make such a large address more readable, an IPv6 address uses hexadecimal numbers and the 128-bit address is divided into eight fields. Each field is separated by a colon, as opposed to the four fields in an IPv4 address, which are each separated by a period. To further reduce the complexity of the IPv6 address, leading 0s in a field are optional and if one or more consecutive fields contain all 0s, those fields can be represented by a double colon (::). A double colon can be used only once in an address; otherwise, it would be impossible to know how many 0s are present between each pair of colons.

Consider some of the benefits offered by IPv6:

  • IPv6 dramatically increases the number of available addresses.
  • Hosts can have multiple IPv6 addresses, allowing those hosts to multi-home to multiple ISPs.
  • Other benefits include enhancements relating to QoS, security, mobility, and multicast technologies.

Unlike IPv4, IPv6 does not use broadcasts. Instead, IPv6 uses the following methods for sending traffic from a source to one or more destinations:

  • Unicast (one-to-one): Unicast support in IPv6 allows a single source to send traffic to a single destination, just as unicast functions in IPv4.
  • Anycast (one-to-nearest): A group of interfaces belonging to nodes with similar characteristics (e.g., interfaces in replicated FTP servers) can be assigned an anycast address. When a host wants to reach one of those nodes, the host can send traffic to the anycast address and the node belonging to the anycast group that is closest to the sender will respond.
  • Multicast (one-to-many): Like IPv4, IPv6 supports multicast addressing, where multiple nodes can join a multicast group. The sender sends traffic to the multicast IP address and all members of the multicast group receive the traffic.

The migration of an IPv4 network to an IPv6 network can take years because of the expenditures of upgrading equipment. Therefore, during the transition, IPv4-speaking devices and IPv6-speaking devices need to coexist on the same network. Consider the following solutions for maintaining both IPv4 and IPv6 devices in the network:

  • Dual stack: Some systems (including Cisco routers) can simultaneously run both IPv4 and IPv6, allowing communication to both IPv4 and IPv6 devices.
  • Tunneling: To send an IPv6 packet across a network that uses only IPv4, the IPv6 packet can be encapsulated and tunneled through the IPv4 network.
  • Translation: A device, such as a Cisco router, could sit between an IPv4 network and an IPv6 network and translate between the two addressing formats.

IPv6 allows the use of static routing and supports dynamic routing protocols that are variations of the IPv4 routing protocols, modified or redesigned to support IPv6: RIPng, OSPFv3, EIGRP for IPv6, Integrated IS-IS for IPv6, and BGP4 with multiprotocol extensions.



Best practice for assigning private IP ranges?

Is it common practice to use certain private IP address ranges for certain purposes?

I'm starting to look into setting up virtualization systems and storage servers. Each system has two NICs, one for public network access, and one for internal management and storage access.

Is it common for businesses to use certain ranges for certain purposes? If so, what are these ranges and purposes? Or does everyone do it differently?

I just don't want to do it completely differently from what is standard practice in order to simplify things for new hires, etc.



4 Answers

Most systems I've seen attempt to map the IP ranges to a hierarchy of geography and/or system components.

One employer tended to use:

10.building.floor.device (with non-user resource VLANs using 10.x.100.x to 10.x.120.x)

10.major_system.tier_or_subsystem.component


  • I would modify that because there's more likely to be more than 254 devices on a floor than 254 floors in a building (see google.com/…). So, use the first 200 addresses of the 3rd octet for floor, then use the last 54 addresses and the remaining octet for devices. That gives 254 * 54 devices possible: printers, workstations, laptops, Internet of Things (IoT) devices, toasters, light switches, lighting controllers, coffee pots. –  Dennis

One thing I would suggest is to use randomly selected private ranges from the 10.0.0.0/8 block for all of your private addresses. This avoids lots of problems, particularly when setting up VPNs between home/partner networks and your corporate network. Most home routers (and many corporate setups) use 192.168.0.0/24 or 10.0.0.0/24, so you'll spend hours sorting out various connectivity issues when you try to establish connectivity between two private networks.

If, however, you chose a random range like 10.145.0.0/16, and then subnet from there, it is far less likely that you will "collide" with a business partner or home network's private IP range.


  • For site addressing you could subnet 10.0.0.0/24 and encode the longitude and latitude in the spare octets. ;-) –  The Unix Janitor
  • Unless your sites are less than one degree apart. We had two offices a few city blocks apart at one point, which are less than 0.02 degrees apart in terms of lat/lon ;-) –  rmalayter
  • If this is a concern (and it's a reasonable one), then use the third private IP range: 172.(16-31).0.0/16. Most people don't even know it's there. I've only seen it in use in one place ever. –  Dan Pritts
  • @DanPritts Rackspace uses 172.16.0.0/12 extensively for hosting and cloud servers. Probably because of its "obscurity". As with 10.0.0.0/8, though, it's better to choose random bits of that space where possible to avoid potential collisions. –  rmalayter
  • @RyanTM wow, I never read the RFC that closely. Glad to see that my own conclusion based on painful experience actually fits with the standard. –  rmalayter

RFC1918 details the 3 IP blocks that are reserved for private address space. The 2 common ones are:

  • 10.0.0.0 - 10.255.255.255 (10/8 prefix)
  • 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

Less common is:

  • 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)

If you're setting up a separate network for storage, it would probably make sense to choose an IP range similar but slightly different to what you are using for regular networking. Consistency is good, but using different IP ranges allows you to be connected to both networks simultaneously, for example if you need to look something up while doing management from your laptop.


  • So my laptop gets an IP number in the 192.168.0.x range from DHCP. I'm thinking that my storage network should be in the 10.x.x.x range to keep them really separate. Is this common practice, or do many places use something like 192.168.1.x for their storage? –  Tauren
  • 172.16-31/16 also =) Not much used though. –  Antoine Benkemoun
  • @Tauren: 192.168.1.x/24 is as equally separate from 192.168.0.x/24 as 10.0.0.x/24 is. It can't be "more" or "less" separate. They are on different subnets, full stop... :) –  rytis
  • That's true for computers, but not for the people that work on them. Keeping staff members non-confused is a good thing, and naming standards go a long way towards that. –  pboin
  • @pulegium: yes, I understand they are actually separate, but I meant in the "human sense", like @pboin mentions. –  Tauren

There is about as much consensus on IP addressing as on server names (see this site ad nauseam); it just comes down to personal preference - typically that of the first person who set it all up!

No, there is no single proper way of doing it - simply pick one of the 3 RFC 1918 ranges (cheers @Nic Waller), split it into subnets (traditionally /24s, but /23s are becoming more popular), assign one of the subnets for public access and one for private - job done. Really, the hard part is setting up the VLANs and ACLs.

Personally I prefer using the 10.x.x.x range as I can type it quicker than the other two, but really it makes no difference unless you need the larger size (192.168.x.x gives you 256 subnets of 254 IP addresses whereas 10.x.x.x gives you 65,536).

I would not suggest mixing the ranges for instance having 192.168.x.x for private and 10.x.x.x for public, technically it shouldn't matter but it would be very confusing.


  • How can the 10.x.x.x range be public? If they are assigned to public devices accessing the net through a VPN? (Seems the only way) –  Dennis



What is the best practice for assigning static IP addresses?

In my scenario, I would be assigning IP addresses to security cameras.

I am wondering if I should assign a static IP address from the device itself, or have a static entry on my DHCP server with their MAC addresses. I know both would work, but I was wondering if it is best practice to assign the IP from the devices, DHCP server, or both.


  • This comes down to whichever option works best for you to configure and maintain: whether you prefer setting static MAC reservations in the DHCP configuration or configuring each IP camera. So it's a matter of preference, administration, and maintenance. –  Vomit IT - Chunky Mess Style
  • It's a matter of preference. The biggest advantage of using a DHCP reservation is that you ensure that there are no IP conflicts. But if you have a system, and everyone else that maintains the network uses the same system, then static IP addresses will also work. One benefit of static IP addresses is that if the DHCP server goes wonky, it may stop access to the cameras, and troubleshooting that it's the DHCP server can be tricky: your camera stops working, and the DHCP server is not what you usually think of at first glance. –  LPChip

2 Answers

If your Router can handle DHCP Reservation, I find this to be the best way to handle small device Static IP setup. If the device changes you need only change the MAC address in the router and you are good to go. No setup in the device required.

This method also helps to ensure that other individuals will not set up conflicting IP addresses. IP setup is under the control of the person setting up the MAC address table.

Now not all Routers can do DHCP Reservation: My older Cisco RV325 CAN; my newer Cisco RV345 could not initially but now with later firmware updates CAN.

So you have to take your router into account, but my experience (Servers and Routers) is that DHCP reservation is better than setting up a device with Static IP.

While there is an element of personal preference of course, DHCP Reservation is a fairly normal business practice (used on all my Customer Servers) and works really well.

Another way to look at it can be configuration preferences concerning central versus non-central:

If I want to have a central administration interface I usually go for DHCP setup with IP reservations.

If central administration doesn't matter I don't bother about reserving IPs on the DHCP server. But you need to make sure that static IPs and dynamic IPs have their separate ranges to avoid assigning the same IP to two devices.

Doing both is possible, but it's twice the work, and I don't see any benefit for your particular use case.



Static IP vs. dynamic IP addresses: What's the difference?

Static IP addresses are typically used for servers, routers and switches. Dynamic IP addresses, however, are commonly used for workstations, phones and tablets.

Damon Garn, Cogspinner Coaction

It's imperative for sys admins to manage IP addressing properly, even in simple networks. Routers, firewalls and monitoring tools all use IP addresses to uniquely identify and organize network devices.

Network nodes usually have the following three identities:

  • Hostname.
  • Internet Protocol (IP) address.
  • Media access control (MAC) address.

Hostnames are assigned by administrators and are descriptive names helpful to human users, such as webserver3.mydomain.internal. MAC addresses are hardcoded on the network interface card (NIC) and are unique to it. IP addresses are logical addresses managed by administrators.

Each network node needs an IP address. These addresses are assigned and configured in two primary ways: static assignment and dynamic assignment.

This article discusses both static and dynamic addressing, what these concepts mean and when to use each one.

Rules to keep in mind

At a minimum, IP address settings consist of the actual IP address and a subnet mask. It is likely, however, that sys admins will also configure the IP address of a default gateway (router) and name resolution servers. These configurations can't contain mistakes or typos, and no duplicate IP addresses on the network are permitted. These are critical factors to keep in mind.
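As a rough illustration of those rules, here is a minimal Python sketch (standard library only) that checks a proposed static configuration: the address and default gateway must sit in the same subnet, the address must not be the network or broadcast address, and it must not duplicate an address already in use. The function name and the sample values are ours, chosen only for illustration.

    # Minimal sketch of the sanity checks described above (Python 3, standard library).
    import ipaddress

    def validate_static_config(address, prefix_len, gateway, in_use):
        """Return a list of problems with a proposed static IP configuration."""
        problems = []
        iface = ipaddress.ip_interface(f"{address}/{prefix_len}")
        net = iface.network
        if iface.ip in (net.network_address, net.broadcast_address):
            problems.append("address is the network or broadcast address")
        if ipaddress.ip_address(gateway) not in net:
            problems.append("default gateway is not inside the subnet")
        if iface.ip in in_use:
            problems.append("duplicate: address is already assigned")
        return problems

    already_assigned = {ipaddress.ip_address("192.168.2.10")}
    print(validate_static_config("192.168.2.42", 24, "192.168.2.1", already_assigned))  # prints []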

Static IP address assignment

An administrator manually configures static IP addresses on a node. The admin sets the desired IP address, subnet mask, default gateway, name server and other values. While the process is usually simple, admins should keep the following points in mind:

  • The sys admin can make no mistakes or typographical errors and must avoid any duplicate IP address assignments for either static or dynamic addressing.
  • The process is easy but time-consuming when calculated against every device on the network.
  • Any updates or modifications to the IP settings also must be configured manually.

In practice, static IP address assignments are usually only made to a specific and relatively small part of the network, such as the following:

  • Servers.
  • Routers, switches and other infrastructure devices.
  • Network print devices (though not all admins set static IP addresses on printers).

If these devices are the only ones that are manually configured, how do other devices -- such as workstations, phones or tablets -- get their IP address settings?

The answer: dynamic IP address assignment.

Dynamic IP address assignment

Most network devices temporarily lease an IP address configuration from a central server called a Dynamic Host Configuration Protocol ( DHCP ) server. Administrators configure the DHCP server with a pool of available IP addresses and any additional options. Client machines then connect to the DHCP server to lease a configuration.

As with static IP address assignment, dynamic configurations consist of several related values, including the following:

  • IP address and subnet mask.
  • Default gateway.
  • Name servers.

Network nodes require unique IP addresses, and these addresses can be manually assigned by administrators or dynamically assigned by a DHCP server.

Dynamic assignment is appropriate for client machines that don't need a consistent, unchanging identity on the network. For example, 50 workstations might share and connect to a network print device located at 192.168.2.42. The workstations always expect to find that printer at that address, so that printer needs an unchanging identity. Typically, however, client devices don't host services or resources that must be consistently found at the same address.

Further, client devices tend to be much more temporary than servers, routers and printers. Laptops, tablets and phones come and go on the network daily or even hourly, especially in environments such as coffee shops or libraries.

Static addressing pros and cons

Static IP address configurations are usually for unchanging network devices.

Advantages of static IP addresses include the following:

  • The network identity does not change.
  • The node can be reached even when name resolution fails.
  • Administrators retain tight control over identities.
  • Network resources can be mapped to unchanging IP addresses.

Static IP addresses have their disadvantages as well:

  • Mistakes cannot be made during static assignment.
  • Administrators must not accidentally assign duplicate addresses.
  • Setting and changing the IP address configuration is manual and time-consuming.

Dynamic addressing pros and cons

Dynamic IP assignments are best for nonpermanent devices and those that don't often need to be found by other network nodes.

Dynamic IP addresses offer the following advantages:

  • The server does not make typographical errors.
  • Duplicate IP address assignments are reduced.
  • Changing the IP address configuration is quick and efficient.
  • Network nodes are easy to identify.

Disadvantages of dynamic IP addresses include the following:

  • Nodes will have different identities over time.
  • It is more difficult to identify specific nodes on the network.

Tracking IP address configurations

Administrators must track IP address configurations. Tracking doesn't have to be complex, and network services can help.

At its most basic, tracking may consist of a simple spreadsheet that clearly notes the statically assigned IP addresses and the nodes on which they are configured. The spreadsheet should also list the range of addresses included in the DHCP scope that will be dynamically assigned.

Spreadsheet example for tracking static IP address configurations

Various network services also help administrators track IP address settings. For example, IP address management can track which nodes have which IP addresses. Regardless of which method sys admins use, it's essential for them to document the IP address configuration of their network.
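As a starting point, the spreadsheet described above can even be generated from a short script. The sketch below (Python, standard csv module) writes a small IP plan to a CSV file; the device names, addresses and DHCP scope are invented examples, not recommendations.

    # Write a simple IP-plan spreadsheet as CSV. All names and ranges are examples.
    import csv

    static_assignments = [
        ("192.168.2.1",  "router1",    "default gateway"),
        ("192.168.2.5",  "fileserver", "static"),
        ("192.168.2.42", "printer1",   "static"),
    ]
    dhcp_scope = ("192.168.2.100", "192.168.2.199")

    with open("ip-plan.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["IP address", "Node", "Notes"])
        writer.writerows(static_assignments)
        writer.writerow([f"{dhcp_scope[0]} - {dhcp_scope[1]}", "DHCP scope", "dynamic pool"])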

Lease generation and renewal

DHCP clients go through a four-step process to lease an IP address configuration: discover, offer, request and acknowledge -- or DORA.

Because the client devices don't yet have a valid IP address, the entire process takes place via broadcasts. Below is a breakdown of the lease process:

  • The client broadcasts a discover message asking for DHCP servers to provide an IP address.
  • The DHCP server offers an unassigned IP address from its scope.
  • The client formally requests the use of the IP address from the first DHCP server to respond.
  • The server acknowledges the request and logs the IP address leased to that network device.

DHCP handshake process

Note that the clients initiate the process, not the server. DHCP servers are passive, awaiting lease requests from clients.

One parameter set by a DHCP server is the lease duration. The leased IP address is not permanent, meaning the client must periodically attempt to renew the address. This lets administrators update the DHCP configuration and have those updates eventually reach the client devices.

Windows DHCP servers use an eight-day lease by default. This means clients that lease an IP address from the server have a valid configuration for eight days. At the halfway point in the lease -- in this case, four days -- the client attempts to renew its configuration. The renewal is steps three and four of the DORA process: request and acknowledge. The renewal will likely be successful, and the lease duration will reset.

So, why wouldn't a renewal attempt be successful? The DHCP server may have an updated configuration, meaning the client is attempting to renew outdated settings. In this case, the DHCP server fails the renewal attempt, which causes the client device to initiate an entirely new lease generation attempt. Such an attempt provides it with the updated settings.
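To make the scope, lease and renewal bookkeeping concrete, here is a toy Python sketch. It is not a DHCP implementation (no packets, no broadcasts, no DORA exchange on the wire); it only models a pool of addresses, a lease timer and a renewal at the halfway point. The scope, lease length and MAC address are examples.

    # Toy model of a DHCP scope and lease table -- bookkeeping only, not a real server.
    import ipaddress, time

    LEASE_SECONDS = 8 * 24 * 3600                                         # eight-day lease
    scope = list(ipaddress.ip_network("192.168.2.0/24").hosts())[99:199]  # .100 - .199
    leases = {}                                                           # MAC -> (IP, expiry)

    def lease(mac, now):
        """Hand out a free address from the scope and record when the lease expires."""
        in_use = {ip for ip, _ in leases.values()}
        ip = next(a for a in scope if a not in in_use)
        leases[mac] = (ip, now + LEASE_SECONDS)
        return ip

    def renew(mac, now):
        """Renewal keeps the same address and simply resets the expiry timer."""
        ip, _ = leases[mac]
        leases[mac] = (ip, now + LEASE_SECONDS)
        return ip

    now = time.time()
    ip = lease("aa:bb:cc:dd:ee:01", now)
    renew("aa:bb:cc:dd:ee:01", now + LEASE_SECONDS / 2)   # client renews at the halfway point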

Automatic private IP addressing

If a client computer cannot lease an IP address configuration from a DHCP server, it uses Automatic Private IP Addressing (APIPA) to create a self-assigned address.

APIPA addresses come from the reserved link-local range 169.254.0.0/16. The client generates random values between 1 and 254 for the last two octets. While these addresses may enable a little local connectivity, they function more like error messages: if a client has an APIPA address, sys admins know the lease generation process failed and can begin troubleshooting from there.

ipconfig command results with APIPA address
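Python's ipaddress module already knows the 169.254.0.0/16 link-local range, so a few lines can flag self-assigned addresses during troubleshooting. This is only a convenience sketch; the sample addresses are made up.

    # Flag APIPA (self-assigned) addresses so a failed DHCP lease stands out.
    import ipaddress

    def is_apipa(addr):
        ip = ipaddress.ip_address(addr)
        return ip.version == 4 and ip.is_link_local   # True for 169.254.0.0/16

    for addr in ("192.168.2.42", "169.254.17.203"):
        if is_apipa(addr):
            print(f"{addr}: APIPA address -- the DHCP lease probably failed")
        else:
            print(f"{addr}: looks like a normal assignment")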

Sys admins can use tools such as Nmap to identify nodes on the network. These nodes will be displayed by their IP addresses, and admins can use that information for tracking and documenting IP address configurations.
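One way to fold such a scan into documentation is to script it. The rough sketch below assumes Nmap is installed and shells out to its ping-scan mode (-sn), then pulls the addresses from the report lines; heed the warning about authorization a couple of paragraphs below before running anything like it on a network you do not own.

    # Rough sketch: run an Nmap ping scan and collect the reported addresses.
    import subprocess

    result = subprocess.run(
        ["nmap", "-sn", "192.168.2.0/24"],        # -sn: host discovery only, no port scan
        capture_output=True, text=True, check=True,
    )
    hosts = [line.split()[-1].strip("()")          # last token is the address
             for line in result.stdout.splitlines()
             if line.startswith("Nmap scan report for")]
    print(hosts)   # feed this into the IP documentation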

Another useful exercise is to capture the DORA process as it happens by using Wireshark. This is a great way to learn and visualize the lease generation process.

Intrusion detection systems often identify utilities such as Wireshark and Nmap as hacker tools. Such systems may send a warning to the organization's security administrators. Do not run these tools on a production network without express authorization.

We'll dive deeper into troubleshooting in another article. But sys admins can use ipconfig /release and ipconfig /renew on Windows to force the lease generation process. The ipconfig command and its related switches can be helpful for troubleshooting. Use dhclient -r and dhclient on Linux systems to accomplish the same result.
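For scripted troubleshooting, those commands can be wrapped in a small cross-platform helper. This is just a sketch around the commands mentioned above; it assumes a Linux host that uses dhclient, and it needs the same privileges the commands normally require.

    # Force a new DHCP lease using the platform's own tools (sketch only).
    import platform, subprocess

    def force_new_lease():
        if platform.system() == "Windows":
            subprocess.run(["ipconfig", "/release"], check=True)
            subprocess.run(["ipconfig", "/renew"], check=True)
        else:                                                    # assumes Linux with dhclient
            subprocess.run(["dhclient", "-r"], check=True)       # release the current lease
            subprocess.run(["dhclient"], check=True)             # request a new lease

    force_new_lease()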

Most network environments rely on a combination of the two approaches: Admins directly configure devices such as servers and routers, while DHCP configures client devices. Each method has its advantages and disadvantages, with benefits centered around convenience and consistency.

The dynamic method uses the four-step DORA process in which a client leases a configuration from a DHCP server and must periodically renew that address. If this process fails, the client assigns itself an address from the reserved Class B range, 169.254.0.0 -- the APIPA range.

Setting up a Windows DHCP server is relatively straightforward, as is managing DHCP client configurations. We'll provide details on both those topics in future articles.


When to Use a Static IP Address

Are static IP addresses better than dynamic addresses?


A static IP address, or fixed IP address, is an IP address that never changes. Not everyone needs a static IP address, but knowing how they differ from dynamic IP addresses can help you understand whether you should use a static IP address.

Here are some example situations for when you might need a static IP address:

  • Setting up a home file server .
  • Adding a second router to a network.
  • Enabling access to a computer when away from home or work.
  • Forwarding ports to certain devices.
  • Sharing a printer over a network.
  • Connecting to an IP camera when away from home.

Static & Dynamic: What They Mean

The terms static and dynamic are simple to understand. At the core, the only real difference between static and dynamic IP addresses is that the former never changes, while the latter does.

Most people don't care if their IP address changes. If you never know what your IP address is and never have a reason to keep it the same, then dynamic addresses are fine for you.

However, if some devices on your network would work better, or administration would be smoother for you, if an IP address always stayed the same, then static addressing is what you want.

Static IP addresses are assigned manually by an administrator. In other words, the device receiving the static IP is given a specific address (such as 192.168.1.2), and from then on, the address never changes.

Dynamic IP addresses are not assigned manually. They are assigned automatically by DHCP (Dynamic Host Configuration Protocol).

When Static IP Addresses Are Used

Static IP addresses are necessary for devices that need constant access.

For example, a static IP address is necessary if your computer is configured as a server, such as an FTP server or web server. If you want to ensure that people can always access your computer to download files, force the computer to use a static, never-changing IP address.

If the server was assigned a dynamic IP address, it would change occasionally, preventing your router from knowing which computer on the network is the server.

If you want to access your home computer while you're on a trip or your work computer when you're at home, setting up the computer to use a static IP address lets you reach that computer at any time without fearing that the address will change and block your access to it.

A shared printer is another example of when to use a static IP address. If you have a printer that everyone in your house or office needs to share, give it an IP address that won't change no matter what. That way, when every computer is set up to connect to that printer, those connections remain indefinitely because the address never changes.

Here are some other reasons to use static IPs:

  • They provide slightly better protection against network security problems than DHCP address assignment provides.
  • Some network devices don't support DHCP.
  • They help avoid potential  IP address conflicts where DHCP might supply an address already assigned elsewhere.
  • They provide geolocation that's more accurate than a dynamic IP address.

When Not to Use a Static IP Address

Because a static IP address is assigned manually, it's less efficient for a network admin to give it out, especially in mobile situations. Someone must visit the device in person to give it an IP address instead of letting DHCP assign the address automatically.

For example, you wouldn't set a static IP address on a smartphone because the moment it reaches another Wi-Fi network, the address might not be supported on that network, meaning that it won't be able to access the internet.

Dynamic addressing is more convenient in this situation because it's easy for administrators to set up. DHCP works automatically with minimal intervention needed, allowing mobile devices to move between different networks seamlessly.

Businesses are more likely to use static IP addresses than home networks. Implementing static IP addresses isn't easy and frequently requires a knowledgeable technician.

However, you can have a static IP address for your home network. When making static IP assignments for local devices on home and other private networks, the address numbers should be chosen from the private IP address ranges defined by the Internet Protocol standard:

  • 10.0.0.0–10.255.255.255
  • 172.16.0.0–172.31.255.255
  • 192.168.0.0–192.168.255.255

These ranges contain millions of IP addresses. It's common for people to assume they can choose any number in the range and that the specific choice doesn't matter much. This is untrue.

To choose and set specific static IP addresses suitable for your network, follow these guidelines (a small sketch that applies them follows the list):

  • Do not choose any addresses that end with .0 or .255. These addresses are usually reserved for use by network protocols.
  • Do not choose the addresses at the beginning of a private range. Addresses like 10.0.0.1, 192.168.0.1, and 192.168.0.100 are commonly used by network routers and other consumer devices. These are the first addresses hackers attack when trying to break into a private computer network.
  • Don't choose an IP address that falls outside the range of your local network. For example, to support all addresses in the 10.x.x.x private range, the subnet mask on all devices must be set to 255.0.0.0. If it isn't, some static IP addresses in this range won't work.
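Here is the small sketch mentioned above. It applies those guidelines with Python's ipaddress module: it stays inside the chosen subnet, skips the network and broadcast addresses, and skips a configurable number of low addresses that routers and other consumer devices usually take. The subnet, the cut-off of 20 and the example exclusions are arbitrary illustrations.

    # Suggest static IP candidates that follow the guidelines above (illustrative only).
    import ipaddress

    def candidate_static_ips(subnet, skip_first=20, already_used=()):
        net = ipaddress.ip_network(subnet)
        used = {ipaddress.ip_address(a) for a in already_used}
        # .hosts() already excludes the network (.0) and broadcast (.255) addresses
        return [ip for ip in list(net.hosts())[skip_first:] if ip not in used]

    print(candidate_static_ips("192.168.1.0/24", already_used=["192.168.1.100"])[:5])
    # e.g. IPv4Address('192.168.1.21') through IPv4Address('192.168.1.25')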

How to Get a Static Public IP Address

Internet service providers (ISPs) traditionally assign all their IP addresses to customers dynamically, due to historical shortages of available IP numbers.

Contact your service provider if you prefer a static IP address. You can't get a static public IP address without requesting it from your ISP. Customers can sometimes obtain a static IP by subscribing to a special service plan and paying extra fees.


Number Resources

We are responsible for global coordination of the Internet Protocol addressing systems, as well as the Autonomous System Numbers used for routing Internet traffic.

Currently there are two types of Internet Protocol (IP) addresses in active use: IP version 4 (IPv4) and IP version 6 (IPv6). IPv4 was initially deployed on 1 January 1983 and is still the most commonly used version. IPv4 addresses are 32-bit numbers often expressed as 4 octets in “dotted decimal” notation (for example, 192.0.2.53 ). Deployment of the IPv6 protocol began in 1999. IPv6 addresses are 128-bit numbers and are conventionally expressed using hexadecimal strings (for example, 2001:0db8:582:ae33::29 ).

Both IPv4 and IPv6 addresses are generally assigned in a hierarchical manner. Users are assigned IP addresses by Internet service providers (ISPs). ISPs obtain allocations of IP addresses from a local Internet registry (LIR) or National Internet Registry (NIR), or from their appropriate Regional Internet Registry (RIR):

  • AFRINIC – Africa Region
  • APNIC – Asia/Pacific Region
  • ARIN – Canada, USA, and some Caribbean Islands
  • LACNIC – Latin America and some Caribbean Islands
  • RIPE NCC – Europe, the Middle East, and Central Asia

Our primary role for IP addresses is to allocate pools of unallocated addresses to the RIRs according to their needs as described by global policy and to document protocol assignments made by the IETF . When an RIR requires more IP addresses for allocation or assignment within its region, we make an additional allocation to the RIR. We do not make allocations directly to ISPs or end users except in specific circumstances, such as allocations of multicast addresses or other protocol specific needs.

IP Address Allocations

Internet Protocol Version 4 (IPv4)

  • IPv4 Address Space
  • IPv4 Multicast Address Assignments
  • IPv4 Special Purpose Address Registry
  • IPv4 Recovered Address Space Registry
  • Bootstrap Service Registry for IPv4 Address Space

Internet Protocol Version 6 (IPv6)

  • IPv6 Address Space
  • IPv6 Global Unicast Allocations
  • IPv6 Parameters (Parameters described for IPv6, including header types, action codes, etc.)
  • IPv6 Anycast Address Allocations
  • IPv6 Multicast Address Allocations
  • IPv6 Sub-TLA Assignments (DEPRECATED)
  • IANA IPv6 Special Registry
  • Bootstrap Service Registry for IPv6 Address Space
  • Announcement of Worldwide Deployment of IPv6 (14 July 1999)
  • RIR Comparative Policy Overview

Autonomous System Number Allocations

  • Autonomous System Numbers
  • Special-Purpose AS Number Assignments
  • Bootstrap Service Registry for AS Number Space
  • Internet Number Resource Request Procedure

Regional Internet Registry Creation

  • Criteria for Establishment of New Regional Internet Registries (ICP-2) (4 June 2001)
  • IANA Report on Recognition of LACNIC as a Regional Internet Registry (7 November 2002)
  • IANA Report on Recognition of AfriNIC as a Regional Internet Registry (8 April 2005)

Technical Documentation

  • RFC 4632 — Classless Inter-domain Routing (CIDR): The Internet Address Assignment and Aggregation Plan
  • RFC 1918 — Address Allocation for Private Internets
  • RFC 5737 — IPv4 Address Blocks Reserved for Documentation
  • RFC 4291 — Internet Protocol Version 6 (IPv6) Addressing Architecture
  • RFC 3587 — IPv6 Global Unicast Address Format
  • RFC 6177 — IPv6 Address Assignment to End Sites
  • RFC 6890 — Special-Purpose IP Address Registries
  • RFC 7020 — The Internet Numbers Registry System
  • RFC 7249 — Internet Numbers Registries
  • Locally Served DNS Zones

Allocating IP Addresses: 7 Best Practices for 2024


Key Takeaways

  • An IP address is a unique set of characters identifying every device that uses the Internet to communicate over a network.
  • IP addresses are important for enabling communication over the Internet, security, location tracking, etc.
  • IPv4 is the original Internet protocol, in use since the 1980s.
  • IPv6 is a 128-bit address system developed to address IPv4 exhaustion.
  • IP addresses are assigned either dynamically by a DHCP server, which allocates them as devices connect to a network, or statically, where a network administrator manually sets them.
  • Best practices for IP address allocation include planning an IP address scheme, using subnetting, implementing Dynamic Host Configuration Protocol, keeping a record, etc.
  • Types of IP addresses include consumer, private, public, dynamic, and static addresses.
  • The two types of website IP addresses are shared IP addresses and dedicated IP addresses.
  • When deciding between a shared and a dedicated IP address, factors include SEO, security, budget, and site traffic.
  • Strategies to protect your IP address include using a VPN, enabling private browsing, and using public Wi-Fi with caution.

With abundant new devices connecting to the internet daily, efficient IP address allocation has become critical today. According to a report, there are nearly 17.08 billion connected IoT devices in 2023. This noteworthy number shows the need for effective IP address management strategies.

Proper allocation of IP addresses is more than just a matter of operational convenience. It is critical for ensuring smooth connectivity, security, and scalability in any network infrastructure. This blog takes readers through the best practices for allocating IP addresses.


What is an IP Address?


An IP address (Internet Protocol address) is like a home address but for your computer or device on the Internet. It’s a unique set of numbers separated by periods, like 192.168.1.1. This helps identify each device connected to the Internet.

Just as your home address lets people send you mail, an IP address lets computers send and receive information from other computers. It’s essential because it ensures that when you go online and ask to visit a website or send an email, the Internet knows where to send the website’s data or where the email needs to go.

There are two types of IP addresses. The first is IPv4, the traditional format with around four billion addresses. The second type is IPv6, a newer format created to provide many more addresses as the Internet grew and more devices needed unique addresses. An IP address is an important aspect of how the Internet works, acting to identify and locate billions of devices on the vast Internet network.

Are you struggling to find your IP address in Linux? Read our informative piece, ‘ How To Find Your IP Address In Linux | 4 Easy Ways ,’ for answers.

What is the Importance of an IP Address?


The importance of an IP address in the digital world is multi-faceted and extends beyond the basic functionality of enabling internet connectivity. Below are factors to help you understand its significance:

An IP address acts like a digital postman. Imagine you’re sending a letter. You need the correct address to reach the right person. Similarly, when you’re online and want to send an email or visit a website, your device requires the correct IP address of the recipient or website.

This address ensures your email reaches the right inbox or lands on the correct website. The internet could not function without IP addresses, as there would be no way to direct information to its proper destination.

The IP address is your device’s identity on the internet. Similar to how your fingerprint identifies you, your device’s IP address identifies it on the internet. This unique identification is critical, especially today, where countless devices are connected to the internet. It ensures that when you request information or a service online, your device receives it, not someone else’s.

Have you ever wondered how websites know what language to display or why some online services are unavailable in your region? This is where IP addresses come in. They provide an approximate location of where your device is accessing the internet.

This location-specific data is used by websites to tailor their content, like showing you news relevant to your area or items that can be shipped to your location. This customization improves your online experience by making it more relevant.

IP addresses are critical for online security. They can be used to track down devices involved in malicious activities. For instance, if a specific IP address is consistently involved in cyber attacks or spamming, it can be blocked by websites or internet service providers, acting as a defense against cyber threats. Monitoring IP addresses also helps pinpoint unusual patterns that could signal security breaches.

IP addresses are crucial for network organization in multiple-device environments. Some examples include corporate offices or university campuses. They enable network administrators to find and manage devices on the network easily.

If a specific computer is causing network issues, the administrator can use its IP address to locate and address the problem. In setting up networks, assigning IP addresses to each device helps organize the network systematically, ensuring smooth operation and easy troubleshooting.

Also Read: Learn About IP Addressing Schemes And Subnet Masks .

What is IPv4?


IPv4 stands for Internet Protocol version 4. It's comparable to the postal service of the Internet: a set of rules that helps direct data to the right place. An IPv4 address is made up of four numbers separated by dots.

For instance, an IPv4 address might look like this: 189.168.1.1. Each number can range from 0 to 255. This format offers many possible combinations, but it still has a limit on how many addresses it can form.

IPv4 is a specific version of this Internet Protocol. It’s responsible for identifying devices on a network and routing data between them. Every device linked to the Internet requires its own IP address, a series of numbers that identifies that specific device. IPv4 creates these addresses.

IPv4 is important because it’s one of the main protocols for the Internet. Sending and receiving data over the Internet would be chaotic and disorganized without IPv4. It can be compared to having a mail system without addresses or postal codes.

One issue with IPv4 is that it can only create about 4 billion unique IP addresses. This may sound like a lot, but we are running out of these addresses due to the high number of devices linking to the Internet. This limitation has led to the development of a new version called IPv6. This version can create a much larger number of addresses.
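The four-billion ceiling follows directly from the format: a dotted-decimal IPv4 address is just a 32-bit number written as four octets. A quick Python sketch, using the example address from above, makes that visible.

    # Dotted-decimal IPv4 is a 32-bit integer in disguise.
    import ipaddress

    addr = ipaddress.IPv4Address("189.168.1.1")   # the example address from above
    print(int(addr))                              # the same address as a single 32-bit integer
    print(ipaddress.IPv4Address(int(addr)))       # and converted back to dotted decimal
    print(2 ** 32)                                # 4,294,967,296 possible IPv4 addresses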

IPv4 is operational in the background whenever you visit a site, play a video, or send an email. It directs the data you send and receive to the right devices. Whether you use Wi-Fi at home or mobile data on your phone, IPv4 keeps you connected.

Also Read: Add Additional IPv4 Addresses on Windows Server 2019 in 4 Simple Steps

What is IPv6?


IPv6 stands for Internet Protocol version 6. It’s the newest Internet Protocol version. IPv6 was developed to replace IPv4, the previous version, which has existed since the Internet’s early days.

The main reason for IPv6 is the need for more IP addresses. Think of it like phone numbers. We need more unique phone numbers as more people get phones. Similarly, we need more unique IP addresses as more and more devices link to the Internet.

IPv4, a 32-bit addressing scheme, can support about 4.3 billion addresses. That sounds like a lot, but we’ve already run out of them. IPv6, with its 128-bit addressing, can support a greater number – 340 undecillion addresses.

IPv6 is important because of the enormous number of IP addresses it can provide, which means we're unlikely to run out of addresses anytime soon. IPv6 also allows internet data to be routed more efficiently, which can lead to quicker and more reliable internet connections.
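A short Python sketch puts those numbers and the hexadecimal notation side by side; the sample address uses the 2001:db8 documentation prefix rather than a real allocation.

    # The size of the IPv6 space, plus the compressed and fully written-out notations.
    import ipaddress

    print(2 ** 128)   # total IPv6 addresses: roughly 3.4 x 10**38 ("340 undecillion")

    addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0029")
    print(addr.compressed)   # 2001:db8::29 -- the usual shortened form
    print(addr.exploded)     # the full 128-bit address written out in hexadecimal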

IPv6 was also designed with a focus on security. It has built-in features for encrypting data and ensuring data packets are authentic, which is not standard in IPv4. With IPv6, devices can automatically configure themselves when connected to an IPv6 network, thanks to a process called address auto-configuration.

The adoption of IPv6 has been slow despite its many benefits. One reason is the sheer scale of replacing or upgrading countless internet-connected devices and systems to be IPv6 compatible. Since IPv4 and IPv6 aren’t directly compatible, running them side-by-side can be complex.

The transition to IPv6 won’t be noticeable for most Internet users. It’s more of a behind-the-scenes change. It’s a necessary evolution to ensure that the Internet can keep growing and adding new devices. Over time, more websites and online services will transition to IPv6. This transition is gradual, and many systems now use both IPv4 and IPv6.

Also Read: Dedicated IP Vs Shared IP: Differences, Benefits & Their Impact .

How are IP Addresses Assigned?


Connecting to the internet is similar to getting an entry ticket at an event. Each ticket (or IP address) lets you access the event (or network). Here is how this “ticketing” process works when devices connect to a network:

Imagine entering a Wi-Fi network like walking into a cafe with free Wi-Fi. When you enter (connect), you ask for a ticket to use their service.

  • Requesting an IP Address : As soon as your device connects to the network, it looks for an IP address. This is like asking the cafe staff for a Wi-Fi code.

Most home and office networks have a DHCP (Dynamic Host Configuration Protocol) server. This is usually part of your router.

  • Assigning an IP Address : The DHCP server listens to your device’s request and gives it an IP address. This address is usually temporary (dynamic). It’s like the cafe staff giving you a Wi-Fi code that changes daily.

Sometimes, a device needs a special, unchanging IP address. This is called a static IP address.

  • Setting Up a Static IP : An IT professional can manually set this up on your device. Once set, this IP address doesn’t change, even if you disconnect and reconnect to the network. It’s similar to having a VIP pass to the cafe, where you have a special code that consistently works just for you.

Things are slightly different when it comes to accessing the broader internet. Your Internet Service Provider plays a significant role here.

  • ISP Assigning Public IPs : The ISP has several IP addresses, like a stack of tickets. When your home or business network connects to the internet, the ISP gives your network a public IP address. This address represents your entire home or business on the internet. It’s like the cafe having its unique address in the city.

Best Practices for Allocating IP Addresses


IP address allocation is an important aspect of network management. It involves assigning unique identifiers to each device within a network, ensuring efficient communication and management.

Without proper planning and practices, IP address allocation can become chaotic, leading to IP conflicts and network inefficiency. That makes it critical to follow IP address allocation best practices. Here are a few best practices for allocating IP addresses:

Plan Your IP Address Scheme

  • Understand Your Network’s Size and Scope : Assess the number of devices that will be connected to your network. This understanding will help guide your IP address allocation strategy.
  • Choose Between IPv4 and IPv6 : IPv4 is the most commonly used IP version but has a limited number of addresses. IPv6 offers a much larger address space. Choose the appropriate version depending on your network's size and future growth.

Use Subnetting

Subnetting is dividing a network into smaller, manageable parts (subnets). This makes network management more efficient and improves security.

  • Organize Network into Subnets : Divide your network logically based on departments, usage types, or geographical locations.
  • Allocate IP Addresses to Subnets : Assign each subnet a range of IP addresses. Doing so helps in managing traffic and improving security.

A subnet calculator can be an invaluable tool here. It helps you divide your network logically and allocate IP ranges accordingly (a short subnet-calculator example follows the list below).

Why Use a Subnet Calculator?

  • Efficiency : It automates the calculation process, reducing errors.
  • Optimization : Helps in optimal utilization of IP address space.
  • Simplification : A subnet calculator makes subnetting more accessible, especially for complex networks.
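The example mentioned above is shown here: Python's ipaddress module can act as a small subnet calculator, splitting a parent network into equal subnets and reporting the usable host count for each. The parent network and the /20 subnet size are arbitrary examples.

    # Minimal subnet-calculator sketch: split a /16 into /20 subnets.
    import ipaddress

    parent = ipaddress.ip_network("10.0.0.0/16")
    for subnet in list(parent.subnets(new_prefix=20))[:4]:   # first four /20 subnets
        usable = subnet.num_addresses - 2                    # minus network and broadcast
        print(subnet, "->", usable, "usable host addresses")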

Implement Dynamic Host Configuration Protocol (DHCP)

DHCP automatically allocates IP addresses to devices on a network. This automation reduces the chances of IP conflicts and eases the management burden (see the sanity-check sketch after the list below).

  • Configure DHCP Server : Set up a DHCP server to manage and automate the IP address allocation process.
  • Reserve IP Addresses : For critical devices like servers and printers, reserve static IP addresses to ensure they always receive the same IP.
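The sketch referred to above is a simple check: reserved or manually assigned addresses should not collide with each other or fall inside the dynamic pool. The addresses below are invented, and one is deliberately wrong to show the output.

    # Check reservations against the dynamic pool (all values are examples).
    import ipaddress

    pool_start = ipaddress.ip_address("192.168.10.100")
    pool_end   = ipaddress.ip_address("192.168.10.199")
    reservations = {
        "fileserver": "192.168.10.10",
        "printer1":   "192.168.10.11",
        "camera1":    "192.168.10.150",   # deliberately placed inside the pool
    }

    seen = {}
    for name, addr in reservations.items():
        ip = ipaddress.ip_address(addr)
        if pool_start <= ip <= pool_end:
            print(f"{name}: {addr} falls inside the dynamic pool")
        if ip in seen:
            print(f"{name}: {addr} duplicates the address reserved for {seen[ip]}")
        seen[ip] = name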

Maintain a Record

A detailed record of IP address allocations is essential for troubleshooting and managing the network effectively.

  • Document IP Addresses : Maintain an up-to-date record of all allocated IP addresses, including static and dynamic assignments.
  • Regularly Update Records : Whenever a change is made in the network or a new device is added, update the documentation accordingly.

Also Read: How To Assign Floating IP In Leaseweb With Your Subnet In New Ways .

Implement IP Address Management (IPAM) Tools

IPAM tools can automate many aspects of IP address management, making the process more efficient and less prone to errors.

  • Choose an IPAM Solution : Select a tool that suits your network’s size and complexity.
  • Integrate IPAM with Your Network : Ensure your IPAM tool fully integrates with your network infrastructure for optimal performance.

Regularly Review and Optimize Your IP Address Scheme

Networks evolve over time. Regularly review your IP address allocations to confirm they still meet your network's needs, and be prepared to restructure your IP addressing scheme as the network changes.

Follow Security Best Practices

Securing your IP addresses is crucial to protect your network from threats.

  • Use Private IP Addresses for Internal Network : Use private IP ranges for devices within your network to enhance security.
  • Implement Firewalls and Network Segmentation : Use firewalls to control traffic and segment your network to contain potential breaches.

Challenges in IP Address Allocation and their Solutions


IP address allocation has its challenges. Below, we explore some of the top challenges linked to IP address allocation and their solutions.

Depletion of IPv4 Addresses

IPv4 addresses are limited to approximately 4.3 billion unique combinations. This number seemed plentiful when the Internet came about. However, the exponential growth in Internet users and the addition of smart devices and IoT (Internet of Things) changed this, bringing us to a point where the availability of new IPv4 addresses is critically low.

Regional Variations

The rate of IPv4 exhaustion varies globally. Some regions have already drained their allotment, while others are quickly approaching this point. This difference can cause operational challenges and inequities in internet accessibility and growth potential across various geographical areas.

Solutions for IPv4 Depletion

Let’s discuss some possible solutions to IPv4 Problems.

Adoption of IPv6

IPv6 addresses this limitation with its 128-bit address space. It offers around 340 undecillion unique IP addresses. This enormous capacity is more than sufficient to accommodate future growth.

  • Gradual Transition : The shift to IPv6 isn’t instantaneous. It requires strategic planning and investment in IPv6-compatible hardware and software.
  • Benefits : IPv6 offers enhanced security features and more efficient routing besides addressing depletion.

IP Address Sharing

Network Address Translation (NAT) is important in mitigating IPv4 depletion. It enables multiple devices in a private network to share the same public IPv4 address.

  • Types of NAT : Solutions like Port Address Translation enable multiple internal requests to be translated into a single IP address with varying port numbers.
  • Limitations : Although NAT helps conserve IP addresses, it can complicate network configurations and restrain certain services, like peer-to-peer applications.

Efficient IP Address Management

Efficient management of existing IP addresses becomes necessary as we navigate the challenges of IPv4 depletion.

IP Address Management (IPAM) Tools

These tools offer a centralized platform for locating and managing IP addresses, ensuring optimal resource use.

  • Features : Advanced IPAM solutions offer automated address allocation, real-time tracking, and detailed reporting.
  • Integration with DHCP and DNS : Effective integration with Dynamic Host Configuration Protocol and Domain Name System management is essential for streamlined operations.

Addressing Security Concerns

With the growing complexity of IP address allocation, security risks also escalate.

Enhancing Security Measures

Implementing robust security protocols is critical to protect against IP spoofing and hijacking.

  • Access Control : Implement stringent access controls and authentication mechanisms for accessing IPAM tools and network configurations.
  • Regular Audits and Monitoring : Continuous monitoring for unusual activity and routine audits of IP address allocations are essential for identifying and mitigating potential security threats.

Compatibility and Transition Challenges

The transition from IPv4 to IPv6 is a necessary evolution in internet technology. It is, however, not without its challenges, especially regarding compatibility and infrastructure readiness. Let's discuss these challenges and the solutions that facilitate a smoother transition.

The Complexity of Transitioning

  • Technical Incompatibility: IPv4 and IPv6 are incompatible. This means devices and services created for IPv4 cannot communicate with those developed for IPv6. This is a major challenge in terms of maintaining connectivity and services during the transition period.
  • Legacy Systems: Many organizations have legacy systems and infrastructure that only support IPv4. Upgrading or changing these systems can be expensive and complex. It may demand considerable planning and resources.

Solutions for Transition Challenges

Let’s discuss some possible solutions for Transition Challenges.

Dual Stacking

Dual stacking involves running IPv4 and IPv6 simultaneously on the same network. This approach allows for a gradual transition by maintaining compatibility with both protocols during migration.

  • Operational Flexibility : It offers flexibility, allowing network administrators to manage the transition at a pace that suits their organization’s needs.
  • Challenges : Running two protocols requires careful configuration to avoid complexities and potential security vulnerabilities.

Upgrading Infrastructure

Investing in new infrastructure is critical for a complete transition to IPv6. This includes upgrading routers, switches, and servers to support IPv6 natively.

  • Long-Term Investment : Although the initial cost can be high, this is a long-term investment in the network’s future scalability and security.
  • Vendor Support : Choosing vendors and solutions that offer robust IPv6 support is crucial.

Want to learn more about IPv4 and IPv6? Read our blog, ‘Evolving Internet Protocols: IPV4 Vs IPV6 Compared.’

Dynamic IP Address Allocation Issues

While efficient, the dynamic allocation of IP addresses introduces its own challenges.

Problems with Dynamic Allocation

  • IP Conflicts and Connectivity Issues: Dynamic allocation can sometimes lead to IP conflicts, where two devices are inadvertently assigned the same IP address, leading to connectivity issues.
  • Managing a Fluid Network Environment: Managing a constantly changing set of IP addresses can be challenging in dynamic environments, especially with mobile devices and IoT.

Solutions for Dynamic Allocation

Now let's discuss some possible solutions for dynamic allocation.

Dynamic Host Configuration Protocol (DHCP)

DHCP is a network management protocol that automates the assignment of IP addresses.

  • Efficient Management : DHCP servers assign IP addresses for a specific lease time, ensuring efficient use of an IP address pool.
  • Flexibility : DHCP allows for easy reconfiguration of IP settings on the network without manual intervention.

Lease Time Management

Proper management of DHCP lease times is essential in dynamic environments.

  • Optimizing Lease Times : Setting appropriate lease times can reduce the likelihood of IP conflicts and address exhaustion.
  • Adapting to Network Needs : Lease times can be adjusted based on the network’s specific needs, such as shorter leases for guest networks.

Types of IP Addresses


There are different types of IP addresses. Each serves a specific purpose. It is important to understand each IP address, its role, functionalities, etc. Here is an in-depth breakdown of the different types of IP addresses:

Consumer IP addresses are IP addresses usually assigned to individuals or households by Internet Service Providers. These addresses are used for personal internet access. Consumer IPs can be dynamic or static (discussed below). The key feature of consumer IP addresses is that they are allocated for standard, non-commercial internet usage, like browsing, streaming, or gaming.

Private IP addresses are used in a private network. They are not visible on the public internet. These are the addresses your home or office router assigns to each device on your local network. Examples include your computer, smartphone, or printer. Private IP addresses allow multiple devices in the same network to communicate.

The range of private IP addresses is defined in the IPv4 and IPv6 standards and cannot be routed through the public internet. This makes private IPs ideal for internal network security and efficiency.

Public IP addresses are used globally and must be unique across the entire internet. These are the addresses that ISPs assign to each of their customers. When you open a website, your public IP address is how the website knows where to send the data. Public IP addresses are essential for external communication over the internet. They enable different networks worldwide to connect and interact.

Dynamic IP addresses are temporary. They are assigned to a device each time it connects to the network. A Dynamic Host Configuration Protocol server within the network typically manages and distributes these addresses. This includes your router or ISP.

A standout advantage of dynamic IPs is their efficiency in reusing addresses. They are ideal for consumer and business networks where devices frequently come and go. This is because they eliminate the need for manual IP configuration and reduce the risk of IP address conflicts.

Static IP addresses are fixed. They do not change over time. These addresses are manually assigned to a device and remain constant until changed manually. Static IPs are beneficial when a device needs a constant address. For instance, a server hosting a website or a remote access system.

They provide reliable and consistent remote access. This makes them ideal for businesses with fixed network infrastructure. However, static IPs require more management and are more likely to experience security risks if not properly secured. This is because their constant nature can make them easier targets for malicious activities.

What Are the Two Types of Website IP Addresses?


Website IP addresses come in two main types: shared and dedicated. Both have their unique characteristics, advantages, and drawbacks. Here is more information on the two types of website IP addresses:

A shared IP address refers to an IP address used by numerous websites or domains. This setup involves a server hosting several websites with the same IP address. This is a common practice in web hosting, especially in shared hosting environments where multiple clients’ websites are hosted on the same server infrastructure.

How Shared IP Addresses Work

Here's how shared IP addresses work (a small code illustration follows the list):

  • Hosting Structure : Multiple websites are hosted on a single server. They share the server’s resources.
  • Traffic Routing : The server uses the same IP address to route traffic to all these websites. The HTTP/HTTPS header contains the domain name. This tells the server which website the user is trying to access.
  • Domain Name Resolution : The server's software, like Apache, reads the request and serves the appropriate website content based on the domain name.
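The code below is the small illustration promised above. It uses Python's built-in http.server to show the idea of name-based virtual hosting: one listening address, with the response chosen by the Host header the browser sends. The domain names are made up, and real web servers such as Apache implement this with their own virtual-host configuration rather than code like this.

    # Toy name-based virtual hosting: one IP and port, content selected by the Host header.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SITES = {
        "example-shop.test": b"Welcome to the shop site",
        "example-blog.test": b"Welcome to the blog site",
    }

    class SharedIPHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            host = (self.headers.get("Host") or "").split(":")[0]   # strip any port number
            body = SITES.get(host, b"Unknown site on this shared IP")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8080), SharedIPHandler).serve_forever()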

Advantages and Disadvantages of Shared IP

Shared IP addresses offer several advantages in various network and web hosting scenarios:

  • Cost-Effective : Shared IP addresses are generally more affordable. This makes them best for small businesses or personal websites with limited budgets.
  • Easy to Manage : Shared IPs offer a seamless experience. This is mainly because the hosting provider handles the technical aspects. This includes server maintenance and software updates.
  • Reduced Complexity for Small-Scale Deployments: Shared IP addresses offer a simplified setup. This is ideal for individuals or small businesses with basic website requirements. The setup eliminates the need for complex network configurations.

Disadvantages

While shared IP addresses offer advantages, they also come with some disadvantages:

  • Neighbor Effect : If one website on a shared server is blacklisted by search engines or flagged for spamming, it can negatively impact all other websites sharing the IP. This is because the IP address’s reputation affects all associated domains.
  • Limited Control and Customization : Users have less control over server settings and resources. For instance, installing specific software or performing custom server configurations is often not possible.
Shared IP addresses are typically a good fit for:

  • Small business websites with limited traffic.
  • Personal blogs or portfolio websites.
  • Startups looking for cost-effective hosting solutions.
  • Websites without the need for SSL certificates for online transactions.
  • Users with basic hosting needs and minimal technical expertise.

Also Read: Your Ultimate Cheat Sheet To IP Transit .

A dedicated IP address is a distinctive Internet Protocol address exclusively assigned to a single hosting account or website, offering higher control and stability. This exclusivity distinguishes it from shared IP addresses, where multiple websites use one IP.

How Dedicated IP Addresses Work

Here’s how dedicated IP addresses work:

  • Unique Assignment : Each website or hosting account is assigned a unique IP address.
  • Hosting Flexibility : While typically associated with dedicated servers, a dedicated IP can also be used in a shared hosting environment, providing a unique identity to a website on a shared server.
  • Direct Access : Websites with a dedicated IP can be accessed directly via the IP address, facilitating specific tasks and setups.

Advantages and Disadvantages of Dedicated IP

Dedicated IP addresses offer several advantages in various network and hosting scenarios:

  • Improved Performance and Security : Being the sole occupant of an IP address means the site’s performance isn’t affected by other sites’ traffic and usage patterns. Dedicated IPs also reduce the risk of IP blacklisting due to other websites’ malpractices.
  • Required for Certain Applications ( SSL Certificates) : Essential for websites that handle sensitive transactions, such as e-commerce sites. A dedicated IP facilitates the installation of these certificates.
  • Greater Control : Dedicated IPs enable more control over DNS settings. This is important for larger websites or those with specific technical requirements. They also allow for more advanced server configuration and customization options.

Despite the advantages, dedicated IP addresses also come with some disadvantages:

  • Higher Expense : Dedicated IPs come with additional costs and often require more expensive hosting plans with dedicated IP functionality. This can be a major drawback for small businesses or individual users.
  • Technical Knowledge : Dedicated IPs require more technical expertise to manage and configure. Users also need to be more involved in server administration, which may be stressful for those without technical backgrounds.

Dedicated IP addresses are typically the right choice for:

  • E-commerce websites requiring SSL certificates for secure transactions.
  • Large corporate sites with high traffic volumes.
  • Websites needing specific server customizations.
  • Services requiring a stable and consistent IP address (example: email servers).
  • Websites under compliance regulations that demand heightened security measures.

Factors to Consider: Shared vs Dedicated IP Addresses

Several critical factors should be considered when choosing between a dedicated and a shared IP address for your website. Doing so helps ensure your choice aligns with your website’s needs and goals. Below are a few factors to consider:

SEO Impact

The impact of IP addresses on SEO is a topic of much debate. Having a shared IP address does not negatively impact a website’s SEO. Search engines have become capable enough to recognize different websites on a shared IP.

However, if a website on a shared IP is involved in malicious activities and gets blacklisted, this can have a collateral impact on other sites sharing the IP. A dedicated IP eliminates this risk, as the IP reputation is entirely in your control.

Website Traffic and Resources

The traffic volume your website expects to handle is a significant consideration. For high-traffic sites, a dedicated IP address can be beneficial. It offers more stability and can handle higher traffic loads without the risk of being affected by other sites’ traffic surges, a common issue with shared IPs.

A dedicated IP can provide these resources more reliably if your website requires significant resources. This includes higher bandwidth, more storage, or enhanced processing power.

Security Needs

Security has become a need instead of a choice for websites handling sensitive transactions. This includes e-commerce sites, financial services, or any platform dealing with personal user data. A dedicated IP address offers a higher level of security.

It allows for installing SSL certificates , crucial for encrypting data and secure communication. Having a dedicated IP can help implement more stringent security measures and protocols. This might be necessary to comply with various data protection regulations.

Budget

The website hosting and management budget is crucial for many, especially for small businesses or personal websites. Shared IP addresses are more cost-effective, as the server and IP address cost are distributed among all the websites hosted on that server.

This makes it an attractive option for those who are budget-conscious. However, weighing the cost savings against potential drawbacks, such as the ‘neighbor effect’ and limited control, is important.


Strategies to Protect Your IP Address

Below are a few strategies to protect your IP address:

Use a Virtual Private Network (VPN)

A Virtual Private Network (VPN) is one of the best ways to protect your IP address. With a VPN, your internet connection is encrypted and routed through a server in a location of your choosing.

This process masks your IP address and replaces it with one from the VPN server. While keeping your IP address hidden from websites and ISPs, a VPN also helps secure your data from eavesdroppers.

Use a Proxy Server

Proxy servers function similarly to VPNs. They act as intermediaries between the internet and your device. When you use a proxy server, your internet traffic is routed through the proxy server, hiding your actual IP address.

Proxies can be helpful in bypassing geo-restrictions or for basic IP masking. However, they often lack the strong encryption VPNs offer, making them less secure.

Use Private Browsing Mode

Most modern web browsers offer a private browsing mode (like Incognito in Google Chrome). While private browsing doesn’t hide your IP address from external entities, it can prevent your browser from storing information about your browsing session, including cookies, history, and form inputs. This is particularly useful for protecting your privacy from others who might use the same device.

Use Public Wi-Fi (With Caution)

Connecting to public Wi-Fi can effectively mask your home IP address. It does, however, come with considerable security risks. Public Wi-Fi networks are often unsecured, making them hotbeds for cybercriminal activity. If you choose this method, using a VPN to encrypt your connection and shield your data from potential threats is crucial.

Use the Tor Network

Tor (The Onion Router) is a free network for anonymous communication. It routes your internet traffic through several servers, encrypting it at every step. This makes tracing your IP address extremely difficult. Although Tor is a powerful tool for maintaining anonymity, it can slow down your internet connection and is unsuitable for data-heavy activities like streaming or downloading large files.

Choose a Privacy-Focused ISP

Some Internet Service Providers prioritize user privacy and offer services like dynamic IP allocation, where your IP address changes regularly. These ISPs may also provide built-in VPN services. It is worth researching reliable ISPs in your area and considering a switch to one that offers these privacy-boosting features.

Restart Your Router to Change a Dynamic IP

If you’re using a dynamic IP address, which is common with many ISPs, you can often change it by restarting your router. This is not a foolproof privacy method, but regularly changing your IP address can make it more challenging for someone to track your online activities over time.

Challenges and Considerations

Although the above strategies offer protection for your IP address, they also come with challenges, including:

  • VPN and Tor Network : Tor and VPN sometimes slow internet connections due to the encryption process and traffic routing through several servers.
  • Proxy Servers : Not all proxies are secure. Using reputable proxy services is crucial to avoid exposing your data to potentially malicious operators.
  • Public Wi-Fi : Public Wi-Fi can still come with risks, even when using a VPN. This makes it essential to be cautious about the activities you perform on these networks.


The Role of Dedicated Servers in IP Allocation

Below, we shed light on why the role of dedicated servers in IP allocation is crucial:

A dedicated server is a remote server completely dedicated to an organization, individual, or application. Unlike shared servers, it is not shared with multiple users or organizations. These servers offer robust performance, security, and control over the server environment.

Here are some key characteristics of dedicated servers:

  • Exclusive Use : Dedicated servers provide exclusive resources to the user, including CPU, memory, and disk space.
  • Performance and Reliability : They offer high performance and reliability since resources are not shared.
  • Security and Control : Users have greater control over server configuration, enhancing security and customization.
  • Cost : Dedicated servers are more costly than shared hosting solutions, reflecting enhanced capabilities.


Below are a few ways a dedicated server plays an integral role in IP allocation:

Unique IP Address Allocation

Each server is typically assigned a unique IP address. This exclusive allocation ensures that the server can be easily identified on the network. It boosts security, and lowers the risk of IP address conflicts common in shared hosting environments.

Enhanced Security and Reputation

Having a unique IP address associated with a server can improve security. It allows for more precise control over inbound and outbound traffic. A unique IP reduces the risk of being blacklisted or negatively impacted by other users’ actions, a common concern in shared IP environments.

Facilitating Secure Connections

Dedicated servers often host websites that require secure connections (HTTPS). A unique IP address is necessary for SSL certificates, which are crucial for encrypting data and ensuring secure transactions.

Supporting Multiple Websites

Organizations using servers can host multiple websites, each with a unique IP address. This setup is beneficial for SEO and helps differentiate traffic and analytics for each site.

Customization and Configuration

Organizations can customize IP address allocation with servers based on their needs. This flexibility includes setting up subnet masks, configuring gateways, and managing DNS settings, which is vital for larger organizations or those with specific network requirements.

Scalability and Growth

As organizations grow, their need for IP addresses increases. Dedicated servers allow for scalable IP allocation strategies, ensuring that businesses can expand their online presence seamlessly without the constraints of limited or shared IP resources.

Efficient IP address allocation is an important aspect of robust network management. Following the best practices in IP address management discussed above can offer several advantages. Some major benefits include operational efficiency, fortifying network security, and preparing the infrastructure for scalable growth.

Regarding efficient IP address allocation, the expertise and solutions offered by RedSwitches are invaluable. We are known for offering premium hosting services and providing many solutions tailored to businesses’ unique needs in managing their network resources.

RedSwitches is a reliable partner in optimizing your IP address allocation strategy. We ensure your network’s integrity and help your business in the evolving digital space. Explore our services now to discover how we can help transform your network management practices.

FAQs

Q. What is an IP address?

An IP address is a distinctive numerical designation assigned to every device linked to a computer network that communicates using the Internet Protocol. It serves two primary purposes: identifying the host or network interface and providing the host’s location within the network.

Q. How do I find my IP address?

Those wondering how to find their IP address can follow the directions below:

  • On Windows : Open Command Prompt and type ipconfig to find your IP address listed under the relevant network adapter.
  • On macOS : Go to System Preferences > Network. Choose the network connection, and your IP address will be visible.
  • On Linux : Use the ifconfig or ip addr command in the terminal.
  • On Mobile Devices : Go to your device’s network settings and find your IP address listed under the Wi-Fi or mobile data network information.
  • Online : You can also look up your IP address using various online tools and websites specifically designed for IP address lookup.

Q. What is the IP address code?

“IP address code” is not a standard term. In the context of the format of an IP address, it is simply a numerical label. For IPv4, it is a 32-bit number (e.g., 192.168.1.1). For IPv6, it is a 128-bit number (e.g., 2001:0db8:85a3:0000:0000:7a2e:0380:7534).

Q. How do IP addresses work?

IP addresses work by serving as a distinctive identifier for devices on a network. This allows them to communicate with each other and exchange data.

Q. What is the difference between IPv4 and IPv6 addresses?

IPv4 addresses are 32-bit numerical addresses, while IPv6 addresses are 128-bit addresses written in hexadecimal notation. IPv6 offers a much larger address space and improved security features.

Q. Why is IP address security important?

IP address security is crucial for protecting personal privacy. It prevents unauthorized access and protects against cyber threats and malicious activities.

Q. How can I hide my IP address?

You can hide your IP address using a proxy server or VPN (virtual private network). Doing so enables you to browse the internet anonymously.

Q. What is a MAC address?

A MAC address, or Media Access Control address, is a unique identifier assigned to a network interface so that it can communicate with other devices within a network segment.

Q. How can I find the physical location of an IP address?

You can find the physical location of an IP address using IP location tools or services available online that provide geolocation data.

Q. Why use a static IP address?

Using a static IP address ensures that the assigned address remains constant. This is beneficial for certain network configurations and applications requiring a fixed IP.

Q. What is the best way to check my IP address in Windows?

The best way to check your IP address in Windows is to use the command prompt and enter “ipconfig” to view the assigned IP address and related network information.


Cisco Catalyst 9800 Series Configuration Best Practices




Table of Contents

Revision History

Introduction

Notes about this guide

Prerequisites

Cisco Catalyst 9800 Series new configuration model

Assigning tags

Moving APs between controllers and preserving tags

Roaming between policy tags

Designing with site tags in mind (Local mode APs)

Designing with Site Tags in mind (FlexConnect mode APs)

Enhance your design with the site-tag “load” command

Enhance your design with the RF-based Automatic AP Load Balancing

Requirements and Recommendations:

Install vs. bundle mode

Wireless management interface

Configuration requiring controller reload or network down

Enabling NTP

Configuration file management

Core dump export

Debug bundle

Web user interface (WebUI)

C9800-CL considerations

Checking configuration errors

Configuration: special characters

SNMP recommended settings

Configure predictive join: Primary/Secondary/Tertiary controller

Primary/secondary/tertiary versus backup primary/backup secondary

Set AP syslog destination

Access Point Console Baud Rate

Spanning Tree Protocol (STP) setting on uplink ports

Prune VLANs on controller uplink ports

Use of the service port

Address Resolution Protocol (ARP) proxy

DHCP bridging and DHCP relay

Internal DHCP server

DHCP timeout

Wireless management IP addressing

Wireless management interface VLAN tag

Use of VLAN 1 in a Policy Profile

Wireless client interfaces

Virtual IP address

Link aggregation mode

Preventing traffic leaks for guest or AAA override scenarios

APs and Wireless Management Interface VLAN

AP-to-controller round-trip latency

Use PortFast on AP switch ports

Prune VLANs for FlexConnect mode AP switch ports

Enable TCP MSS across all APs

Use broadcast SSID

Voice Cisco Centralized Key Management timestamp validation

VLAN groups

Multicast VLAN

Enable client profiling

Application Visibility and Control

Enable 802.11k for optimal roaming

Sleeping Client feature

Client Timers

Anchoring an SSID and broadcasting it to local APs

Passive Clients

Third party WGB

Dealing with trustpoints

Trustpoint and Cisco Catalyst Center

Local management password policies

User Login Policy

Password Encryption

Disable Management via Wireless

Default AP console username and password

802.1X authentication for AP ports

Enable secure web access

Secure SSH/Telnet

Enable 802.11r Fast Transition

DHCP Required option

Client exclusion

Peer-to-peer blocking

Wireless management VLAN mapping to WLAN (via policy profile)

AAA override

AAA VLAN and Fabric VNID Override

EAP Identity request timeout and maximum retries

EAP request timeout and maximum retries

EAPoL key timeout and maximum retries

RADIUS Server Timeout

TACACS+ management timeout

SNMP Communities

Rogue Policies

Rogue monitoring channels

Define appropriate malicious rogue AP rules

Identify and update friendly rogue AP list

AP Rogue Detection Configuration

Enable ad hoc rogue detection

Enable rogue client AAA validation

Rogue Location Discovery Protocol

Rogue notifications and telemetry

Stateful switchover (SSO)

Mobility MAC

SSO HA with C9800-CL and vMware vMotion

Other SSO best practices

Returns & Replacements (RMA) replacement procedure for SSO pair

Site survey

Low data rates

Reducing the number of SSIDs

Band select

RF profiles

Aggregated probe response optimization

Optimized roaming

Aggressive load balancing

Enable CleanAir

Event-driven RRM

Spectrum Intelligence

Dynamic Channel Assignment

DCA interval

Channel width

Wi-Fi interference awareness

DCA and Dynamic Frequency Selection

DCA restart

DCA Cisco AP Load

DCA and Flexible Radio Assignment

DCA interval vs. FRA interval

Coverage hole detection

Mobility group connectivity

Seamless and fast roaming

Mobility group size

Inter-controller Layer 2 versus Layer 3 roaming

Reduce the need for inter-controller roaming

Inter-release controller roaming

Seamless Layer 3 roaming

Mobility groups and Secure Mobility

Moving APs between an AireOS WLC and the C9800

FlexConnect mode on the C9800

Local Switching

FlexConnect site tag

Split Tunneling

VLAN-based central switching

Wireless QoS for the Catalyst 9800 Wireless Controller

Metal QoS profiles

Wireless QoS recommendations

Verifying the QoS settings on the Catalyst 9800

Multicast Forwarding Mode

IGMP and MLD snooping

Multicast DNS (mDNS)

Perform an RF active site survey

Estimate coverage area using the Cisco Range and Capacity Calculator

Outdoor AP deployments

Avoid selecting DFS channels for backhaul

Deploy multiple RAPs in each BGN

Recommended mesh settings

C9800 Managed by Prime and Catalyst Center

May 3, 2024

●      New sections

◦     Designing for Large Scale Deployments : APs to WNCd mapping, site tags design, features, and recommendations

◦     Access Point Console Baud Rate : Changes and recommendations with changes coming in 17.12.1 and above

●      Updated sections

◦     Use of the Service Port : Updated the list of supported protocols

◦     Wireless client interfaces : Recommendations around access control lists (ACLs) on Client SVIs

◦     Client Timers : Revised recommendations for session and exclusion timeout

◦     Enable 802.11r Fast Transition : Updated recommendation to set 802.11r mixed mode instead of adaptive 802.11r

The Cisco® Catalyst® 9800 Series (C9800) is the next-generation wireless LAN controller from Cisco. It combines RF excellence gained in 25 years of leading the wireless industry with Cisco IOS® XE software, a modern, modular, scalable, and secure operating system.  The Catalyst Wireless solution is built on three main pillars of network excellence: Resiliency, Security, Intelligence.

Compared to the AireOS WLC, the C9800 software has been rewritten from scratch to leverage the benefits of Cisco IOS XE, and the configuration model has been made more modular and flexible. This means that, although most AireOS features are retained, there might be changes in the way you configure certain functionalities.


This document covers the best practices recommended for configuring a typical Cisco Catalyst 9800 Series wireless infrastructure. The objective is to provide common settings that you can apply to most wireless network implementations. But not all networks are the same. Therefore, some of the tips might not be applicable to your installation. Always verify them before you perform any changes on a live network.

The first part of the document focuses on some important configuration and design concepts of the Catalyst 9800 Wireless Controller. These will be useful to understand the best practices presented in the rest of the document. The guide is a list of recommended configurations organized in sections: General, Network, Radio Frequency (RF), Security settings and more.


This will open up another window where you can compare the existing and new configuration. The commands that are different are highlighted: green indicates new commands, orange modified commands, and red deleted commands. Below is an example for a new rogue management setting.

[Screenshot: configuration comparison view showing a new rogue management setting]

Each recommended setting will be highlighted if there are some known restrictions or if it applies to a specific release of code. The differences with AireOS will also be underlined.

The information in this document is derived from tests on devices in specific lab environments. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.

Cisco recommends that you have knowledge of these topics:

●      Cisco wireless compatibility matrix for the latest on the supported compatible releases: https://www.cisco.com/c/en/us/td/docs/wireless/compatibility/matrix/compatibility-matrix.html and the latest on the features supported on access points: https://www.cisco.com/c/en/us/td/docs/wireless/access_point/feature-matrix/ap-feature-matrix.html

●      Cisco publishes the list of IOS XE recommended releases here: https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/214749-tac-recommended-ios-xe-builds-for-wirele.html

●      Always check the release notes for the specific software you plan to implement: https://www.cisco.com/c/en/us/support/wireless/catalyst-9800-series-wireless-controllers/products-release-notes-list.html

●      New Cisco Catalyst 9800 Wireless Controllers Configuration Model. More information can be found here: https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/213911-understand-catalyst-9800-wireless-contro.html

●      Most of the features covered in this document are documented either in the configuration guides: https://www.cisco.com/c/en/us/support/wireless/catalyst-9800-series-wireless-controllers/products-installation-and-configuration-guides-list.html

or in the technical references:

https://www.cisco.com/c/en/us/support/wireless/catalyst-9800-series-wireless-controllers/products-configuration-examples-list.html

The information in this document is based on the following software and hardware versions:

●      Cisco Catalyst 9800 Series Wireless Controller platforms: All platforms unless explicitly called out.

●      Cisco Catalyst 9800 Series Wireless Controller software: The recommendations are valid for every release starting with 16.10.1e (the first release) unless explicitly called out.

●      Cisco 802.11ax (Wi-Fi 6 and 6E) and 802.11ac (Wi-Fi 5) access points.

A quick recap first. The Cisco Catalyst 9800 Series new configuration model is based on two constructs: profiles and tags. Profiles group a set of features and functionalities, and tags allow you to assign these features and functionalities to APs. There are five types of profiles:

●      AP Join profile or AP profile: Contains general AP settings such as Control and Provisioning of Wireless Access Points (CAPWAP) timers, 802.1X supplicant, SSH/Telnet settings, and many more. These settings in AireOS are usually global configurations for all the APs.

●      WLAN profile: Defines the SSID name and profile and all the security settings.

●      Policy profile: Contains policy to be associated with the WLAN. It specifies the settings for client VLAN, authentication, authorization, and accounting (AAA), access control lists (ACLs), session and idle timeout settings; and so on.

●      Flex profile: Groups all settings to be assigned to a Flex AP: native VLAN, ACL mapping, and so on.

●      RF profile: As in AireOS, it defines the RF characteristics of each band.

The tag allows you to bind the settings in the profiles to an access point. There are three types of tags:

●      Policy tag: Ties together the Policy profile and the WLAN.

●      Site tag: Assigns the AP Join profile settings to the AP and determines if the site is a local site, in which case the APs will be in local mode, or not a local site, in which case the APs will be in Cisco FlexConnect® mode.

●      RF tag: Binds the 5-GHz and 2.4-GHz profiles to the AP.

An access point is always assigned three tags, one for each type. If a tag is not explicitly defined, the AP will get the default policy, site, or RF tag.

The C9800 configuration model allows the customer to have much more flexibility in tweaking the configuration to fit a specific wireless deployment. Let’s take the TCP MSS Adjust setting as an example: In AireOS this is a global setting, so the same value is either applied to all the APs at each location or is left as the default. With the new configuration model, the TCP MSS Adjust value is set at the AP Join profile level, so the customer can evaluate the transport network at each site and decide the value that is best for a specific group of APs. This applies to all the settings, and it’s a great value add.
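To make this concrete, here is a minimal CLI sketch of how a per-site TCP MSS value might be applied at the AP join profile level. The profile name and MSS value are made up for illustration, and the exact tcp-adjust-mss syntax and supported value range should be verified against the configuration guide for your release:

C9800(config)#ap profile BRANCH-AP-JOIN
C9800(config-ap-profile)#tcp-adjust-mss size 1250

Each location could use its own AP join profile with a value suited to its transport network, which is exactly the flexibility described above.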

Cisco Catalyst 9800 Series profile and tag considerations

As just described, with the C9800, some configurations are done differently than in AireOS, with the intent of making the settings more flexible and easier to use. Functionalities that you are used to in AireOS wireless controllers are also supported in the C9800, but you need to get familiar with the configuration model in order to use them. In addition, the new configuration model is designed to be extended to the new differentiating features supported by the C9800.

The following sections describe best practices for profiles and tags and give some tips on how to best use them.

Assigning tags

Each access point needs to be assigned three unique tags: a policy, site, and RF tag. By default, when an AP joins the C9800 wireless controller, it will get the default tags, namely the default policy tag, default site tag, and default RF tag. The user can make changes to the default tags or create custom tags. To know what tag has been configured on each AP, you can go to the GUI:

[Screenshot: viewing AP tag assignments in the GUI]

On the CLI, use the show ap tag summary command:

[Screenshot: sample output of the show ap tag summary command]

This command clearly indicates whether there is a misconfiguration involving tags and profiles. A typical example of tag misconfiguration is assigning the same WLAN to two different Policy profiles with different Application Visibility and Control (AVC) settings. In this case the show avc status <WLAN name> command will flag it as an error, with a related explanation.

Notice the Tag Source field in the output of the command above; this tells you how the AP got the tags. The possible sources, in order of priority, are:

●      Static : You select the AP and assign it specific tags. The configuration is saved on the controller based on the AP’s Ethernet MAC address. When an AP joins that specific controller, it will always be assigned the specified tags.

●      Location : This is a configuration construct internal to the C9800 (it’s not the AP location that you can configure on each AP), and it’s used primarily in the Basic Setup flow. A location allows you to create a group of three tags (policy, site, and RF) and assign APs to it.

●      Filter : You can use a regular expression (regex) to assign tags to APs as they join the controller. As of today you can set a filter based only on AP name, so this method cannot be used for out-of-the-box APs.

●      AP : The AP itself carries the tag info learned through Plug and Play (PnP) or pushed from the controller.

●      Default : This is the default tag source.

The first two sources (static and location) are static mapping configurations to assign APs to tags and hence have the highest priorities. The filter source allows you to define a dynamic mapping of APs to tags based on regex expressions. When the source is the AP, it means that this information is saved on the AP itself and will be presented to the controller when the AP joins. Finally, if there is no tag mapping configuration on the C9800, and if the AP doesn’t carry any tag information, the AP is assigned the default tags.
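For reference, a static per-AP tag mapping can also be configured from the CLI. The following is a sketch under assumptions: the Ethernet MAC address and tag names are hypothetical, and the exact syntax should be verified on your release:

C9800(config)#ap f4bd.9e00.1234
C9800(config-ap-tag)#policy-tag PT_FLOOR1
C9800(config-ap-tag)#site-tag ST_FLOOR1
C9800(config-ap-tag)#rf-tag RF_FLOOR1

Because this is a static mapping keyed on the AP’s Ethernet MAC address, it takes precedence over the filter, AP, and default tag sources described above.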

A simple way to assign multiple APs to a set of tags is to use the Advanced setup in the GUI ([Configuration] > [Wireless Setup] > [Advanced]); click Start Now on the main page and then go to the Apply section and click the icon to display the AP list:

[Screenshot: Advanced Wireless Setup — AP list]

On the following page, select the APs you want and click + Tag APs, then assign the tags in the popup window:

[Screenshot: assigning tags to the selected APs in the popup window]

Starting with software release 17.6, the tags can be automatically saved on the AP by leveraging the “AP tag persistency” feature. This is enabled globally on the controller with the CLI command:

C9800(config)#ap tag persistency enable

In 17.6 the feature is disabled by default for backward compatibility with previous releases, but Cisco recommends enabling it. When the tag persistency feature is enabled, APs joining a C9800 wireless controller will have the configured tags saved on the AP automatically.

Before AP tag persistency was introduced, to push and save the tags to the AP, you had to use a CLI command in exec mode, per single AP:

c9800-1#ap name <APname> write tag-config

The operational advantages of the AP tag persistency feature are clear when you need to move APs between wireless controllers. This can be in the context of AP migration or in a primary/secondary (N+1) high availability deployment. Since the tags are saved on the AP, when the AP joins the second WLC, it will present the tags and, as long as these exist on the controller, the mapping will be honored. Of course, the tag source priorities still apply, and the AP tag source is considered only if no static or filter-based mappings are present for that AP.

Another way to preserve tags when moving APs from one controller to the other is to use an AP tag filter. Let’s say you want to move APs that are on floor 1 from WLC1 to WLC2. Let’s assume that you have named the APs accordingly as “APx_floor1,” where “x” is the AP number. You need to configure the desired tags on both controllers and then, on WLC2, configure a filter rule to match any AP name that ends with “floor1” and assign it to the desired tags. Go to Configuration > Tags & Profiles > Tags, and click Filter:

[Screenshot: Tags page with the Filter tab selected]

You can add a new rule by clicking +Add in the page above. Here is an example of a rule that matches any AP name ending with floor1:

[Screenshot: filter rule matching AP names ending with floor1]
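A similar rule can be sketched from the CLI as well; treat the following as an assumption-based example (the filter name, regex, tag names, and priority value are invented, and the exact command set may differ by release):

C9800(config)#ap filter name FLOOR1_FILTER
C9800(config-ap-filter)#ap name-regex floor1$
C9800(config-ap-filter)#tag policy PT_FLOOR1
C9800(config-ap-filter)#tag site ST_FLOOR1
C9800(config-ap-filter)#tag rf RF_FLOOR1
C9800(config)#ap filter priority 10 filter-name FLOOR1_FILTER

The priority value determines the order in which filters are evaluated when an AP name matches more than one rule.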

Finally, you can ensure the AP is assigned the right tags when joining another controller by pre-configuring the AP to tag mapping using a CSV file. This is easily done in two steps:

●      Create the CSV file first. It needs to be in a specific format: “AP Ethernet MAC, Policy Tag name, Site tag name, RF tag name”. Here is an example:

[Screenshot: sample CSV file]

●      Load the CSV file in Configuration>Tags & Profiles>Tags as indicated in the following screenshot:

[Screenshot: loading the CSV file on the Tags page]
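As a purely hypothetical illustration of the CSV format quoted above (“AP Ethernet MAC, Policy Tag name, Site tag name, RF tag name”), a file might contain rows like the following; the MAC addresses and tag names are invented, and you should confirm on your release whether a header row is expected:

F4BD.9E00.0001,PT_FLOOR1,ST_FLOOR1,RF_FLOOR1
F4BD.9E00.0002,PT_FLOOR1,ST_FLOOR1,RF_FLOOR1
F4BD.9E00.0003,PT_FLOOR2,ST_FLOOR2,RF_FLOOR2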

Since you can modify the existing tags, create new ones, and attach them to the APs in different ways, it’s recommended that you validate the tag configuration using the following command in exec mode to catch any inconsistencies:

C9800#wireless config validate

The previous paragraph describes how the C9800 handles the mapping of tags to APs. Given this information, the following should be considered when moving APs between two C9800 wireless controllers (C9800-1 and C9800-2):

●      If the AP on C9800-1 doesn’t hold any tag information (either via the ap tag persistency feature or via the command “ ap name <APname> write tag-config”) and there is no mapping configured for that AP on C9800-2, the AP will be assigned default tags when moved to C9800-2.

●      The AP will retain the tag information when moving between the controllers, if both have the same mapping of AP to tags. This can be done via static configuration, by assigning the AP to a location, or via tag filters.

●      The AP will also retain its tags when moved between the two controllers if the tags are saved to the AP itself (either via the ap tag persistency feature or via the command “ap name <APname> write tag-config”), the tags are defined on both controllers, and there is no higher priority mapping defined (for example, a static configuration on C9800-2 that assigns the AP a different set of tags).

●      If the AP has saved tags and joins a controller where those tags are not defined, it will be assigned to the default tags (assuming no other mapping is configured on the controller that the AP is joining).

●      In all cases, if the AP retains its tag name assignment but the settings within the tag are different on the two controllers, the AP will be configured based on the settings present on the currently joined controller.

Note:      The above information applies to N+1 redundancy as well.

When moving an AP from an AireOS controller to a C9800 controller, since the AP doesn’t carry any tag information from AireOS, it will be mapped to the default tags; this is true unless a static or dynamic tag preassignment has been done on the C9800 controller, as explained above.

Policy tags are used to decide which SSID is being broadcast by which AP and with what policy, so they define the broadcast domain for a group of APs. In this respect, the policy tag is very similar to the concept of an AP group in AireOS.

Currently, a client roaming between two APs configured with the same SSID but different associated policies will result in a slow roam. In other words, roaming across two different policy tags (same SSID, but different policy profile name) will force the client to go through a full authentication and DHCP process to renew its IP address. This is true even for intra-controller roaming, and it is meant to prevent clients from jumping from one policy to another without a full reauthentication.

Note:      If the policy profile associated to the SSID is the same (same name and content) in different policy tags, then roaming for that SSID is seamless. The slow roam happens if there is a change in the policy profile associated to the SSID.

This needs to be considered when designing your wireless network with the C9800. Consider a customer use case in which a university has a rule to use /22 subnets across the campus. It uses one network-wide faculty SSID, and since it has more than 1022 users, it needs to assign multiple client subnets to the SSID.

In AireOS, there are three common ways of implementing this:

1.      Using a VLAN override from the AAA server to assign different groups of users to different subnet/VLANs. 

2.      Using VLAN Select (a.k.a. the interface group feature) to map multiple client subnets to the same SSID and assign clients in a round-robin fashion to the available VLANs in the group.

3.      Using AP groups to map a specific VLAN to the SSID for each group of APs. This also allows the user to know deterministically which IP subnet the client will belong to as it joins that location (group of APs).

Option 1 is fully supported with the C9800. You can also use option 2 by using a feature similar to AireOS’s VLAN Select, which is called VLAN groups. Recall that the Cisco Catalyst wireless controller doesn’t need a Layer 3 interface associated to the client VLAN, so you can actually group the Layer 2 VLANs. Configure the VLAN group first and assign the VLANs (VLANs 210 and 211 in this example):

[Screenshot: VLAN group configuration with VLANs 210 and 211]

Note:      It is not recommended to mix DHCP and static IP clients on the same SSID when it is associated with a VLAN group.

Then configure the Policy profile to map the SSID to the defined VLAN group:

[Screenshot: Policy profile mapped to the VLAN group]

And then assign all the APs to the same policy tag where the SSID is mapped to this policy.
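The equivalent CLI configuration might look like the following sketch; the group name, policy profile name, and VLAN IDs reuse the example above, but the exact commands should be verified on your release (policy profiles typically need to be shut down while being edited):

C9800(config)#vlan group VG_FACULTY vlan-list 210-211
C9800(config)#wireless profile policy PP_FACULTY
C9800(config-wireless-policy)#shutdown
C9800(config-wireless-policy)#vlan VG_FACULTY
C9800(config-wireless-policy)#no shutdown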

For option 3, you would have to define two Policy profiles, one with VLAN 210 and one with VLAN 211, and map them to the same SSID using a different policy tag. Then you apply the different policy tags to the different groups of APs. In this case, you need to consider the limitation of slow roam across policy tags mentioned earlier: if the two locations are separated and have an air gap, there is no problem, as the client will have to disconnect anyway. But if the locations are in the same roaming domain, you need to consider that the client will go through a full reauthorization as it roams across the two policy tags with different VLANs. This is different from AireOS behavior: An AireOS WLC would allow seamless roaming across two AP groups mapped to different VLANs.

Starting with Cisco IOS XE Release 17.3, if the policy profiles differ only for certain parameters (VLAN and ACL being the most important), then seamless roaming is allowed across policy profiles (and related policy tags). To configure the feature, enter the following command in global config mode:

c9800(config)#wireless client vlan-persistant

Even if the command only mentions “VLAN”, in reality there are many other parameters that can differ between the two policy profiles and still result in a seamless roam. For a complete list of these attributes, visit: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/17-3/config-guide/b_wl_17_3_cg/m_client_roaming_policy_profile.html .

The recommendation is to consider this behavior as you design your policy tag assignment: All APs in the same roaming domain should have the same policy profile; if you need to assign different policies, then we recommend you deploy release 17.3 or newer and use the wireless client vlan-persistant feature.

Designing for large scale deployments

With the high-end model, the Catalyst 9800 Wireless LAN Controller supports up to 6,000 APs and 64,000 clients on a single platform; this is a lot of APs and clients. When dealing with large, high-density deployments, you may want to make sure that you keep the load of your WLC under control.

Most of the time, when you hear the word “load” it refers to the CPU load. Catalyst 9800 physical appliances have data plane acceleration in hardware, so what may stress the multi-CPU software architecture is mostly control plane activity: handling AP CAPWAP messages, client onboarding, client roaming, rogue management, interference detection, CPU-intensive services like mDNS, and so on.

The system load depends on the specific type of deployment and on scaling factors such as the number of APs, client density, client authentication and roaming rates, roaming type, key caching mechanisms, and the applications being used. All of these factors affect system capacity, so it is hard to provide a recommended scale number for your specific deployment up front.

Even if it’s difficult to estimate in advance what the load on your network will be, in system design it is good practice not to run a single box at its maximum capacity, but instead to leave some headroom to handle “rainy day” situations and peaks of utilization.

C9800 design is no different and, generally, Cisco recommends limiting the load to around 80% of the AP and client scale.

For the C9800-80, for example, this means 4800 APs and/or around 50k clients. Does this mean that you cannot have six thousand APs on a single C9800-80? No, not really; Cisco has a lot of successful deployments at maximum scale. The 80% figure is just a recommendation, based on tested and validated numbers, to start planning the design and deployment of a Catalyst wireless network.

The reverse can also happen: you can stress a C9800 multi-CPU system with a much smaller number of APs and clients in certain situations. High CPU issues are not always due to product scale and performance limitations; there are multiple factors to consider, such as client probing and roaming behavior, client application behavior, the nature of the client traffic, and many more. Wi-Fi being an evolving and growing technology, client traffic keeps scaling both vertically and horizontally. It’s always best to cater for anticipated changes and keep your system capacity optimized so there is headroom to handle spikes in utilization.

It is important, then, to monitor the CPU load of the processes and make sure your system operates under the recommended load: if the CPU stays below 70% over a 5-minute interval, you are good; short CPU spikes (under a minute in duration) to 80–90% are absolutely normal.

You can monitor the internal processes with the CLI command:

show processes cpu platform sorted | inc Name|---|wncd

or directly on the main Dashboard page, under CPU & Memory Pressure Graph dashlet:

A screen shot of a graphDescription automatically generated

If your CPU load is constantly higher than 70%, then you may start looking deeper and find ways to optimize the use of the resources on the box. A good starting point would be to look at your site tag design, as explained in the next section.

The Catalyst 9800 Wireless LAN Controller is based on the IOS XE multi-process architecture.

There is a process dedicated to each of the most crucial functions: Mobility and Roaming, Radio Resource Management (RRM), Rogue Management, etc. The main process responsible for AP and client sessions is called the Wireless Network Controller daemon (WNCd); the number of internal WNCd processes varies from platform to platform, as you can see in this table:

Table 1.            Number of WNCd processes per platform

Platform                                    WNCd Instances
EWC (on AP and Catalyst 9k switches)        1
C9800-L                                     1
C9800-CL (small)                            1
C9800-CL (medium)                           3
C9800-40                                    5
C9800-CL (large)                            7
C9800-80                                    8

You can verify the number of WNCd in your platform using the following CLI command:

C9800#sh processes platform | inc wncd

This is different from an AireOS-based WLC, where a single, multi-threaded process handled not only AP and client sessions but all the WLC functions. The advantages of the C9800 multi-process software architecture are multiple:

●      Each process is single threaded, non-blocking

●      There is no single fault domain (e.g. memory separation)

●      Data separation & data externalization per process

●      Easier to scale horizontally by adding multiple WNCd

●      Process patchability

This software architecture has allowed Cisco to introduce important innovations for Catalyst Wireless like In-Service Software Upgrade (ISSU), Software Maintenance Updates (SMU) and many more.

Having a multi-process software architecture means that to best utilize the platform, you would need to make sure that all processes are equally used. Since WNCd handles all the AP and related client sessions, it’s clearly a good starting point and you want to make sure that the AP load is balanced across the different internal processes.

When APs join the C9800, they are distributed among the available WNCds (of course, this applies to the platforms where multiple processes are present). As you can imagine, load balancing APs (and the related clients) among the available WNCd processes results in improved scale and performance, as it better exploits the available resources on the C9800.

AP distribution among the internal processes is based on site tags: APs associated with the same site tag are assigned to, and hence managed by, the same process. The mapping is done when the first AP from a specific site tag joins.

As you design your Cisco Catalyst Wireless network for best performance, it becomes important to understand how to assign APs to site tags and hence to internal processes. Let us start with some general recommendations that you need to keep in mind as you deploy the Catalyst 9800 wireless controller with local mode APs (some specific FlexConnect recommendations are highlighted in a later section):

1.      Use custom site tags and not the default-site-tag, especially when roaming and fast roaming is a requirement.

2.      Assign the same site tag to all the APs in the same roaming domain. Roaming domain is defined as a logical group of APs that share the same RF domain and broadcast the same SSID.

3.      Limit the number of APs you assign to a single site tag (a value of 500 APs per site tag is recommended).

4.      Whenever possible, do not exceed the following maximum number of access points per single site tag as per table below:

Platform                                    Maximum number of APs per site tag*
C9800-80, C9800-CL (medium and large)       1600
C9800-40                                    800
Any other C9800 platform                    Maximum number of APs supported

*These numbers are for local mode APs. For FlexConnect APs and related remote site tags, if seamless roaming is required, the limit is 100 APs per site tag (the same as for AireOS). As of release 17.8.1, the limit has been increased to 300 APs per site tag leveraging the “Pairwise Master Key (PMK) propagate” feature, also called “FlexConnect High Scale Mode”.

5.      If dealing with large deployments or high-density scenarios, it’s recommended to use a number of site tags equal to the number of WNCd processes for that specific platform and to evenly distribute APs among them. If you have more site tags for whatever reason, it’s recommended to keep the number a multiple of the number of WNCds (e.g., site tags = 5, 10, 15, etc. for the 9800-40) and still distribute the APs evenly.

Note:      the recommendations above are just that: recommendations. For example, if you have more than 500 APs in the same site tag, things will still work but you would probably not get the best performance out of your network.   

The first recommendation tells you not to use the default-site-tag and helps improve the way the resources are used internally on the C9800, optimizing for intra-process vs. inter-process communication. By using custom site tags, all the APs that belong to the same site tag will be assigned to the same internal process.

By making the roaming domain match the site tag, as mentioned in the second recommendation, you make sure that most roaming happens within the same process. If using the default-site-tag, the APs would be distributed among the available processes in a round robin fashion, increasing the chances of inter-process communications when clients roam from one AP to the other.

Before release 17.6, assigning the same site tag to all the APs in the same roaming domain is also particularly important if you require optimized fast roaming for applications that are delay sensitive, such as voice over WLAN. “Optimized” here means that the C9800 leverages protocols such as 802.11k/v to pass additional information to the client and assist the roaming process; for example, the list of neighbor APs the client could roam to is provided via the 802.11k neighbor list. When roaming between two APs in different site tags, and hence across WNCd processes, the AP neighbor information was lost, so protocols such as 802.11v and 802.11k that rely on this information were not optimized. This is another reason to assign all the APs in the same roaming domain (where seamless and fast roaming is needed) to the same site tag. This affects only 802.11k/v and doesn’t affect fast and seamless roaming, which is supported across site tags.

Important : This limitation is removed starting release 17.6.1, so clients roaming across site tags can benefit from 802.11k/v. In this release the user will have to manually check if the SSID is enabled on the neighboring APs by making sure that it’s included in the policy tag. Starting release 17.7.1, the check is automatic.

Note:      Talking about roaming support…For APs in local mode (so SSIDs with central association), seamless roaming with 802.11r, Cisco Centralized Key Management and opportunistic key caching (OKC) works across site tags. No limits.

Why not assign all the APs to one single site tag and be done with it? Here is where the third suggestion comes into the picture: For the best performance you should limit the number of APs per site tag and hence per WNCd. By having multiple site tags and limiting the number of APs per site tag, you reduce the chances of overloading a single process. The number Cisco recommends is around 500 APs per site tag. This is just a reference number that can be used for all the different Catalyst 9800 platforms.

Let us be clear: Nothing will break if you assign more than 500 APs per site tag, as long as you stay within the limits that have been tested, and hence are officially supported, as specified in the table shown above.

Note:      500 APs is also the default maximum number of APs that Catalyst Center places in a single site tag. Starting with release 2.2.2, the user can configure custom site tags in Cisco Catalyst Center and hence design according to the specific deployment.

Going beyond those maximum limits (e.g., 800 APs per site tag on a 9800-40) is not recommended, and you will start seeing some undesired performance effects: the number of client roams per second may decrease, and so may the number of authentications per second. Customers may also see syslog events indicating an overload in a WNCd process. These are all effects of overloading a single WNCd process. Imagine that the C9800 is a car’s engine and the WNCd processes are its cylinders: if you drive your car on just one cylinder, the results will not be great, right?

What if you have a large deployment (large hospital, conference center, stadium, big enterprise campus, etc.) which is one big roaming domain? How do you design your site tags? How would you distribute the APs among multiple custom site tags?

First, let’s clarify one important concept: the site tag does not have to coincide with a geographical physical site, even if the name would suggest that. The site tag is a logical group of access points that allows you to assign certain common settings (the ones contained in the AP join profile). It’s also used internally to optimize the processing of AP and client events related to that group of APs.

For a high-density (HD) deployment, where you have a lot of clients, and these clients can roam seamlessly everywhere, in order to optimize the performance of C9800, it’s recommended that you choose the number of site tags according to the specific platform, as listed in the table below:

Platform

Recommended number of site tags

C9800-80

8

C9800-CL (large)

7

C9800-40

5

C9800-CL (medium)

3

Once you have selected the number of custom tags, you also need to evenly distribute APs across these site tags. Again, remember that the site tag doesn’t have to correspond to a physical site, but you would have to create virtual areas where you group APs.

Here are some examples to understand how we can implement these recommendations:

●      You need to design a large venue (i.e., a stadium) with 3000 APs and tens of thousands of clients. Roaming is required everywhere, so this is indeed a large roaming domain. You have selected a C9800-80 to manage this deployment. The recommendation is to identify eight virtual roaming areas (grouping sectors in the stadium, for example) where you know that most roaming will happen and define a site tag for each one. With 3000 APs across eight site tags, that works out to 375 APs per site tag. Of course, it does not have to be a precise cut, but the recommendation is to have an equal distribution of APs and avoid overloading a few site tags, even if that would make sense from a physical location/site point of view. On the other hand, if you have small areas (e.g., the ticketing areas) with few APs, merge them with other APs to get to a site tag size that is close to the recommended one, 375 APs in this case.

●      You have a small campus with three buildings and 600 APs on a C9800-40. Most of the time there is no Wi-Fi coverage (air gap) between the buildings and there is no roaming across them; in this case you can configure three site tags, one per building. This means 200 APs per site tag, which is well within the recommended settings.

●      You have a large campus and multiple buildings with 1200 APs on a C9800-40, and this time roaming must work across the entire campus (e.g., a hospital campus). Since 1200 exceeds the maximum number of APs per site tag, and this is a large roaming domain, it is recommended that you use five site tags (grouping buildings together in five virtual areas). In this case you would have an exceptionally well balanced system with 240 APs per site tag. Remember: seamless roaming is fully supported across site tags; from 17.7, 802.11k/v also works across site tags.

For FlexConnect deployments, the site tag identifies the fast-roaming domain, as client key caching and key distribution only happen within a single Flex site tag. Normally and naturally, you would have a site tag for each remote location where fast roaming is required, so the chances of overloading a single internal process in FlexConnect deployments are much smaller than for local mode.

Here are the FlexConnect specific recommendations when it comes to design your site tags:

●      The default-site-tag is a no-go for Flex deployments where fast roaming is a requirement, and hence the use of custom site tags is always recommended. The reason is that the client key is not distributed among the FlexConnect APs in the default-site-tag. You should configure at least one site tag per Flex site.

●      If support for Fast Seamless Roaming (802.11r, CCKM, OKC) is needed, then the max number of APs per site-tag for a Flex site is 100 (the same as for AireOS). As of release 17.8.1, the limit has been increased to 300 APs per site tag leveraging the “Pairwise Master Key (PMK) propagate” feature, which is disabled by default.

            This can be configured under the FlexConnect profile with the following command:

WLC(config)#wireless profile flex NAME

WLC(config-wireless-flex-profile)#pmk propagate

●      Don't use the same site tag name across multiple FlexConnect sites (this includes the default-site-tag). The C9800 doesn’t know about your physical locations, and there is no point in distributing client keys across APs in different physical locations, as roaming will never happen between them. Also, different site tag names are a requirement to support overlapping client IP addresses across FlexConnect sites for locally switched SSIDs.

●      To save WAN bandwidth and make the software download more efficient for APs in remote sites, it’s recommended that you turn on efficient upgrade under the FlexConnect profile. For each site tag with FlexConnect APs, one AP per model is selected as the master AP and downloads the image from the WLC through the WAN link. Once the master AP has the downloaded image, the APs in that site tag start downloading it from the master AP.

One last consideration about site tag design. What if you are forced to have a mix of large and small site tags and you cannot distribute the APs evenly as recommended? This would be the case where you have a deployment with a campus (with local mode APs) and many small remote sites (with FlexConnect APs). As explained earlier, for FlexConnect every site should be its own site tag, as it defines the fast secure roaming domain, so you don’t have much choice around the number of tags. In this case, to have the best load-balanced system and follow the recommendations for local mode APs, it’s probably best to have two WLCs, one to manage the campus APs and a different one dedicated to the branches, possibly using a 9800-CL to optimize costs.

In the previous sections, you have learned the best practices for designing your site tags to optimize the resources on the Catalyst 9800. This can create an operational burden on the IT team, as deciding the number of site tags, identifying which APs must be mapped to which tag, and then implementing the configuration may require planning and time.

Starting with 17.9.2 and 17.10, a new “load” command under the site tag configuration has been introduced to help further optimize your site tag-based design and simplify your IT operations. Think of the “load” as the processing power quota that you allocate and reserve in the internal process for a certain site tag. The WLC will remember this allocation and keep the designed balance of APs to WNCds across reboots.

Prior to this enhancement, the C9800 had no indication of the size of the site tag, and the AP-to-internal-process load balancing decisions were made considering only the number of site tags and not the actual number of APs and hence the load they could generate. The system still works well if the APs are evenly distributed across the site tags, as recommended in the previous sections.

However, if you have site tags of disparate sizes and the number of site tags is greater than the number of WNCd processes, it is possible to end up with an unbalanced system configuration where some processes are heavily loaded and others are underutilized.

The Enhanced Site Tag-Based Load Balancing feature allows you to configure a site load, thus allowing the system to take better load balancing decisions. The load is configured under the site tag using the following CLI:

C9800(config)#wireless tag site <name>

C9800(config-site-tag)# load <1-1000>

It is recommended to reboot the WLC after configuring the load and after all your site tags are active, meaning they have at least one AP joined. The behavior of the load balancing feature in the controller reboot case is as follows:

●      After you have configured the feature and rebooted the controller, even before any APs join, the load balancing feature retains the site tags that are used actively in persistent memory and load balances them during bootup. The load balancing during bootup occurs in descending order of the configured site load.

●      After you have configured the load balancing feature in a site tag with APs already joined, the load balancing remains unchanged unless all APs, including those not in the site tag, disconnect or the controller reboots.

How do you choose the value for the load parameter? Load is an estimate of the relative WNCd capacity reserved for that site tag and hence for that group of APs and related clients.

All control plane activities contribute to the “load” of the internal processes: client probing, client joining, client authentication, and roaming, but also features like mDNS that require CPU time. The busier the AP, the bigger the “load”. The most common option is to set the load equal to the number of APs in the site tag. This is a good option for office buildings where you estimate that each AP would have a similar number of clients and hence a similar activity level.

If you have a building/area/floor with higher expected activity (e.g., lots of clients joining, leaving, and roaming), such as a conference/training center or cafeteria, then set a higher weighted “load” for that specific site tag. For instance, if 10 APs are present in the conference center area, configure the load to be 20.
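As an illustrative sketch (the site tag names and values below are hypothetical), an office floor with 40 APs and a 10-AP conference area with heavy roaming could be weighted as follows, reusing the load command introduced above:

C9800(config)#wireless tag site Office-Floor-1

C9800(config-site-tag)#load 40

C9800(config-site-tag)#exit

C9800(config)#wireless tag site Conference-Center

C9800(config-site-tag)#load 20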

●      It makes sense to use the load only if the number of site tags configured is greater than the number of WNCd processes.

●      All the site tags need to have the load configured.

●      Configuring the load is recommended for both Local and FlexConnect mode deployments.

●      The configured load is only an estimate. It is used only for site tag load balancing. Specifically, it does not prevent APs or clients from joining or associating.

●      How do you choose the load? For site tags with normal client density and activity, you can use the AP count of the site tag as a good approximation of the site load. Examples of such sites are office floors and buildings. For sites with high client density and roaming load, you can use a higher load configuration than the number of APs. For example, if the number of APs in such a site is 200, you can use a load factor of 300 or 400 to compensate for the higher client load. Examples of such sites include cafeterias, auditoriums, conference center floors, etc.

●      For the AP distribution algorithm to take the load into consideration, and be independent of the AP joining order, configure the load parameter under the site tags and reboot the C9800.

For a site tag to be considered for load balancing, it needs to have at least one joined AP. This information is saved and remembered by the system for subsequent runs.

If you have a new installation, since AP join times can vary, the system waits for an hour from its last boot for APs to come up before saving the active tags and considering them in the calculation. This is the reason why the WLC reboot should be triggered only after at least one hour of uptime.

If the C9800 is not rebooted, the load balancing algorithm is still improved, as it takes the site load into consideration through the configured load parameter; but it is going to depend on the order in which APs join the WLC.

Starting with release 17.12, the RF based Automatic AP Load Balancing feature can improve on the site tag-based load balancing described in the previous sections. Unless properly planned, the site tag-based method may lead to an uneven distribution of APs across the internal instances, which in turn may result in higher memory and CPU usage. Though enhanced by the load command, the site tag-based method may still lead to suboptimal performance if the AP load limit is not correctly configured, or if the customer has decided to put most of the APs in one large site tag.

The RF based Automatic AP Load Balancing feature uses Radio Resource Management (RRM) to automatically group APs and load-balance them across WNCd instances. When this feature is enabled, it forms AP clusters based on the RSSI received from AP neighbor reports. These AP clusters or neighborhoods are further split into sub-neighborhoods and smaller areas. The resulting groups of APs are then distributed evenly across the internal processes. The AP load balancing takes effect only after a controller reboot or through an AP CAPWAP reset triggered by the ap neighborhood load-balance apply command. When the RF based Automatic AP Load Balancing feature is active, it overrides the site tag-based load balancing.

For enabling and configuring RF based Automatic AP Load Balancing, please refer to the configuration guide: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/17-12/config-guide/b_wl_17_12_cg/m_auto-wncd-lb.html

●      This feature is recommended for better load balancing when the number of site tags is greater than the number of WNCd processes for that specific platform.

●      This feature is supported only on APs in Local and FlexConnect mode.

●      For a new deployment, it is still recommended to use the site tag-based method and follow the recommendations to evenly distribute the APs, together with the site tag load command. Why? Using site tags, you can ensure that all the APs of the same site tag go to the same WNCd, which helps in troubleshooting and optimizes for intra-WNCd roaming.

●      For a new or existing deployment, if you are unable to design around site tags because you cannot group APs (for example, APs don’t have a representative name and/or you don't know where they are located), or you do not want to spend time designing site tags, then you can use the default site tag or any named site tag and turn on the RF based Automatic AP Load Balancing feature. Keep in mind that you may see a performance impact compared to an evenly load-balanced system using site tags and load.

●      In an existing deployment, if you have high CPU issues because of an unbalanced system, use the RF based Automatic AP Load Balancing feature instead of redesigning the site tags.

●      Remember the golden rule: if you do not have any CPU load issues despite having an unbalanced system, do not change anything.

●      It’s not recommended to turn on this feature when the overall load on the system is high.

General C9800 Wireless Controller settings

These settings apply to the C9800 wireless controller at a box level.

There are two ways in which you can run a Cisco IOS XE image on a C9800 WLC:

●      Install mode: The install mode uses pre-extracted files from the binary file into the flash in order to boot the controller. The controller uses the packages.conf file that was created during the extraction as a boot variable. Install mode is the default mode.

●      Bundle mode: The system works in bundle mode if the controller boots with the binary image (.bin) as the boot variable. In this mode the controller extracts the .bin file into RAM and runs from there. This mode uses more memory than install mode, since the packages extracted during bootup are copied to RAM.

You can check the mode using this show command:

9800#show version | i Installation mode

Installation mode is INSTALL

Note:      Install mode is the recommended mode to run the Cisco Catalyst 9800 Series wireless controller because it provides the following advantages: support for high-availability features like In-Service Software Upgrade (ISSU), software maintenance upgrade (SMU)/patching (hot and cold), faster boot time, less memory consumption, and Cisco Catalyst Center support for upgrades.

If for some reason the box is in bundle mode, follow these steps to boot in install mode:

1.      Check if you have enough space in flash to download an image:

9800#dir flash:

2.      Clean up old installation files that are not used, to free up space:

9800#install remove inactive

3.      Copy the image to flash, for example, using the TFTP transfer.

9800#copy tftp://<path> flash:

4.      Delete the current boot variable and set it to point to packages.conf. Use the following commands:

9800(config)#no boot system

9800(config)#do write

9800(config)#boot system bootflash:packages.conf

5.      Install the image to flash and then activate and commit the code. This moves the C9800 from bundle mode to install mode. You can do this in one command:

9800#install add file bootflash:<image.bin> activate commit
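After the controller reloads, you can verify that it is now running in install mode and review the installed packages (a quick sanity check; output will vary per release):

9800#show version | include Installation mode

9800#show install summary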

There is only one wireless management interface (WMI) on the C9800, and this is a Layer 3 interface. The WMI terminates all the CAPWAP traffic from APs and is the default source interface for all the control plane traffic generated by the box. It is recommended that you use a Switched Virtual Interface (SVI) as the WMI for all deployments, including Foreign-Anchor for guest traffic. The only exceptions would be the C9800-CL in a public cloud, where it is mandatory to use a Layer 3 port for wireless management, and the embedded wireless controller on Cisco Catalyst 9000 switches, where a loopback interface is recommended.

Note:      The C9800 doesn’t have multiple AP Manager interfaces, as AireOS does. It uses only one interface for CAPWAP termination: the WMI.

Thanks to the new software architecture of the C9800, there are no features that require a box reload to make them effective. This is important for increasing the uptime of the whole wireless network. The only exceptions to this are when changing the licensing level on the box and configuring stateful switchover (SSO) redundancy.

Furthermore, compared with AireOS, the number of functionalities in the C9800 that require shutdown of the wireless network (both 5-GHz and 2.4-GHz networks) in order to apply changes has been reduced as well. It is mainly the radio resource management (RRM) settings that require a shutdown of the wireless network.

When assigning APs to an AP Group in AireOS, the APs would reboot, causing a network outage in the covered area for at least 3 minutes. With the C9800, changing the assignment of APs to policy tags, which is the equivalent of AP Groups in AireOS, only requires a CAPWAP tunnel reset, which takes less than 30 seconds, minimizing the network downtime.

Enabling Network Time Protocol (NTP) is very important for several features. NTP synchronization on controllers is mandatory if you use any of these features: Location, Simple Network Management Protocol (SNMP) v3, access point authentication, or 802.11w Protected Management Frame (PMF). NTP is also very important for serviceability.

To enable the NTP server via the CLI, use this command:

c9800-1(config)#ntp server <IP or dns name>

The NTP server can also be configured via the GUI.


It is possible to specify the source interface for NTP traffic. On the physical appliance, this can be useful to send NTP out of the service port (SP), which is the out-of-band management port. On the 9800 Series physical appliance, the SP is mapped to a separate management Virtual Routing and Forwarding (VRF) instance (Mgmt-intf). To configure this, use the following CLI command:

ntp server vrf Mgmt-intf <ip or dns name>

The C9800 also supports synchronization with NTP using authentication. To enable NTP authentication, use the following commands:

c9800-1(config)#ntp authentication-key 1 hmac-sha2-256 <key value>

c9800-1(config)#ntp authenticate

c9800-1(config)#ntp trusted-key 1

To confirm that the status of the NTP server is synchronized, use the following command:

c9800-1#sh ntp status

Clock is synchronized, stratum 9, reference is 172.16.254.254

For the C9800, all the different form factors have the same base software code. This is important and simplifies customer deployments when there is a mix of physical and virtual appliances, or even wireless controllers embedded in Cisco Catalyst switches and APs (EWC). This means that the user interface is the same and the features are the same. This is true as long as the feature is supported; for example, the 9800 Series wireless controller embedded on the Cisco Catalyst 9000 switches supports only Software-Defined Access (SD-Access) architecture, so only the functionalities related to fabric deployment mode will be supported.

The customer may want to take the configuration from WLC1 and use it on WLC2, performing a “backup and restore” procedure. Here are the recommended steps:

●      Copy the configuration from WLC1 to a text file and upload to a TFTP/FTP server

●      Copy the configuration file onto the startup-config file of WLC2 using the CLI command copy tftp://<server>/config.txt startup-config.

●      Reload the WLC2 box (without saving)

●      If password encryption was enabled in the original configuration, all keys and passwords will have to be reconfigured. Once the keys/passwords are reconfigured, enable password encryption again using the commands below:

key config-key password-encrypt <private-key>

password encryption aes

●      SNMPv3 users are not part of the configuration file, so they will not be copied. Add SNMPv3 users back using the following command:

snmp-server user <username> <group> v3 auth sha <password> priv aes 128 <password>

●      Add the management interface MAC address as the wireless mobility MAC address as a best practice. Since this is a new instance/hardware, the MAC address of the SVI will change. Use the command wireless mobility mac-address <new MAC> (get the MAC from the command show wireless interface summary).

●      Add the token for smart licensing: license smart register idtoken <TOKENID>

There are extra considerations for the 9800-CL, as the virtual appliance doesn’t come with a Manufacturer Installed Certificate. It needs a Self-Signed Certificate (SSC) to terminate the CAPWAP tunnels from the APs. Follow the steps below to generate an SSC for a 9800-CL:

●      Delete the certificates that were copied along with the configuration. To do this, first check the existing certificates using the command show crypto pki trustpoints.

●      Delete the existing certificate authority “WLC_CA”: no crypto pki server WLC_CA

●      Delete existing device certificates:

no crypto pki trustpoint "<hostname>_WLC_TP"

●      Create a new SSC for the management interface using the exec command:

wireless config vwlc-ssc key-size 2048 signature-algo sha256 password 0 <password>

Note:      If the customer imported third-party certificates on their Catalyst 9800, it is important to note that the private keys won't be copied by simply copying the configuration. Therefore, the customer will need to import the certificates again on the new WLC. The same is true for the customer’s webauth pages; these would also not be copied this way.

If you are migrating from AireOS WLC to the Catalyst 9800, the configuration file needs to be translated, as the operating systems are different. The Configuration Migration tool is recommended for doing that. A web-based version can be found at:

https://cway.cisco.com/wlc-config-converter/

Note:      cisco.com credentials are needed to access the configuration tool.


Use the following steps:

1.      Get the AireOS configuration file, either by uploading it via TFTP or by using the “show run-config commands” CLI command, and save it in a text file.

2.      Upload the AireOS configuration file to the tool.

3.      Select the conversion from AireOS to 9800.

4.      Click Run.

            The tool output has four different sections:


            Here is a description of each configuration file:

●      Translated: Contains the supported CLI commands with the translation from the AireOS CLI to the Cisco IOS XE CLI. This is also useful to see how the same configuration is done on the 9800 Series.

●      Unsupported: Contains the CLI commands related to unsupported features (please confirm any unsupported features with your Cisco representative).

●      Not Applicable: Contains the list of CLI commands that are not applicable to Cisco IOS XE because things are done differently on the Catalyst 9800 or because the command is deprecated.

●      Unmapped: Contains commands related to features that are supported but not yet translated by the tool.

5.      Download the translated configuration and edit it as needed; you may need to retype passwords for SSIDs and the RADIUS configuration, and you may need to evaluate the need for SVIs, etc. This file is NOT meant to be blindly copied to the Catalyst 9800.

6.      Copy the configuration to the Catalyst 9800 running-config. We recommend you copy and paste directly in the CLI. Alternatively, you can use the CLI tool in WebUI under Administration > Command Line Interface.

There is also a version of the tool embedded in the C9800 GUI.


The online version at https://cway.cisco.com/wlc-config-converter/ is the recommended one because it is always updated with the latest fixes.

In case of a controller crash, there is enough local storage on the 9800 Series controller to save the file locally, so there is no need to automatically upload it somewhere off-box. The Troubleshooting section of the C9800 GUI lets you easily download the system report file (core dump).


The 9800 Series supports a single file download option to easily collect the most important support data in a simplified way. This will provide a bundle covering crash information, core files, configuration, output of specific CLI commands, etc. It is advisable to always include this file when opening a TAC case, to have a good starting data set.

The support bundle is easy to access from the GUI.


The WebUI uses VTY lines for processing HTTP requests. At times, when multiple connections are open, the default of 15 VTY lines set on the device might get exhausted. Therefore, it is strongly recommended that you increase the number of VTY lines to 50. Use the following configuration commands to do this:

C9800#config t

C9800(config)#line vty 5 50

Another best practice is to configure the service tcp-keepalives to monitor the TCP connection to the box:

C9800(config)#service tcp-keepalives in

C9800(config)#service tcp-keepalives out

Starting with Release 17.3, it is possible to configure HTTP/HTTPS independently for WebUI access and for redirection for Web Authentication SSIDs. To secure access to the box, it is recommended to disable HTTP for WebUI access. For more information on the configuration options, see the “Configuring HTTP and HTTPS Requests for Web Authentication” section in the Web-Based Authentication chapter of the configuration guide.
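As a minimal hardening sketch (verify against your release and your Web Authentication requirements, since webauth redirection settings are controlled separately as described in the guide referenced above), plain HTTP access to the management WebUI can be disabled while keeping HTTPS:

C9800(config)#no ip http server

C9800(config)#ip http secure-server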

The Dashboard page is a dynamic page, with information being updated automatically. This prevents the session idle timeout from kicking in and logging the user out (as happens on all other pages). It is recommended that you enable the Dashboard Session Timeout to prevent this. When the dashboard timeout is turned on, the session idle timeout configured under Administration > Management > HTTP/HTTPS/Netconf/VTY is in effect. When the dashboard timeout is turned off, the session expires after 4 hours.

To enable the Dashboard session timeout, click the settings (gear) icon in the top right corner of any page and toggle this setting.


The latest releases include inline guided assistance to help customers with the GUI configuration. The function is embedded into every page in the lower right corner of the screen. Just look for a light blue vertical tab that says “Guided Assistance” and click on it. If you need to turn it off, you can do so directly from the dashboard preferences (gear icon).


The Cisco Catalyst 9800-CL (CL stands for “cloud”) is the virtual machine form factor that can be deployed on a private or public cloud. There are a few deployment considerations when dealing with the 9800-CL.

When setting up the 9800-CL on a private cloud, using one of the supported hypervisors, it’s important that, if you are using multiple interfaces, these are mapped to different virtual networks/VLANs on the virtual switch side.


In this example, GigabitEthernet1 is mapped to an out-of-band network, GigabitEthernet2 is the main interface for wireless management and client VLANs, so it’s configured as a trunk, and GigabitEthernet3 is used for the redundancy port (RP) and has its own dedicated Layer 2 VLAN. If you are not using a port, you should still map it to a dedicated network.

When configuring the trunk, it’s a best practice to make sure that you allow only the VLANs that are in use.


Finally, the security settings: both Promiscuous mode and Forged Transmits need to be set to Accept on the port group where the 9800-CL is connected. This is needed for both trunk and nontrunk connections.


These security settings can be restricted to the single port group where the 9800-CL is connected, and as long as the VLANs are available only on this port group, these settings will not affect other VMs connected to other port groups. Please bear in mind that within the port group, setting Promiscuous mode to Accept will result in flooding traffic to all the other VMs on the same VLAN, so it’s recommended that you limit the number of VMs per port group.

Note:      The examples above are for ESXi, but the other hypervisors have similar settings and recommendations. Please check the deployment guides for more information.

For the 9800-CL it is recommended that you use the VGA integrated console (the default) and not the serial console.

If you want to shut down the 9800-CL it is recommended that you do it gracefully following this simple procedure:

●      Before you power off the VM from the hypervisor, run the exec command reload pause. This command reloads the box and then pauses, waiting for user input to start.

●      At this point, go ahead and power off the VM.

Pushing configuration via the CLI or GUI may not display errors to the user if some of the settings are not applied correctly. It is always recommended to check for errors by viewing the logs generated by the box. This can be done via the CLI using "show logging" or on the web interface under the Troubleshooting > Syslog section.

For any setting that requires the user to configure an open string (AP name, SSID name, profile and tag names, etc.), the Catalyst 9800 supports a specific list of characters: the printable ASCII characters (ASCII 32-126) without leading or trailing whitespace. The only exception is that a leading space (ASCII character 32) is allowed in the SSID name. A list of the printable ASCII characters can be found here: https://en.wikipedia.org/wiki/ASCII

Quick tip: what if you need to type the character “?” in the CLI? This special character, for example, could be part of a URL that you want to configure in your parameter map. If you try to type this character directly in the CLI, you will see that it is not printed (instead, the CLI lists the available keywords or arguments, depending on the mode you are in). To enter “?” in the CLI, press Ctrl+V and then type “?”.

Note:      Always ensure that SSID and AP names do not exceed 32 characters.

With the Catalyst 9800 Wireless LAN Controller, the focus has been on telemetry. Telemetry works in a "push" model, where the WLC sends relevant information to the server without needing to be queried. The Catalyst 9800 still offers SNMP for legacy purposes. Some information can be exclusive to telemetry, and some of the SNMP object identifiers (OIDs) previously available on AireOS are not yet available on the 9800.

For more information on SNMP on the C9800, please refer to this link: https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/217460-monitor-catalyst-9800-wlc-via-snmp-with.html

If using SNMP to poll different OIDs, the following CLI needs to be configured as a best practice to reduce the possible impact on the C9800 CPU:

C9800(config)#snmp-server subagent cache

With this command the cache will be cleared after 60 seconds; to change the interval use the following CLI:

C9800(config)#snmp-server subagent cache timeout ?

  <1-100>  cache timeout interval (default 60 seconds)

The default should be good for most deployments.

General access point settings

The advantage of the Cisco Catalyst 9800 Series configuration model is that most of the recommended settings that are global in AireOS can be configured on a group of APs in Cisco IOS XE using profiles and tags. This gives you the flexibility to decide which APs will get the settings and choose the appropriate values. Let’s look at the recommended settings.

When configuring access points, always set the primary and secondary (and optionally tertiary) controller names and IP addresses to control the AP selection during the CAPWAP join process. This prevents APs that are close to each other from joining different controllers (the so-called “salt and pepper” scenario), which could affect roaming time. A deterministic assignment of the primary and secondary WLCs makes troubleshooting simpler and provides a more predictable network operation. This can be configured at the AP level via the GUI or the CLI.


On the CLI, use this command:

c9800#ap name <APname> controller primary/secondary <WLCname> <WLC_IP>
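For example, assuming a hypothetical AP name and controller pair:

c9800#ap name AP-Floor1-01 controller primary WLC-Primary 10.10.10.5

c9800#ap name AP-Floor1-01 controller secondary WLC-Secondary 10.10.10.6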

There is an important difference between primary/secondary/tertiary and backup primary/backup secondary:

●      Primary/secondary/tertiary WLCs are configured and saved at the AP level. When the primary is set or changed, the AP will do a CAPWAP reset and join the new configured controller.

●      Backup primary/backup secondary settings are configured at the WLC level. The AP will evaluate the backup WLCs only if it loses connection to the currently joined WLC.

It is important to understand the different behavior between the two types of redundancy controllers:

●      If an AP’s currently joined controller fails, the AP chooses an available controller from the list in this order: primary, secondary, tertiary, primary backup, and secondary backup.

●      AP fallback applies only to the primary controller, not to any of the backup controllers.

Unlike AireOS, where backup WLCs can be configured only at the global level, the Catalyst 9800 allows you to configure them at the AP Join profile level, and hence for a group of APs. On the WebUI, go to Configuration > Tags & Profiles > AP Join.


On the CLI, it’s under the AP profile:

c9800(config)#ap profile <name>

c9800(config-ap-profile)#capwap backup primary <name> <IP>

Access points will generate syslogs about important events for troubleshooting and serviceability. By default, they will use a local broadcast destination (255.255.255.255), to ensure that even when the AP is new out of the box, it is possible to obtain some information about possible problems by doing a local capture. For performance, security, and ease of troubleshooting, it is recommended that you set a unicast destination and store the AP logs for later analysis in case of problems.

To configure the syslog server for all access points that will join the controller, set its IP address in the default AP profile.


On the CLI, it’s under the default AP profile:

c9800-1(config)#ap profile default-ap-profile

c9800-1(config-ap-profile)# syslog host <IP>

The user can also decide to use a custom AP profile and tag to set the syslog server for a group of APs (for example, a different syslog server per location).

Note:      If for some reason you want to disable syslog messages from the AP, set the IP address to 0.0.0.0 in the AP Join profile.

Traditionally, the AP console port used a default baud rate of 9600 bps for all connections, and this was the case for all C9800 IOS XE releases prior to 17.12.1. Starting with IOS XE 17.12.1, the default baud rate for all new APs and factory reset APs is now 115200 bps.

This was done to speed up AP boot times, allowing for shorter wait times when APs need to reload (new AP boot, software upgrade, etc.). APs that joined the controller prior to 17.12.1 will keep the previous 9600 bps baud rate.

This leads to the case where deployments will have APs with one of two baud rates:

1.      9600 bps – all existing APs joined to the C9800 prior to upgrade

2.      115200 bps – all new APs and factory reset APs that join to the C9800 after upgrading to 17.12.1

Because of this, it’s recommended that network admins keep two terminal settings for connecting to the AP console. If the setting for one baud rate does not work, they can easily switch to the other.


If a single baud rate is required, the recommendation is to move all APs to 115200 bps to take advantage of the quicker boot times. There are currently two methods to do so:

1.      Clear the configuration on existing APs to change the baud rate and have one way to console into all APs. However, this requires the APs to have a static tag mapping by MAC address, as the APs will lose their configured name and location, leaving the MAC address as the only persistent information.

2.      Connect to each AP (via console, telnet, or SSH) and set the baud rate to 115200 bps.

AP# config boot baudrate 115200

This can be done manually or automated via the WLAN Poller, an automation tool you can find on the Cisco DevNet site: https://developer.cisco.com/docs/wireless-troubleshooting-tools/#!wlan-poller-wlan-poller

Network controller settings

This section covers the recommended settings for the controller as a network device.

The C9800 wireless controller, like AireOS WLC, is meant to act as a Layer 2 host from a network perspective. This means that it doesn’t participate in Spanning Tree, for example. To speed up network convergence, it is recommended that you enable PortFast or PortFast trunk configuration for the uplinks on the switch where the C9800 is connected.

To avoid unnecessary work by the controller data plane and prevent network loops, it is advisable to configure the trunk links between the WLC and the uplink switch(es) to allow only the required VLANs; specifically, the wireless management interface VLAN and the centrally switched client VLANs. All other VLANs should be pruned from the trunk links.
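Here is a minimal sketch of the corresponding switch-side uplink configuration (interface name and VLAN IDs are examples; on recent switch software the PortFast command may be spanning-tree portfast edge trunk):

interface TenGigabitEthernet1/0/1

 description Uplink to C9800

 switchport mode trunk

 switchport trunk allowed vlan 201,210,211

 spanning-tree portfast trunk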

On the C9800 physical appliances, the service port (SP) is the out-of-band management port; it is the GigabitEthernet0 interface and is mapped to the Mgmt-intf VRF. This means that for traffic to be routed out of this interface, you have to configure a route in this VRF. This can be a default route or a specific route, depending on the network. Here is an example for the default route:

ip route vrf Mgmt-intf 0.0.0.0 0.0.0.0 <gateway>

In addition to WebUI and SSH access, it is possible to source control plane traffic from the SP, but you need to set the source interface, instructing the C9800 to use the interface in the Mgmt-intf VRF.

This is sample configuration for TACACS+; it can be configured either globally:

ip tacacs source-interface GigabitEthernet0/0 vrf Mgmt-intf

or under a specific group server:

aaa group server tacacs+ demo

 server name ISE

 ip vrf forwarding Mgmt-intf

Use the Cisco IOS XE configuration guide for the other protocols.

Note:      As of release 17.6, the following protocols and features are supported through the service port (SP): Cisco Catalyst Center, Cisco Smart Services Manager, Cisco Prime Infrastructure, Telnet, Controller GUI, DNS, File Transfer, GNMI, HTTP/HTTPs, LDAP, Licensing for Smart Licensing feature to communicate with CSSM, Netconf, NetFlow, NTP, RADIUS (including CoA), RESTCONF, SNMP, SSH, SYSLOG, TACACS+.

By default, the Catalyst 9800 forwards ARP traffic by changing the destination MAC from broadcast to unicast. For example, if a wireless client-A sends an ARP packet to another wireless client-B, the Catalyst 9800 will forward the ARP packet using the unicast destination MAC B; client-B will reply and will also learn client-A’s MAC address. This default behavior optimizes the exchange of ARP packets between the two clients.

In Release 17.3, the Catalyst 9800 can be configured to act as a proxy for ARP traffic and respond on behalf of a registered client. The configuration is under the policy profile:

C9800(config)#wireless profile policy <name>

C9800(config-wireless-policy)#ipv4 arp-proxy

This is the recommended setting as it will save battery life on the wireless devices because the WLC will answer ARP on behalf of the device.

In AireOS, enabling DHCP proxy for wireless clients is a best practice. For the C9800, DHCP proxy is not required, as Cisco IOS XE has embedded security features such as Dynamic Host Configuration Protocol (DHCP) snooping, Address Resolution Protocol (ARP) inspection, etc. that don’t require being a proxy for DHCP traffic. So there is not an equivalent setting in the 9800 Series wireless controller.

DHCP bridging is the recommended and default mode of operation for the C9800. This means that the client DHCP traffic gets bridged at the controller in the client VLAN mapped to the SSID or to the client via AAA override. If the DHCP server is not present on the client VLAN (which is usually the case), it’s recommended that you enable the DHCP relay function on the upstream switch. Here is a sample configuration for a Cisco Catalyst 9500 Series Switch acting as default gateway and DHCP relay for the wireless client traffic in VLAN 210:

interface Vlan210

 description c9800-guest-vlan

 ip address 172.16.210.254 255.255.255.0

 ip helper-address 172.16.3.10

DHCP relay can be configured on the C9800 as well, but in that case a Layer 3 VLAN interface (SVI) needs to be configured to source such traffic. You may want to configure DHCP relay on the C9800 for multiple reasons. For example:

●      The wireless team doesn’t have access to the next-hop switch configuration.

●      You want to add option 82 information to the DHCP server.

The recommended way to configure DHCP relay on the Catalyst 9800 is under the “Advanced” tab of the SVI configuration (Configuration > Layer2 > VLAN), where you can also define multiple DHCP servers and the option 82 relay settings.


When using the relay function, the DHCP traffic will be sourced from the IP address of the client SVI and routed out of the interface that matches the destination (IP address of the DHCP server) in the routing table. In other words, the source IP and the IP of the outgoing interface might be different.

There are situations where you want to specify the source interface for the DHCP traffic instead of relying on the routing table, to avoid possible issues in your network. This is the case when the next-hop network device (Layer 3 switch or firewall) is configured with a Reverse Path Forwarding (RPF) check. For example, let’s assume you have the wireless management interface configured on VLAN 201 and the client SVI on VLAN 210, acting as a DHCP relay for the client DHCP traffic. The default route points to the gateway on the wireless management VLAN/subnet. Here is a snippet of the configuration:

interface Vlan201

 description Wireless Management

 ip address 172.16.201.5 255.255.255.0

interface Vlan210

 description Employee-SVI

 ip address 172.16.210.21 255.255.255.0

ip route 0.0.0.0 0.0.0.0 172.16.201.1

The traffic to the DHCP server 172.16.3.10 will be sourced from VLAN 210 (172.16.210.21) as a result of the ip helper-address command. The DHCP packet GIADDR is also set to the same IP. The outgoing interface is then chosen according to the IP routing table lookup, and in this case it would be the wireless management interface (WMI) VLAN.

The uplink switch configured with the RPF check sees a packet coming in on VLAN 201 but sourced from an IP of another subnet (VLAN 210) and will drop the packet.

To avoid this, the first step is to configure a specific source interface for the DHCP packets using the “ip dhcp relay source-interface” command; in this case you want DHCP packets to be sourced from the WMI interface (VLAN 201):

 ip dhcp relay source-interface vlan 201

Note:      To support the command “ip dhcp relay source-interface” in conjunction with option 82 parameters, you need to be using Release 17.3.3 or higher.

When using this command, both the source interface of the DHCP packets and the GIADDR are set to the interface specified in the DHCP relay command (Vlan 201, in this case). This is a problem, as this is not the client VLAN where you want to assign DHCP addresses. How does the DHCP server know how to assign the IP from the right client pool?

When the “ip dhcp relay source-interface” command is used, the C9800 automatically adds the client subnet information in a proprietary suboption 150 of option 82 (called “link selection”), as can be seen in a packet capture.


You need to make sure that the DHCP server used can interpret and act on this information. The recommendation is to change the C9800 configuration to use the standard option 82, suboption 5 to send the link selection information. You can do this by configuring the following global command:

   C9800(config)#ip dhcp compatibility suboption link-selection standard

With this change, a new packet capture shows that the link selection information is now carried in the standard suboption.


What do you have to do on the DHCP server? For Windows Server 2016, you have to create a dummy scope to “authorize” the IP of the relay agent; in our example, this is the IP of VLAN 201, the WMI (172.16.201.5). You have to add the IP to the scope and then exclude it from distribution. Full instructions can be found here:

https://docs.microsoft.com/en-us/windows-server/networking/technologies/dhcp/dhcp-subnet-options   

The controller can provide an internal DHCP server via the Cisco IOS XE software’s built-in functionality. The best practice is to use an external DHCP server, as that is a box dedicated to this function. Nevertheless, if you want to use the internal DHCP server, it has been tested and hence is supported across all platforms for a maximum of 20 percent of the box’s maximum client scale. For example, for a 9800-80 that supports 64,000 clients, the maximum number of DHCP bindings supported is around 14,000. To verify the status of the internal DHCP server:

C9800#show ip dhcp server stat

Memory usage         6840697

Address pools        11

Database agents      0

Automatic bindings   14780

Other important guidelines for the internal DHCP server:

●      The internal server provides DHCP addresses to wireless clients, indirectly connected APs (the C9800 doesn’t support directly attached APs on any model), and DHCP requests that are relayed from APs. When you want to use the internal DHCP server, ensure that you configure an SVI for the client VLAN and set its IP address as the DHCP server’s IP address.

●      When clients use the internal DHCP server of the device, IP addresses are not preserved across reboots. As a result, multiple clients can be assigned to the same IP address. To resolve any IP address conflicts, clients must release their existing IP address and request a new one.
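If you do use the internal DHCP server, a minimal pool definition looks like the following sketch (addresses are examples; an SVI in the client VLAN must also be configured and its IP address used as the DHCP server address, as noted above):

ip dhcp excluded-address 172.16.210.1 172.16.210.10

ip dhcp pool WIRELESS-CLIENTS

 network 172.16.210.0 255.255.255.0

 default-router 172.16.210.254

 dns-server 172.16.3.10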

Related documentation:

https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/dhcp-for-wlans.html

The C9800 has a timeout for each client state (authentication, DHCP address negotiation, WebAuth pending, etc.). For DHCP, the controller has a default timer to allow a client to complete a successful address negotiation. This timeout, called the IP-Learn timeout, is a fixed value of 120 seconds.

The only required IP address for the C9800 wireless controller is the one assigned to the wireless management interface (WMI). This is the interface used for terminating CAPWAP traffic to the AP and to source any other management traffic.

Assigning an IP address to the service port (SP) is optional but remember that the SP on the physical appliance belongs to the Management VRF, so an IP address has to be assigned accordingly. Here is a sample configuration for the SP with a route to connect to the out-of-band network:

interface GigabitEthernet0/0

 description SP_out_of_band

 vrf forwarding Mgmt-intf

 ip address 10.58.55.246 255.255.255.0

 negotiation auto

ip route vrf Mgmt-intf 10.58.0.0 255.255.0.0 10.58.55.254

Recommendations for setting the IP address on the WMI:

●      Use an SVI for the WMI for the 9800 physical appliance and the 9800-CL in a private cloud.

●      For the 9800-CL in a public cloud, you must use a Layer 3 port (it is automatically configured during bootstrap), meaning that there is no support for Sniffer mode AP and Hyperlocation.

●      A loopback interface is used for the Cisco Catalyst 9800 Embedded Wireless Controller on the Cisco Catalyst 9000 switch family.

Cisco recommends using VLAN tagging for the wireless management interface of the WLC. To configure the wireless management traffic to be tagged, make sure there is no native VLAN command under the trunk configuration on the port/LAG. For example:

interface GigabitEthernet2

 switchport trunk allowed vlan 201,210,211

 switchport mode trunk

VLAN 201 is the wireless management interface VLAN and 210 and 211 are the client VLANs. Ensure that the corresponding VLAN is allowed on the switch port as well and is tagged by the trunk (non-native VLAN). In this sample configuration, the assumption is that the native VLAN (by default this is VLAN 1) is not used to carry any traffic.

Note:      This should be done in most scenarios, except for small Embedded Wireless Controller (EWC)-based network deployments, in which all devices (AP, WLC, clients) might be on the same VLAN. This is a simple network, but it also has lower security.

To configure the VLAN for client traffic, go to Configuration > Tags & Profiles > Policy. Under Access Policies you can set the VLAN field. There is an important clarification related to the use of VLAN ID = 1 (and the VLAN name “default”) in the policy profile for the Catalyst 9800.


The behavior is different depending on the AP mode. For an AP in local mode/Flex Central switching:

●      Specifying vlan-name = default, the client is assigned to VLAN 1

●      Using vlan-id 1, a client is assigned to the wireless management VLAN

There is a warning to remind a user of this.

For an AP in FlexConnect local switching mode:

●      Using vlan-id 1, a client is assigned to the FlexConnect native VLAN

By default, if the user does not configure anything under the policy profile, the WLC assigns vlan-id 1 so clients will use the wireless management VLAN in local mode and the AP native VLAN for FlexConnect.

For centrally switched traffic, it is mandatory to configure a Layer 2 VLAN (or a pool of VLANs) mapped to the SSID, but the corresponding Layer 3 interface (SVI) is not needed. This is different from AireOS, in which a dynamic interface (Layer 3 interface and related IP address) is required. The recommendation for C9800 is not to configure an SVI for client VLAN, unless:

●      You need to run DHCP relay on the C9800; this is either because this function cannot be configured on the next hop layer 3 switch (the default gateway for that VLAN) or because you want to add option 82 information (e.g., AP location, AP MAC, etc.) in the DHCP relayed packet.

●      You want to enable mDNS Gateway and are running code earlier than 17.9.1; in 17.9.1 and higher, the mDNS gateway feature no longer needs a client SVI.

Note:      If configuring multiple SVIs on the C9800, it is recommended to configure access control lists (ACLs) to prevent unauthorized communication between specific VLANs. For example, if client VLANs are configured, you should allow only client traffic from the corresponding subnet. Also, wired clients should not be able to connect to the box using the client SVI.

Compared to AireOS, in the C9800 the use of a virtual IP address (IPv4 and IPv6) is limited to Web Authentication; it is specifically needed for the redirect function and to install a Web Authentication certificate and have it trusted. It is recommended that you configure a nonroutable IP address for the virtual interface, ideally not overlapping with the network infrastructure addresses, and that you set both the IPv4 and IPv6 virtual IPs. You may use one of the ranges proposed in RFC 5737 for IPv4, for example the 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24 networks. For IPv6 you may use the prefix 2001:DB8::/32 specified in RFC 3849.

The virtual IP address can be set in the global parameter map; if you go through the Day 0 GUI for the initial setup, it is set to 192.0.2.1 by default for IPv4.
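Via the CLI, the virtual IP addresses are set in the global webauth parameter map; a minimal sketch, using the RFC 5737 and RFC 3849 ranges mentioned above:

C9800(config)#parameter-map type webauth global

C9800(config-params-parameter-map)#virtual-ip ipv4 192.0.2.1

C9800(config-params-parameter-map)#virtual-ip ipv6 2001:DB8::1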


Link aggregation (LAG) mode is the preferred mode of operation, as it provides redundancy and additional network bandwidth. It should be used whenever multiple physical links to the same uplink switch are available. LAG mode is configured via the port channel feature on the C9800, and it doesn’t require a reload of the box to enable it. Here are some important recommendations:

●      When using LAG, make sure all ports of the controller have the same Layer 2 configuration matching the switch side. For example, avoid filtering some VLANs in one port and not the others.

●      For optimal load balancing among the physical ports of the port channel, use the src-dst-mixed-ip-port option. It is important to set the same option on the C9800 controller and the neighbor switch as well:

c9800(config)#port-channel load-balance src-dst-mixed-ip-port

●      On a standalone C9800, both static (mode ON) and dynamic (Link Aggregation Control Protocol [LACP]/Port Aggregation Protocol [PAgP]) port channel negotiation is supported. The mode has to be chosen on all interfaces that participate in the port channel group:

c9800-1(config-if)#channel-group 1 mode ?

  active     Enable LACP unconditionally

  auto       Enable PagP only if a PagP device is detected

  desirable  Enable PagP unconditionally

  on         Enable Etherchannel only

  passive    Enable LACP only if a LACP device is detected

●      On an SSO pair, the port channel has supported static mode (mode ON) since the initial release. LACP is also supported starting with release 17.1.
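As an illustrative sketch of LAG on a standalone appliance (interface names depend on the model, and the port channel, mode, and allowed VLANs must match on the uplink switch):

c9800(config)#interface Port-channel1

c9800(config-if)#switchport mode trunk

c9800(config-if)#switchport trunk allowed vlan 201,210,211

c9800(config-if)#interface TenGigabitEthernet0/0/0

c9800(config-if)#channel-group 1 mode active

c9800(config-if)#interface TenGigabitEthernet0/0/1

c9800(config-if)#channel-group 1 mode active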

A “black hole” VLAN is a specific configuration scenario in which the client VLAN configured on the controller is not forwarded on the trunk to the switch, is not present on the switch, or lacks any default gateway. Any client assigned to this VLAN can’t pass traffic or reach any network destination; the goal is to prevent human configuration errors and reduce the possibility of traffic leaks.

This scenario is targeted for:

●      Guest access or mobility auto-anchor: Configure a black hole VLAN on the foreign level, to ensure that there is no traffic leak at the foreign level and that the only connectivity possible is through the anchor-assigned VLAN.

●      AAA override: This requires all clients to get an assigned VLAN from the RADIUS server, or they can’t reach any network destination.

Network access point settings

This section covers the recommended network settings for the APs.

It is a best practice to place the access points in a different VLAN from the wireless management interface (WMI), and this is usually the case in any production deployment. If for staging or testing purposes you need to configure the APs in the same VLAN as the WMI, it is recommended to limit the number of APs to fewer than 100.

For APs in local and fabric mode, the round-trip latency must not exceed 20 milliseconds (ms) between the access point and the controller. This is the same as in AireOS.

Use PortFast on AP switch ports for APs in local mode, fabric mode, or FlexConnect mode with only centrally switched WLANs. To configure the switch port for PortFast, set the port to be connected as a host port, using the switchport host command or directly with the PortFast command. This allows a faster join process for an AP. There is no risk of loops, as local mode APs never bridge traffic directly between VLANs. The port can be set directly to access mode.
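For a local-mode AP, the switch port configuration can be as simple as this sketch (interface and access VLAN are examples; switchport host is a macro that sets access mode and enables PortFast):

interface GigabitEthernet1/0/10

 description Local-mode AP

 switchport access vlan 301

 switchport host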

Note:      For APs in Flex mode and local switching, the switch port needs to be in trunk mode for most scenarios. For these, use spanning-tree portfast trunk on the switch port.

For APs in FlexConnect mode, when using locally switched WLANs mapped to different VLANs (the AP switch port is in trunk mode), prune or limit the VLANs present on the port to match the AP-configured VLANs.

To optimize the TCP client traffic encapsulation in CAPWAP, it is recommended that you always enable the TCP maximum segment size (MSS) feature, as it can reduce the overall amount of CAPWAP fragmentation, improving overall wireless network performance. The MSS value should be adjusted depending on the traffic type and maximum transmission unit (MTU) of the WLC-to-AP path. In the C9800, TCP MSS adjust is enabled by default, with a value of 1250 bytes. This is considered a good value for most deployments, although it can be further optimized depending on your setup.


On the CLI, it’s under the AP profile (custom or default):

c9800-1(config)#ap profile custom

c9800-1(config-ap-profile)# tcp-adjust-mss ?

  enable  Enable TCP MSS for all Cisco APs

  size    TCP MSS configuration size

Because this is a setting under the AP Join profile in the C9800, you can decide to have different values for different groups of APs or locations.

SSID/WLAN settings

This section gives the SSID/WLAN-related recommendations. In the C9800, these settings are not always applied to the WLAN configuration itself; most of the time the Policy profile is used. In general, security, being the unchangeable part of a WLAN, is configured on the WLAN profile, while other WLAN properties (QoS, VLAN, etc.) are configured on the Policy profile. This approach allows the user to define a common policy and apply it to multiple SSIDs without reconfiguring it every time.

WLANs can operate by “hiding” the SSID name and answering only when a probe request has the explicit SSID included (that is, the client knows the name). By default, the SSID is included in the beacons, and APs will reply to null probe requests, providing the SSID name information even if clients are not preconfigured with it. Hiding the SSID does not provide additional security, as it is always possible to obtain the SSID name by doing simple attacks, and it has secondary side effects, such as slower association for some client types (for example, Apple iOS). Some clients don’t work reliably at all in this mode. The only benefit is that it prevents random association requests from devices trying to connect to it. It is recommended that you enable the broadcast SSID option to have the best client interoperability.

Broadcast SSID is enabled by default on the C9800 controllers.

If you have devices that are still using Cisco Centralized Key Management, it is strongly recommended that you change CCKM validation to 5 seconds to avoid roaming issues when using Cisco based clients (such as 8821 IP phones or Cisco workgroup bridges). Use the following command under the WLAN configuration to set this parameter:

c9800(config-wlan)#security wpa akm cckm timestamp-tolerance 5000

The value 5000 is expressed in milliseconds and is equal to 5 seconds.

A VLAN group is the equivalent of the interface group/VLAN Select feature in AireOS. This feature enables you to use a single WLAN that supports multiple VLANs, corresponding to different DHCP pools, for load balancing. Clients are assigned to one of the configured VLANs using a hash of their MAC address, so the assignment is preserved over time unless there is a VLAN group configuration change. The VLAN group feature also monitors DHCP server responses and automatically stops using VLANs whose clients fail to obtain a DHCP address assignment.

To enable this feature, perform the following steps:

1.      Create a VLAN group and add client VLANs:


2.      Add the VLAN group to the Policy profile:

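On the CLI, the equivalent configuration looks like this sketch (the group name, VLAN list, and policy profile name are examples):

C9800(config)#vlan group Employee-Group vlan-list 210-211

C9800(config)#wireless profile policy Employee-Policy

C9800(config-wireless-policy)#vlan Employee-Group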

If VLAN groups are in use, it is recommended that you enable multicast VLAN to limit multicast on the air to a single copy on a predefined multicast VLAN.

Enable multicast VLAN under the Policy profile.


Knowing the client type can be extremely useful for troubleshooting scenarios, assigning policies per device type, or optimizing the configuration to adapt to them. Local profiling adds an easy way to detect the client types connected to the controller, without any external server dependencies. The controller will parse DHCP or HTTP requests from clients against a known set of client type rules to make a best-fit evaluation of the device type. The information is available on the WLC GUI or through the CLI.

To enable local profiling on a WLAN, you need to modify its associated Policy profile. Before doing so, you need to enable device classification globally on the controller:


After that, client profiling can be enabled in the Policy profile:


Any WLANs associated to this policy profile will have local profiling enabled.
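As a hedged CLI sketch of the same two steps (the policy profile name is hypothetical; verify the exact keywords on your release):

! Enable device classification globally
c9800(config)#device classifier
! Enable DHCP- and HTTP-based local profiling in the Policy profile
c9800(config)#wireless profile policy EMPLOYEE-POLICY
c9800(config-wireless-policy)#dhcp-tlv-caching
c9800(config-wireless-policy)#http-tlv-caching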

Starting with release 17.1, the C9800 supports the Device Analytics feature to enhance the enterprise Wi-Fi experience. Among other things, this feature provides a set of data for analyzing wireless client device behavior. With device profiling enabled on the controller, information is exchanged between the client device and the controller and AP. This data is encrypted using AES-256-CBC to ensure device security. Initially this applied to Apple and Samsung devices; starting with release 17.6, the feature is extended to devices with Intel chipsets (AC9560, AC8561, AX201, AX200, AX1650, AX210, AX211, and AX1675). The C9800 receives additional client information from these devices and can use it to enhance on-box device profiling; the same information is also shared with Cisco DNA Center and displayed in Assurance.

To enable this feature, go to the Advanced tab of WLAN configuration and enable “Advertise Support” and “Advertise PC Analytics Support”, the latter being the one for Intel devices:


Application Visibility and Control  

Application Visibility and Control (AVC) classifies applications using Cisco’s deep packet inspection (DPI) techniques with the Network-Based Application Recognition (NBAR) engine and provides application-level visibility into and control of the Wi-Fi network. After recognizing the applications, the AVC feature allows you to either drop or mark the traffic. Using AVC, the controller can detect more than 1400 applications. AVC enables you to perform real-time analysis and create policies to reduce network congestion, costly network link usage, and infrastructure upgrades. AVC is supported on all C9800 wireless controller platforms. 

Note:      AVC inspection may have a performance impact of up to 30%. It should be avoided on wireless controller setups that are running close to the maximum forwarding capacity of the platform.

On the C9800, AVC (for baseline application utilization) is enabled at the Policy profile level; the Policy profile can then be mapped to the WLAN (through the policy tag) so that AVC gets applied to the SSID. From the GUI, just click the arrow of the available profiles in the left column; once enabled, the profile with AVC will show up in the right column.

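The GUI workflow above attaches a flow monitor to the Policy profile. A minimal CLI sketch, assuming the default "wireless-avc-basic" monitor typically used by the GUI and a hypothetical policy profile name (verify the monitor name on your release):

c9800(config)#wireless profile policy EMPLOYEE-POLICY
c9800(config-wireless-policy)#ipv4 flow monitor wireless-avc-basic input
c9800(config-wireless-policy)#ipv4 flow monitor wireless-avc-basic output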

The 802.11k standard allows clients to request neighbor reports containing information about known neighbor APs that are candidates for roaming. The use of the 802.11k neighbor list can limit the need for active and passive scanning. A common problem that 802.11k helps solve is “sticky” clients, which usually associate with a specific AP and then hold on to that AP strongly, even when significantly better options are available from nearer APs.

The 802.11k feature can be configured directly on the WLAN under the Advanced settings:


It is recommended that you enable 802.11k with dual-band reporting. With dual-band reporting enabled, the client receives a list of the best 2.4- and 5-GHz APs upon a directed request from the client. The client most likely looks at the top of the list for an AP on the same channel and then on the same band as one on which the client is currently operating. This logic reduces scan times and saves battery power.

Note:      Do not enable the dual-list option if using single-band clients or for deployment scenarios that use devices primarily configured for 5 GHz.

Note:      802.11k may cause problems on some legacy devices that react incorrectly to unknown information elements. Most devices will ignore 802.11k information, even if they do not support it, but for some it may lead to disconnections or failure to associate. These are corner cases, but it is advisable to test before enabling this option.
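On the CLI, the equivalent WLAN-level settings can be sketched as follows (the WLAN name and ID are hypothetical):

c9800(config)#wlan employee 1 employee

c9800(config-wlan)#assisted-roaming neighbor-list

c9800(config-wlan)#assisted-roaming dual-list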

In the C9800, the Web Authentication parameters are under the parameter map, so that’s where you enable the Sleeping Client feature and the timeout. Navigate to Configuration > Service > Webauth and edit the default parameter map or create a new one and set the Sleeping Client status and timeout.


The parameter map is then associated to the WLAN profile under the Security > Layer 3 tab.

The sleeping timer becomes effective after the idle timeout. If using the Sleeping Client feature for Web Authentication, ensure that your idle timeout is lower than the session timeout, to prevent incorrect client deletion.
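A hedged CLI sketch of the same parameter-map settings, assuming the global parameter map and a 12-hour sleeping-client timeout (the value is in minutes; pick what fits your guest policy):

c9800(config)#parameter-map type webauth global

c9800(config-params-parameter-map)#sleeping-client timeout 720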

There are some client timers that need to be considered. The C9800 offers flexibility by configuring these timers under the Policy profile, so the same SSID could have different values according to the deployment requirements. Client timers are under the Policy Profile > Advanced tab:


These are the recommended values:

●      Session timeout = 28800 seconds (8h) is the recommended value for all SSIDs and policy profiles.

Note:      In AireOS, a session timeout that is set to 0 (zero) means the maximum possible timeout. In the C9800 for releases before 17.4.1, it actually means “no session timeout,” so if you use the same setting as in AireOS, every roam on a C9800 network will be a slow roam and require a full reauthentication.

●      Starting with Release 17.4.1, for WLANs configured for 802.1X authentication, if the user configures any value between 0 (inclusive) and 300 seconds, the session timeout is automatically set to 86400 seconds (24 hours), which is the maximum supported value.

●      Set the per-WLAN user idle timeout to 300 seconds (5 minutes). This is especially important in high-density deployments, such as stadiums, conference centers, and universities, where you have a lot of clients. With more and more devices using random MAC addresses (also known as locally administered addresses), a longer idle timeout would force the AP to keep these random MAC entries and may cause the AP to reject new client associations once the maximum station count is reached. A lower idle timeout also avoids large accounting updates being sent to the AAA server.

Note:      In scenarios where clients move in and out of coverage areas, or where the client is battery operated and may go to sleep frequently, consider increasing the idle timeout to, for example, 3600 seconds (60 minutes) to reduce the likelihood of client deletion. The exclusion timeout should be enabled, normally set to 60 seconds. If a change in idle timeout is required, ensure that the EAP broadcast key interval is higher than the idle timeout, to avoid "bulk" client deletions that could overload your AAA server. You can change the broadcast key rotation timer in the Configuration > Security > Advanced EAP section. A CLI sketch of the recommended timer values is shown below.
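Sketching the recommended timer values in CLI form under a hypothetical policy profile (adjust per deployment as discussed above):

c9800(config)#wireless profile policy EMPLOYEE-POLICY
c9800(config-wireless-policy)#session-timeout 28800
c9800(config-wireless-policy)#idle-timeout 300
c9800(config-wireless-policy)#exclusionlist timeout 60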

For a (guest) SSID to be tunneled from Foreign to an Anchor WLC, you must configure the policy profile accordingly: On the Foreign, you select the Anchor IP under the Policy Profile > Mobility tab and on the Anchor WLC you enable the Export Anchor functionality under the same tab, as shown here:


The moment you enable the setting above, the same profile cannot be associated to a WLAN/SSID that needs to be broadcast on APs that are joined to the Anchor controller. This scenario doesn't happen very often, as the Anchor WLC is usually in the DMZ, dedicated to tunneled traffic, and doesn't have access points locally registered. But if this is the case, and you want the same SSID that is defined on the Foreign to also be broadcast on the Anchor, then you need to define another policy profile on the Anchor WLC, with a different name than the one with "Export Anchor" enabled, and map that policy profile to the SSID in the policy tag assigned to the local APs.
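The corresponding CLI can be sketched as follows, with a hypothetical policy profile name and anchor IP address:

! On the Foreign WLC: point the profile to the Anchor
c9800-foreign(config)#wireless profile policy GUEST-POLICY
c9800-foreign(config-wireless-policy)#mobility anchor 10.10.20.5
! On the Anchor WLC: enable Export Anchor on the same profile
c9800-anchor(config)#wireless profile policy GUEST-POLICY
c9800-anchor(config-wireless-policy)#mobility anchor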

A passive client is a client that does not send DHCP or ARP packets after authentication is complete. In other words, it's a client that doesn't talk unless it is talked to (hence, passive). The typical use case is a printer that is configured with a static IP address and sits idle.

For this type of client to become operational and be able to receive and then send traffic, you need to configure the Catalyst 9800 with the following settings: Under the policy profile you need to enable the passive-client feature, which basically instructs the WLC to disable the IP learn timeout that would prevent the client from going to RUN state:

wireless profile policy <policy-name>

  passive-client

If the traffic is centrally switched (local mode or FlexConnect central switching deployment), you also need to enable ARP broadcast on the client VLAN: 

vlan configuration <vlan-id>

  arp broadcast

If the traffic is locally switched with the AP in FlexConnect mode, then you need to disable ARP proxy under the Flex profile, so that the ARP traffic can reach the passive client.

wireless profile flex <flex-policy-name>

  no arp-caching

If your Flex deployment also requires overlapping IP addresses across Flex sites (site tags), then you need an additional command under the policy profile:

  no ip mac-binding

Consider that “no ip mac-binding” disables IP theft detection for passive clients with static IP addresses.

A third-party workgroup bridge (WGB) is a network device that allows you to connect wired clients behind it and bridge them onto a wireless network. Differently from a Cisco WGB, a third-party WGB does not perform MAC/IP address registration to the WLC for its clients. This means that multiple wired devices with different IP addresses will be registered with the same MAC address, the one of the WGB itself. Usually this would be considered IP theft, and hence the clients would not be allowed to connect. In order for the C9800 to support a third-party WGB and the wired devices behind it, you need to add the following command under the policy profile configuration:

  no ip mac-binding

This command disables IP device tracking on the controller. It is supported for all modes (Local, FlexConnect, Fabric) starting with 17.7.1.

What about older versions? If you are running 17.3.4 or a later release of the 17.3.x train, you should instead configure the “passive-client” command under the policy profile in order to support third-party WGBs, because the “no ip mac-binding” command is not supported in the 17.3.x train. If you are running 17.4.1, 17.5.1, or the 17.6.x train, then you need to add both “no ip mac-binding” and “passive-client” under the policy profile to support third-party WGBs in local/centralized mode.

The above settings disable the client device tracking feature and allow multiple clients behind the WGB, each with a different IP address, to connect using the same MAC address. If the client traffic goes through the WLC (that is, in Local mode or FlexConnect central switching deployments), then you also need to enable ARP broadcast on the client VLAN. This is done with the following commands:

C9800(config)#vlan configuration <vlan ID>

C9800(config-vlan-config)#arp broadcast

Security settings

The following sections address best practices for security.

A trustpoint is a certificate authority (CA) that you trust, and it is called a trustpoint because you implicitly trust this authority. Public Key Infrastructure (PKI) provides certificate management in the C9800. When you trust a given self-signed certificate (SSC), the PKI system will automatically trust any other certificates signed with that trusted certificate. This is used for providing certificate management for various functions and protocols such as Datagram Transport Layer Security (DTLS), HTTPS, Secure Shell (SSH), Secure Sockets Layer (SSL), and so on. Trustpoints are used on the C9800 for multiple functions:

●      AP join (DTLS tunnel)

●      HTTPS connection (GUI)

●      WebAuth redirection

●      Mobility tunnel

Let’s examine these one by one. Trustpoint for AP join secures the connection between WLC and AP. You can view this in the CLI by using the following command:

C9800-1#show wireless management trustpoint

All physical appliances use a Manufacturer Installed Certificate (MIC) by default. All virtual appliances use an SSC:


If you have issues with APs joining, the trustpoint is probably the first thing to check, and it's recommended that you follow these steps:

●      show wireless management trustpoint: verify if the trustpoint is set

●      If not there:

◦     On the physical appliance simply reassign the MIC by using the following commands:

c9800(config)#no wireless management trustpoint

c9800(config)#wireless management trustpoint CISCO_IDEVID_SUDI 

◦     On the virtual appliance you can generate a wireless trustpoint using the internal script in exec mode:

C9800#wireless config vwlc-ssc key-size 2048 signature-algo sha256 password 0 <password>

Note:      This command needs to be run at the exec prompt (not in config mode).

◦     Validate the wireless configuration using the following exec command:

c9800#wireless config validate

It’s recommended that you statically assign the trustpoint used for HTTPS GUI access:

1.      For the 9800-CL, identify the IOS-Self-Signed-Certificate using the show crypto pki trustpoint command or GUI:


If this certificate is not present or is corrupted, you can generate it again by restarting the HTTPS process with the configuration commands no ip http secure-server followed by ip http secure-server.

            For the appliance you can use the Secure Unique Device Identification (SUDI) certificate.

2.      Assign the certificate to HTTPS (shown for both VM and appliance):


            And the corresponding CLI command:

c9800(config)#ip http secure-trustpoint <name>

3.      Verify the correct assignment (the example below is for the 9800-CL):

c9800#sh ip http server secure status

HTTP secure server status: Enabled

HTTP secure server trustpoint: TP-self-signed-605569762

For WebAuth, you need a trustpoint for the HTTPS redirection. Again, the best practice is to assign it statically to the process; this can be done under the global parameter map (shown for the 9800-CL):


The same settings on the CLI are made as follows:

parameter-map type webauth global

 type webauth

 virtual-ip ipv4 192.0.2.1

 virtual-ip ipv6 FD00::0:2:1

 trustpoint TP-self-signed-605569762

Mobility tunnel uses CAPWAP and encrypts the control plane messaging using DTLS by default. WLC uses Wireless Management Trustpoint (AP Trustpoint) to establish this tunnel, so you don’t have to do anything special for this.

Catalyst Center pushes its own self-signed certificate to the managed devices; the default certificate is ‘sdn-network-infra-iwan’. When the Catalyst 9800 has more than one certificate configured on the box (for example, the self-generated trustpoint and the one pushed by Catalyst Center), it is strongly recommended to specify the certificate to be used for HTTPS access to the device. Not doing this may result in the Catalyst 9800 picking the wrong one and breaking access to the graphical interface. As mentioned in the paragraph above, the way to do this is with the CLI command:

c9800(config)#ip http secure-trustpoint <trustpoint-name>

or in the GUI going to the Administration > Management > HTTP/HTTPS/Netconf page and then selecting the specific certificate in the “HTTP Trust Point Configuration” section.

You must enforce strong passwords. Password policies allow the enforcement of strong password checks on newly created passwords for the management users of the controller and the access points. Keep the following in mind:

●      When the controller is upgraded from an older version, all the old passwords are maintained, even if they are weak. If strong password checks are enabled after the upgrade, they are enforced only from that point on; the strength of previously configured passwords is not checked or altered.

●      The settings made in the Password policy page affect both the local management users and the access point user configuration.

On the C9800 wireless controller, the Password Strength and Management for Common Criteria feature is used to specify password policies and security mechanisms for storing, retrieving, and providing rules to specify user passwords.

For local users, the user profile and the password information with the key parameters are stored on the Cisco device, and this profile is used for local authentication of users. The user can be an administrator (terminal access) or a network user (for example, Point-to-Point Protocol [PPP] users being authenticated for network access).

For remote users, where the user profile information is stored in a remote server, a third-party AAA server may be used for providing AAA services, both for administrative and network access.

To configure a Password policy, go to Configuration > Security > AAA and define a policy for your password:

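On the CLI, the Common Criteria password policy can be sketched as follows (the policy name and thresholds are examples; tune them to your security baseline):

c9800(config)#aaa new-model
c9800(config)#aaa common-criteria policy PWD-POLICY
c9800(config-cc-policy)#min-length 10
c9800(config-cc-policy)#upper-case 1
c9800(config-cc-policy)#lower-case 1
c9800(config-cc-policy)#numeric-count 1
c9800(config-cc-policy)#special-case 1
c9800(config-cc-policy)#exit
! Apply the policy when creating a local management user
c9800(config)#username netadmin common-criteria-policy PWD-POLICY password 0 <strong-password>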

The user login policy allows you to limit the number of concurrent logins by different devices using the same user credentials. If you want this control for security reasons, configure a value greater than the default of 0 (unlimited logins). But be aware that this could impact network devices that share the same username and password, for example, wireless phones using the same user profile for their wireless connection.

Configure user login policies by entering this command:

C9800(config)# wireless client max-user-login ?

<0-8> Maximum number of login sessions for a single user, 0-8 (0=Unlimited)

Verify the user login policies by entering this command:

C9800# show run | include max-user-login

Cisco IOS XE allows you to encrypt all the passwords used on the box. This includes user passwords but also SSID passwords, for example. To use encryption, first define an encryption key:

c9800-1(config)#key config-key password-encrypt <key>

and then use the following command:

c9800-1(config)#password encryption aes

This is recommended for protecting your password information.

Note:      On the C9800, once the passwords are encrypted there is no mechanism to decrypt them, as a security best practice. The only way to recover would be to reconfigure the passwords.

The Management via Wireless feature allows operators to monitor and configure the WLC using wireless clients connected to the wireless controller network. Management via wireless is disabled by default and should be kept disabled if security is a concern. To verify the setting on the GUI, go to Configuration > Wireless > Wireless Global:

Disable Management via Wireless

On the CLI, type:

C9800(config)#no wireless mgmt-via-wireless

Cisco Secure Development Lifecycle (SDL) is a repeatable and measurable process designed to increase Cisco product resiliency and trustworthiness. Within SDL, the Cisco Product Security Baseline (PSB) has mandated the disabling of console access to access points via the default username and password (Cisco/Cisco). Starting with release 16.12.2s, the user must configure the access point credentials before being allowed to use the console, Telnet, or SSH. This is an enforced best practice for security reasons.

To define the custom credentials, go to the AP Join profile:

Default AP console username and password

If the username and password are changed on the default AP Join profile, they are automatically assigned to every AP. Then, using custom Join profiles, you can even have different credentials for different groups of APs.
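The same credentials can be sketched on the CLI under an AP Join profile (the profile name and credentials are placeholders):

c9800(config)#ap profile default-ap-profile

c9800(config-ap-profile)#mgmtuser username apadmin password 0 <password> secret 0 <secret>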

For increased security, configure 802.1X authentication between the AP and the Cisco switch. The AP acts as an 802.1X supplicant and is authenticated by the switch using EAP-FAST, EAP-PEAP, or EAP-TLS (Extensible Authentication Protocol [EAP] Flexible Authentication via Secure Tunneling [FAST], Protected EAP [PEAP], or Transport Layer Security [TLS]). This is configurable under the AP Join profile settings:


The new configuration model makes this feature very flexible: the AP 802.1X setting is no longer global but can be configured for a specific group of APs (those assigned to a certain AP profile and site tag). The 802.1X AP feature is supported across all supported APs.
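On the controller side, the supplicant credentials and EAP method live in the AP Join profile; a hedged sketch (the username and password are placeholders; verify the exact keywords on your release):

c9800(config)#ap profile default-ap-profile

c9800(config-ap-profile)#dot1x eap-type eap-fast

c9800(config-ap-profile)#dot1x username ap-dot1x-user password 0 <password>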

The following is a sample configuration to enable 802.1X authentication on a switch port:

Switch# configure terminal

Switch(config)# dot1x system-auth-control

Switch(config)# aaa new-model

Switch(config)# aaa authentication dot1x default group radius

Switch(config)#

Switch(config)# interface gigabitethernet2/1

Switch(config-if)# switchport mode access

Switch(config-if)# dot1x pae authenticator

Switch(config-if)# dot1x port-control auto

Switch(config-if)# end

For increased security, confirm that HTTPS is enabled and HTTP is disabled for management access (these are the default settings):


An SSC trustpoint for HTTPS will automatically be created at boot time when the system enables the secure web server process, but it’s not explicitly assigned for HTTPS. It’s recommended that you assign it explicitly, either via the GUI as shown above or via the CLI with the following command:

c9800-1(config)#ip http secure-trustpoint <trustpointname>

After you have assigned it, it will show up in the configuration:

c9800-1#sh ip http server status 

HTTP server active session modules: ALL

HTTP secure server capability: Present

HTTP secure server port: 443

HTTP secure server ciphersuite:  aes-128-cbc-sha dhe-aes-128-cbc-sha

        ecdhe-rsa-aes-128-cbc-sha rsa-aes-cbc-sha2 rsa-aes-gcm-sha2

        dhe-aes-cbc-sha2 dhe-aes-gcm-sha2 ecdhe-rsa-aes-cbc-sha2

        ecdhe-rsa-aes-gcm-sha2 ecdhe-ecdsa-aes-gcm-sha2

HTTP secure server TLS version:  TLSv1.2 TLSv1.1

HTTP secure server trustpoint: c9800-1_WLC_TP

HTTP secure server peer validation trustpoint:

HTTP secure server ECDHE curve: secp256r1

HTTP secure server active session modules: ALL

Via the CLI, you can also decide to define your own TLS version:

c9800-1(config)#ip http tls-version ?

  TLSv1.0  Set TLSv1.0 version Only

  TLSv1.1  Set TLSv1.1 version Only

  TLSv1.2  Set TLSv1.2 version Only

and cipher suite:

c9800-1(config)#ip http secure-ciphersuite ?

  3des-ede-cbc-sha            Encryption type tls_rsa_with_3des_ede_cbc_sha ciphersuite

  aes-128-cbc-sha             Encryption type tls_rsa_with_aes_cbc_128_sha ciphersuite

  aes-256-cbc-sha             Encryption type tls_rsa_with_aes_cbc_256_sha ciphersuite

  dhe-aes-128-cbc-sha         Encryption type tls_dhe_rsa_with_aes_128_cbc_sha ciphersuite

  dhe-aes-cbc-sha2            Encryption type tls_dhe_rsa_with_aes_cbc_sha2(TLS1.2 & above) ciphersuite

  dhe-aes-gcm-sha2            Encryption type tls_dhe_rsa_with_aes_gcm_sha2(TLS1.2 & above) ciphersuite

  ecdhe-ecdsa-aes-gcm-sha2    Encryption type tls_ecdhe_ecdsa_aes_gcm_sha2 (TLS1.2 & above) ciphersuite

  ecdhe-rsa-3des-ede-cbc-sha  Encryption type tls_ecdhe_rsa_3des_ede_cbc_sha ciphersuite

  ecdhe-rsa-aes-128-cbc-sha   Encryption type tls_ecdhe_rsa_with_aes_128_cbc_sha ciphersuite

  ecdhe-rsa-aes-cbc-sha2      Encryption type tls_ecdhe_rsa_aes_cbc_sha2(TLS1.2 & above) ciphersuite

  ecdhe-rsa-aes-gcm-sha2      Encryption type tls_ecdhe_rsa_aes_gcm_sha2(TLS1.2 & above) ciphersuite

  rsa-aes-cbc-sha2            Encryption type tls_rsa_with_aes_cbc_sha2(TLS1.2 & above) ciphersuite

  rsa-aes-gcm-sha2            Encryption type tls_rsa_with_aes_gcm_sha2(TLS1.2 & above) ciphersuite

As with secure web access, confirm that SSH is enabled and Telnet is disabled to the controller for better security. You can confirm this by clicking View VTY Options under Administration > Device:

Secure SSH/Telnet

As with any other Cisco IOS XE box, you would follow the same configuration to enable or disable Telnet and SSH. This is easily done in the GUI:

Secure SSH/Telnet

For more details, see the Secure Shell configuration guide: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/secure-shell.html#ID34

802.11r is the IEEE standard for fast roaming, in which the initial authentication handshake with the target AP (that is, the next AP that the client intends to connect to) is done even before the client associates to the target AP. This technique is called Fast Transition (FT).

Keep in mind that devices without 11r support cannot join an SSID where “FT only” is configured.

To use the 802.11r functionality on a wireless network with different types of devices, you would otherwise need to create one WLAN with FT enabled and a separate WLAN with FT disabled to allow the non-11r devices to join the network. This is not very practical, so Cisco worked with device ecosystem partners such as Apple and Samsung to support Adaptive FT, which allows FT-capable and non-FT-capable devices to share the same SSID. In the C9800, Adaptive FT is enabled by default, and it's the recommended setting when you have Apple and Samsung devices in your network.

Note:      Adaptive Fast Transition cannot be used in combination with WPA3.

The reality is that in a mixed-client network, some non-FT clients may experience issues in connecting to a WLAN with Adaptive FT, so the recommendation from Cisco is to configure a single WLAN with “802.11r mixed mode”, to allow for compatibility between 802.11r and non-802.11r clients: Set Fast Transition to enabled and select both FT and non-FT Authentication and Key Management (AKM) modes. This is called “802.11r mixed mode” as it allows clients to choose the AKM with or without 802.11r depending on their capability. Below is a configuration example for WPA/WPA2 security and 802.1x AKM:


For best client interoperability, it’s recommended to keep the “over the DS” setting disabled.
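A CLI sketch of the "802.11r mixed mode" settings for a WPA2/802.1X WLAN (the WLAN name and ID are hypothetical; the WLAN must be disabled while changing security settings):

c9800(config)#wlan employee 1 employee
c9800(config-wlan)#shutdown
c9800(config-wlan)#security ft
c9800(config-wlan)#no security ft over-the-ds
c9800(config-wlan)#security wpa akm dot1x
c9800(config-wlan)#security wpa akm ft dot1x
c9800(config-wlan)#no shutdown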

Using 802.11r has benefits beyond faster roaming: because clients can roam securely without performing a full authentication at each AP change, the overall authentication load on the AAA server is also reduced.

To enhance security, Cisco recommends that all clients obtain their IP addresses from a DHCP server. The DHCP Required option in the Policy profile settings allows you to force clients to request or renew a DHCP address every time they associate to the WLAN before they are allowed to send or receive other traffic in the network. From a security standpoint, this allows for more strict control over the IP addresses in use.

But you need to analyze this setting carefully, as it might have an effect on the total time, during roaming, before traffic is allowed to pass again. Additionally, it might affect some client implementations that do not renew the DHCP address until the lease time expires. This depends on the client type; for example, Cisco 8821 IP phones might have voice problems during roaming if this option is enabled, as the controller does not allow voice or signaling traffic to pass until the DHCP phase is completed. Another example may include Android and some Linux distributions that renew the DHCP address only halfway through the lease time, but not on roaming. This may be a problem if the client entry expires. Some third-party printer servers might also be affected.

In general, it is a good idea not to use this option if the WLAN has non-Windows clients. This is because stricter controls might cause connectivity issues based on how the DHCP client side is implemented.

The option is under the Policy profile, which again gives flexibility to use the setting for a certain group of APs, even when broadcasting the same SSID/WLAN:


Note:      Never enable DHCP Required for a WLAN supporting voice or video services, or when the wireless devices do conservative DHCP renewal on roaming.
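If you do decide to enforce it, the setting maps to a single policy-profile command; a sketch with a hypothetical profile name:

c9800(config)#wireless profile policy EMPLOYEE-POLICY

c9800(config-wireless-policy)#ipv4 dhcp required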

Aironet IE is a Cisco proprietary attribute used by Cisco devices for better connectivity and troubleshooting. It contains information such as the access point name, load, and number of associated clients in the beacon and probe responses of the WLAN that are sent by the AP. It’s used by some site survey tools to get more information from the network and also by Cisco Client Extensions clients to choose the best AP with which to associate.

This setting is recommended only when using Cisco voice devices (8821 or 7925 IP phones, etc.) or Cisco workgroup bridge devices that can take advantage of it. For example, Cisco Centralized Key Management requires Aironet IE to be enabled.

It can also be useful when performing a site survey, as the additional information can be captured by the survey tool. But this setting can create issues with non-Cisco clients, so the recommendation is to test it first in your environment and then decide based on your client devices. By default, it is turned off.

Aironet IE

Device# conf t

Device(config)# wlan <profile-name> <wlan-id> <ssid>

Device(config-wlan)# no ccx aironet-iesupport

When a user fails to authenticate, the controller can exclude the client. The client cannot connect to the network until the exclusion timer expires or is manually overridden by the administrator. This feature can prevent authentication server problems due to high load, caused by intentional or inadvertent client security misconfiguration. It is advisable to always have client exclusion configured on all WLANs. Client exclusion can act as a protective mechanism for the AAA servers, as it will stop authentication request floods that could be triggered by misconfigured clients. Exclusion detects authentication attempts made by a single device. When the device exceeds a maximum number of failures, that MAC address is not allowed to associate any longer. The C9800 wireless controller excludes clients when any of the following conditions are met:

●      Five consecutive 802.11 association failures

●      Three consecutive 802.1X authentication failures

●      IP theft or IP reuse, when the IP address obtained by the client is already assigned to another device

●      Three consecutive Web Authentication failures

These are configurable at the global protection policies level:


It is possible to configure how long a client remains excluded, and exclusion can be enabled or disabled at the Policy profile level:

Client exclusion
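The global exclusion triggers and the per-profile exclusion timer can also be sketched on the CLI (the values match the recommendations above; the profile name is hypothetical):

! Global exclusion triggers (enabled by default)
c9800(config)#wireless wps client-exclusion dot1x
c9800(config)#wireless wps client-exclusion dot11-assoc
c9800(config)#wireless wps client-exclusion ip-theft
c9800(config)#wireless wps client-exclusion web-auth
! Per-policy-profile exclusion timer
c9800(config)#wireless profile policy EMPLOYEE-POLICY
c9800(config-wireless-policy)#exclusionlist timeout 60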

Peer-to-peer (P2P) blocking is a per-WLAN setting, and each client inherits the P2P blocking setting of the WLAN to which it is associated. It enables you to have more control over how traffic is directed. For example, you can choose to have traffic bridged locally within the controller, dropped by the controller, or forwarded to the upstream switch in the client VLAN.

This setting can prevent a client from attacking another client connected to the same WLAN, but keep in mind that the drop option will break any application that communicates directly between clients, such as chat or voice services. It makes sense to use P2P blocking on a guest SSID, where you only want clients to reach the Internet.

The setting is enabled in the WLAN profile:

Peer-to-peer blocking

Disable this feature for WLANs supporting voice or video services, or for any scenario where direct client-to-client communication is required.

Note:      In FlexConnect mode with local switching, as traffic is not going through the controller, P2P blocking is applied only to traffic from clients connected to the same AP. It will not apply to inter-AP traffic. Similarly, in SD-Access mode, this setting really has no effect, as the client traffic is always sent to the fabric edge switch for policy to be applied.
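On the CLI, peer-to-peer blocking is a per-WLAN command; a sketch for a hypothetical guest WLAN using the drop action:

c9800(config)#wlan guest 2 guest
c9800(config-wlan)#shutdown
c9800(config-wlan)#peer-blocking drop
c9800(config-wlan)#no shutdown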

Local EAP is an authentication method that allows users and wireless clients to be authenticated locally on the controller instead of using a RADIUS server. Using local EAP in an enterprise production environment is not recommended for scalability reasons.

To check if a WLAN is configured to use local EAP, look under the AAA settings:

Local EAP

If you do want to enable it, click the checkbox, but first you need to create a Local EAP profile that establishes which EAP protocols to use. In the case shown below, it's configured for EAP-FAST:

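A hedged sketch of a minimal Local EAP setup for EAP-FAST (the EAP profile and WLAN names are placeholders; the AAA method lists must point to the local database):

c9800(config)#aaa new-model
c9800(config)#aaa authentication dot1x default local
c9800(config)#aaa authorization credential-download default local
c9800(config)#eap profile LOCAL-EAP
c9800(config-eap-profile)#method fast
c9800(config-eap-profile)#exit
c9800(config)#wlan employee 1 employee
c9800(config-wlan)#security dot1x authentication-list default
c9800(config-wlan)#local-auth LOCAL-EAP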

To avoid any possible errors that could lead to clients being assigned to the WLC’s wireless management VLAN, it is advisable not to configure any policy profile to use the wireless management VLAN, so that the related SSID will not have traffic forwarded to the management subnet.

In the scenario of an auto-anchored WLAN, in which the foreign controller forwards all traffic to the anchor, it is still recommended that you set the Policy profile on the foreign controller to a "dummy" VLAN, so that any traffic that doesn't reach the anchor controller will be black-holed. This is also important if you have defined the wireless management interface as a Layer 3 port, meaning a configuration like this:

interface GigabitEthernet2

 description L3 WMI

 no switchport

 ip address <ip_address> <mask>

wireless management interface GigabitEthernet2

A WMI on an L3 port is not recommended unless you are using a C9800-CL in a public cloud; but if you have the WMI on an L3 port and the C9800 is acting as a Foreign WLC, set the VLAN in the policy profile to something other than VLAN 1.

Also remember that on C9800, for central switching WLANs, when mapping the VLAN to the WLAN in the policy profile, there is a special handling for VLAN 1 and default VLAN:

●      If vlan-name = default, client is assigned to VLAN 1

●      If vlan-id is explicitly set to 1, client is assigned to the wireless management VLAN

There is a warning to remind you of this.

If designing for identity-based networking services, in which the wireless clients should be separated into different groups for security reasons and get, for example, different VLANs, different Scalable Group Tags (SGT), or other security policies, consolidate WLANs with the AAA override feature.

This feature allows you to assign per-user settings or attributes while using one common SSID. Besides the possible security improvements, AAA override can also help in collapsing different WLANs/SSIDs into a single one, with significant improvements in overall RF utilization (fewer beacons and less probe activity).

On the C9800, the AAA override setting is defined on the Advanced tab in the Policy profile. This allows the user to have the same 802.1X SSID configured for AAA override in one location (group of APs = policy tag) and not in another, if desired. Usually, though, the AAA setting will be common among all APs.

AAA override

Also, be advised that for AAA override to work, the Catalyst 9800 needs to be configured to authorize the settings received via RADIUS from the server. Make sure you have the line “aaa authorization network” in your configuration, pointing to an authorization method list and a server-group name.
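Both pieces can be sketched in CLI form (the method-list, server-group, and profile names are placeholders):

! Authorize attributes received from RADIUS
c9800(config)#aaa authorization network default group RADIUS-GROUP
! Enable AAA override in the Policy profile
c9800(config)#wireless profile policy EMPLOYEE-POLICY
c9800(config-wireless-policy)#aaa-override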

VLAN override is a well-known and commonly used feature in wireless. It allows you to apply basic user group segmentation policies by having one common SSID and returning a different VLAN/subnet based on the group the user belongs to.

In SD-Access, the segmentation is hierarchical and can be at the VRF level (macro segmentation) and at the SGT level (micro segmentation). The WLC (AireOS or Cisco IOS XE based), being a Layer 2 box, doesn’t understand VRF and uses the concept of a Layer 2 virtual network identifier (VNID) instead. So for AAA override in SD-Access Wireless, the user can return a different Layer 2 VNID based on the user group, and that VNID is mapped on the switch to a VLAN interface (SVI) and so to a subnet and a VRF.

Here are important things you need to know about AAA override with the C9800:

●      For non-fabric deployments, VLAN AAA override can be implemented using either the Tunnel-Private-Group-ID or the Airespace-Interface-Name attribute. Both work, as the C9800 can receive both attributes simultaneously, using the appropriate one and discarding the other.

●      For fabric deployments, the C9800 currently supports only Airespace-Interface-Name to pass the Layer 2 VNID information.

Note:      AireOS can work only with Airespace-Interface-Name in fabric and non-fabric deployments.

The default timeout and maximum retries for EAP identity requests are set to address the majority of use cases. You might need to increase these parameters for some client authentication scenarios. For example, you might need to increase them when implementing one-time passwords on smart cards, or in general when a user interaction is needed to answer the initial identity request. You might also need to decrease these parameters to improve the client experience by lowering the recovery time in case of failure.

To verify default EAP identity timeouts and change the values if needed, go to Configuration > Security > Advanced EAP:

EAP identity request timeout and maximum retries

In the CLI, use the following command:

c9800-1(config)#wireless security dot1x identity-request ?

  retries  Maximum number of EAP ID request retries

  timeout  no description

During the 802.1X authentication phase, in the event of an EAP retry due to packet loss or lack of response from the client, the WLC may retry the EAP request. Some clients may not properly handle fast retry timers, so this setting may need adjustment depending on client types; this is important to facilitate fast recovery for bad RF environments.

It is difficult to give a general recommendation, but acceptable values are around 2 seconds in most cases and up to 30 seconds for slow clients (such as phones); usually this timeout is set to 30 seconds to account for worst-case scenarios. To show the default timeouts and change them if needed:


c9800-1(config)#wireless security dot1x request ?

  retries  Maximum number of EAP ID request retries

  timeout  no description

The EAP over LAN (EAPoL) timeout should be as minimal as possible for voice clients, such as the 7925 or 8821 IP phones. Normally, 400 to 1000 milliseconds can work correctly in most scenarios.

The maximum retry counter has a direct implication for several of the KRACK attacks reported in 2017 for wireless clients using WPA and WPA2. If the counter is set to zero, it can prevent most attacks against clients that are not yet patched against this vulnerability. But this has implications for authentications performed in bad RF scenarios or over a WAN network with possible packet loss, as using zero may cause a failed authentication process if the original packet is lost.

Note:      For security reasons, it may be advisable to use zero retries for EAPoL, but please validate this setting in your environment, as it may result in failed authentication in bad RF environments.

To show the defaults and change the EAPoL parameters, use the following GUI settings:

EAPoL key timeout and maximum retries

RADIUS authentication and accounting servers should have 5 seconds as the minimum value for server timeout to prevent early expiration of the client authentication process during load. Set the timeout for RADIUS authentication and accounting servers by entering these settings:

RADIUS server timeout

In the Catalyst 9800, it is important to configure the dead-criteria and the deadtime timers, especially when using multiple AAA servers and applying load balancing; with these commands the Catalyst 9800 marks a non-responsive server as “dead” and moves to the backup server. To configure these timers, use the following CLI commands:

radius-server dead-criteria time 5 tries 3

radius-server deadtime 5

“Deadtime” specifies the amount of time the server remains in dead status after dead-criteria marks it as dead. To make sure that the AAA server is actually “alive” after the deadtime, and to avoid sending requests to a still unreachable AAA server, you can configure an active probe under the server definition:

c9800(config)#radius server <name>

c9800(config-radius-server)#automate-tester username <username> probe-on

The username in this command can be a dummy one; it does not need to exist in the AAA server database. Note that if the server is reachable but the backend database (for example, Active Directory) or other services are not working, the WLC will still consider the server alive.
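Putting the server timeout, dead-criteria, deadtime, and automate-tester recommendations together, a sketch of a RADIUS server definition (the names, IP address, and key are placeholders):

c9800(config)#radius server ISE-1
c9800(config-radius-server)#address ipv4 10.10.10.5 auth-port 1812 acct-port 1813
c9800(config-radius-server)#key <shared-secret>
c9800(config-radius-server)#timeout 5
c9800(config-radius-server)#retransmit 3
c9800(config-radius-server)#automate-tester username probe-user probe-on
c9800(config-radius-server)#exit
c9800(config)#radius-server dead-criteria time 5 tries 3
c9800(config)#radius-server deadtime 5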

It is a best practice to increase the retransmit timeout value for TACACS+ AAA servers if you experience repeated reauthentication attempts or if the controller falls back to the backup server when the primary server is active and reachable. This is especially true when implementing one-time passwords. The server timeout can be configured when creating the TACACS+ server entry, and usually a value of 1 second is recommended:


Check on the SNMP communities and make sure you don’t use default or very well-known ones such as “private” and “public,” as this could represent a security risk in most deployments.

You may want to delete and re-create new ones if these default ones are configured:

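For example, on the CLI you can remove the well-known strings and define a custom read-only community (the community string is a placeholder):

c9800(config)#no snmp-server community public
c9800(config)#no snmp-server community private
c9800(config)#snmp-server community <custom-string> RO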

Rogue management and detection

Rogue wireless devices are an ongoing threat to corporate wireless networks. Network owners need to do more than just scan for unknown devices; they must be able to detect, disable, locate, and manage rogue and intruder threats automatically and in real time. Rogue APs can disrupt wireless LAN operations by hijacking legitimate clients and using plain-text, denial-of-service, or man-in-the-middle attacks. That is, a hacker can use a rogue AP to capture sensitive information, such as usernames and passwords. The hacker can then transmit a series of clear-to-send (CTS) frames, which mimic an AP informing a particular wireless LAN client adapter to transmit and instructing all others to wait. This results in legitimate clients being unable to access the wireless LAN resources. Thus, wireless LAN service providers seek to ban rogue APs from the air space.

The best practice is to use rogue detection to minimize security risks, for example, in a corporate environment. However, there are certain scenarios in which rogue detection is not needed, such as OfficeExtend access point (OEAP), citywide, and outdoor deployments. Using outdoor mesh APs to detect rogues provides little value while consuming resources to perform the analysis. Finally, it is critical to evaluate (or avoid altogether) rogue auto-containment, as there are potential legal issues and liabilities if it is left to operate automatically. The best practices listed in the following sections improve efficiency in maintaining the rogue AP list and making it manageable.

At a minimum, the security level should be set to High. Do this in the GUI:


Set “monitor all channels” for better rogue detection. The controller maintains a single channel scan list for the RRM metrics (noise, interference) and for rogue detection monitoring. The list can be configured to focus on Dynamic Channel Assignment (DCA) channels (those channels that will be automatically assigned to APs) or to country channels (those valid only in the configured country), or to scan all possible channels. The latter is the best option to ensure that any rogue using an uncommon channel can be detected properly. The drawback is that with a longer channel list, the AP will have to go off-channel more frequently inside the configured channel scan interval. Given these trade-offs, here are some recommendations:

●      For higher security, choose to scan all channels.

●      Choose DCA channels for higher performance, as the system will scan the least number of channels.

●      For a balance of performance and security, choose the country channel option.

Rogue monitoring channels
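The channel scan list can also be set from the CLI, per band; a sketch for scanning all channels on both bands:

c9800(config)#ap dot11 5ghz rrm monitor channel-list all

c9800(config)#ap dot11 24ghz rrm monitor channel-list all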

Define malicious rogue AP rules to prioritize major and critical rogue AP alarms that require immediate attention and mitigation plans. Critical or major rogue AP alarms are classified as malicious and are detected on the network. Each rogue rule is composed of single or multiple conditions, and you set AND (match all) or OR (match any) logic to match the rule. The recommended malicious rogue AP rules are as follows:

●      Managed SSIDs: Any rogue APs using managed SSIDs, the same as your wireless infrastructure, must be marked as malicious. Administrators need to investigate and mitigate this threat.

●      Minimum RSSI >-70 dBm: This criterion normally indicates that unknown rogue APs are inside the facility perimeters and can cause potential interference with the wireless network. This rule is recommended only for enterprise deployments that have their own isolated buildings and secured perimeters. It is not recommended for retail customers or venues that are shared by various tenants, where Wi-Fi signals from all parties normally bleed into each other.

●      User-configured SSIDs or substring SSIDs: Monitor any SSIDs that use different variations or combinations of characters in your production SSIDs.

For the rule, you need to set a state, which is either Alert, Contain, or Delete. It is recommended that you use Alert. Here is how to configure the rogue AP rule:


Note:      There are legal implications for containing rogue APs. Additionally, containing rogues using infrastructure APs will have a significant negative impact on wireless service during operation, unless dedicated APs are used for containment activities.

Research and investigate, and then remove, friendly rogue APs from the "unclassified" rogue AP list on a regular basis (weekly or monthly). Examples of friendly rogue APs are as follows:

●      Known internal friendly rogue APs, such as those within the facility perimeters, and known AP MAC addresses imported into the friendly rogue AP list.

●      Known external friendly rogue APs, such as those found in vendor shared venues and neighboring retailers.

Go to Monitor > Wireless > Rogues to do this:


It is possible to configure the rogue detection feature on a per-AP basis. For example, it could be useful to disable rogue detection on APs located in public areas. By default, rogue detection is enabled. To verify rogue configuration on the WLC, use this command:

show ap config general

and on the access point use this command:

AP-D6-122#sh rrm rogue detection config

Rogue Detection Configuration for Slot 0:

Rogue Detection Mode : Enabled

Rogue Detection Report Interval : 30

Rogue Detection Minimum Rssi : -90

Rogue Detection Transient Interval : 0

Like general rogue detection, ad hoc rogue detection is ideal in certain scenarios where security is justifiable. However, it is not recommended in scenarios such as open venues/stadiums, citywide, and public outdoor spaces. To enable ad hoc rogue detection and reporting, use this command:

c9800-1(config)#wireless wps rogue adhoc

The reason for enabling AAA validation for rogue clients is that the WLC will reliably and continuously check for the existence of a client on the AAA server and then mark it as either valid or malicious. Here is how to configure it on the GUI:

Enable rogue client AAA validation

If the Rogue Location Discovery Protocol (RLDP) feature is needed, use it only with monitor mode APs, to prevent performance and service impacts to the wireless network:


C9800(config)# wireless wps rogue ap rldp alarm-only monitor-ap-only

Note:      RLDP is supported only on 802.11ac Wave 1 APs. Please check the AP feature matrix for updates.

The Catalyst 9800 has aggressive rogue notification thresholds by default; in certain deployments where RF changes frequently, this may result in the notification receiver (e.g., Cisco Catalyst Center) being overwhelmed by too many messages.

The recommendation is to change the threshold for rogue AP and rogue client RSSI deviation notifications to a value higher than zero (the default). Use the following commands:

C9800(config)#wireless wps rogue ap notify-rssi-deviation 5

C9800(config)#wireless wps rogue clients notify-rssi-deviation 5

The recommended value is 5 or higher.

High availability

This section presents the recommended settings for high availability.

High availability (HA) with stateful switchover (SSO) is a feature supported on all versions of Cisco Catalyst 9800 Series software and all form factors, including the C9800-CL. The SSO feature allows a pair of controllers to act as a single network entity, working in an active/standby scenario. All configuration and AP and client states are synced between active and standby. HA SSO ensures that wireless clients will not have to reconnect and reauthenticate in case of a failure on the current active controller. Whenever allowed by the controller hardware type in use, it is advisable to take advantage of the HA SSO feature, to reduce any possible downtime in case of failure.

In Cisco IOS XE release 17.1 and higher, the C9800 supports the use of the Redundancy Manager Interface (RMI), which allows you to support the following features:

●      Gateway check

●      Dual active detection

For this reason, 17.1 and higher is the recommended release for C9800 HA SSO. Figure 1 shows the supported topologies.

Figure 1.   Supported HA SSO topologies


For more information, see the High Availability SSO Deployment Guide: https://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/8-8/b_c9800_wireless_controller_ha_sso_dg.html

Note:      On the Cisco Embedded Wireless Controller (EWC) on Catalyst Access Points, the HA implementation is slightly different: An active controller and a standby controller are running simultaneously on two Cisco Catalyst 9100 Access Points, so if the active WLC fails, the standby will automatically take over without user intervention. The switchover time is less than 10 seconds but is not stateful, and the controller services will take this time to come back up. Since the EWC operates in FlexConnect local switching mode, the same as with Mobility Express in AireOS, the client traffic is not affected during switchover.

The wireless mobility MAC is the MAC address used for mobility communication. In an SSO scenario, ensure that you explicitly configure the wireless mobility MAC address; otherwise, the mobility tunnel will go down after SSO. The mobility MAC address for the SSO pair can be configured either:

●      Before forming the SSO pair on each standalone controller. This is recommended before software release 16.12.3.

●      On the active controller once the SSO pair is formed.

To configure the mobility MAC address, you can use the GUI:

Mobility MAC

Once you’ve entered the address, click Apply.

Note:      The MAC address on the GUI is automatically derived from the wireless management interface, but you can use any other valid MAC address.

The equivalent CLI command is:

C9800(config)#wireless mobility mac-address <MAC>

VMware vSphere vMotion provides zero-downtime live migration of workloads from one server to another. This feature can be leveraged for the C9800-CL as well.

If you want to use vMotion on a C9800-CL configured in an SSO pair, you need to be aware of the following caveats:

●      Due to a current limitation with the ESXi switch for Virtual Guest Tagging (VGT mode), there might be an extended data outage during vMotion. As a workaround, you need to initiate traffic (for example, a continuous ping) from the 9800-CL to update the MAC address in the table on the physical switch connected to the server. The limitation is documented here: https://kb.vmware.com/s/article/2113783?lang=en_US.

●      If using local storage, use solid-state drives (SSDs) or hard disk drives (HDDs) in a RAID 0 configuration.

●      If using remote storage, that is, Network File System (NFS) or a storage area network (SAN), you need minimal latency (< 10 ms), and it’s recommended to connect over a 10-Gbps link.

●      vMotion and snapshots are not supported with SR-IOV interfaces.

●      It’s not recommended to do vMotion on both the active and standby controllers at the same time.

Note:      As of release 17.6, the vMotion feature equivalents for Hyper-V and KVM have not been validated.

Before forming the SSO pair, make sure:

●      The RP ports are connected, either directly or through a dedicated Layer 2 network, before you turn on HA SSO. You can connect either the fiber SFP or the Ethernet RJ-45 port. The fiber SFP HA connectivity takes priority over RJ-45. If the SFP is connected while RJ-45 HA is up and running, the HA pair reloads.

●      When connecting the RP ports directly, back-to-back, Cisco recommends using a copper cable with a length less than 30m (100ft). If you need to go beyond 30m (100ft), it’s recommended to connect the RP ports using a fiber cable.

●      Both boxes are running the same software and are in the same boot mode (install mode is the recommended one).

●      For physical appliances, use the exact same hardware type (for example, you cannot pair a C9800-L-C with a C9800-L-F).

●      For the C9800-CL, also pick the same scale template (large, medium, or small) on both virtual machines.

●      Before forming an HA pair, it is recommended to delete the existing certificates and keys on each C9800 that was previously deployed as a standalone controller. This avoids the risk that the same trustpoint is present on both WLCs but with different keys, which could cause issues after a switchover.

●      Set the keep-alive retries to 5 (this is the default beginning with release 17.1).

●      Set the higher priority (2) on the chassis you want to be the active WLC.

The following is an example of the settings for the box that will become the active controller:

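As a hedged CLI sketch of those two recommendations on the box intended to become active (chassis numbering is an example; the redundancy/RMI interface configuration is not shown here):

! Keep-alive retries (configuration mode; 5 is the default starting with 17.1)
C9800(config)#chassis redundancy keep-alive retries 5
! Higher priority on the intended active (exec mode; takes effect after reload)
C9800#chassis 1 priority 2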

If one of the boxes in an SSO pair fails and must be replaced, Cisco recommends that you follow this procedure to put the replacement device back in the cluster while avoiding any disruption to the wireless network:

1.      Physically disconnect the failed box and send it for RMA

2.      Make sure that the active WLC is configured with a higher chassis priority (= 2)

3.      When you receive the new box, before you connect it to the network and to the existing C9800, configure the basic parameters offline: login credentials, IP connectivity, and the redundancy configuration, including RMI if it applies. Remember to set the chassis priority to 1 so that when the SSO pair is formed, this box becomes the standby and does not disrupt the existing active WLC

4.      Save the configuration on the new box and power it off

5.      Physically connect the new C9800 to the network (uplink and RP ports)

6.      Power on the new box

7.      The box will boot up and the SSO pair will be formed again, with the new box going to the standby hot state.

Wireless and RF settings

In this section you can find general recommendations for building a stable and quality RF design, which is the foundation of a stable wireless network.

For any wireless deployment, always do a proper site survey to ensure adequate service levels for your wireless clients and applications. Keep in mind that each application has different requirements: voice deployments have stricter requirements than data services in terms of latency and jitter; location-based deployments require a denser deployment of APs to be able to triangulate each client position; new IoT applications might impose stringent requirements for latency, etc.

RRM is a great tool, and features like Dynamic Channel Assignment (DCA) and Transmit Power Control (TPC) can help automatically set the best channel and power plan but remember: RRM cannot correct a bad RF design. The site survey must be done with devices that match the power and propagation behavior of the devices to be used on the real network. Ideally, the actual device model and operating system/firmware versions should be used in the same condition (with sled or case) and orientation that will be used in the live network. For example, do not use an older 802.11b/g radio with an omnidirectional antenna to study coverage if the final network will use more modern dual radios for 802.11a/b/g/n and 802.11ac data rates. The site survey should match the AP model that you are going to install. The AP should be at the orientation and height that will be typical of the final installation. The data rates on the AP should be set to the rates required by your applications, bandwidth, and coverage requirements. If the primary objective of the network design is for each area of coverage to support 30 users at 5 GHz with 9 Mbps of data rate, perform a coverage test with the primary network device with only the 5-GHz data rate with 9 Mbps enabled. Then measure the -67 dBm received signal strength indicator (RSSI) on the AP for the test network client during active data traffic between the AP and client. High-quality RF links have good signal-to-noise ratios (SNRs) of 25 or better and low channel utilization (CU) percentages. RSSI, SNR, and CU values are found on the WLC’s client and AP information pages.

You must carefully plan the process to disable or enable data rates. If your coverage is sufficient, it is a good idea to incrementally disable lower data rates one by one. Control and management frames such as ACKs and beacons are sent at the lowest mandatory rate (typically 1 Mbps), which slows down the whole throughput, as the lowest mandatory rate consumes the most airtime. Try not to have too many supported data rates, so that clients can down-shift their rate faster when retransmitting. Typically, clients try to send at the fastest data rate. If a frame does not make it through, the client retransmits at the next lowest data rate, and so on, until the frame goes through. Removing some supported rates helps clients that retransmit a frame down-shift several data rates at once, which increases the chance for the frame to go through at the second attempt.

Things to remember when considering the data rate settings:

●      Beacons are sent at the lowest mandatory rate, defining roughly the cell size.

●      Multicast is sent at a rate between the lowest and highest mandatory rates, depending on the associated clients.

●      Do you really have 802.11b clients in your network? If you don’t, consider disabling the 802.11b data rates (1, 2, 5.5, and 11) and leaving the rest enabled.

●      If you are designing for a hotspot, enable the lowest data rates, because the goal is coverage rather than speed.

●      Conversely, if you are designing for a high-speed network and for capacity, with already good RF coverage, disable the lowest data rates.

●      Traffic Specification (TSPEC) and Call Admission Control (CAC) require 12 Mbps to be enabled.

The following configuration serves only as an example and should not be viewed as a strict guideline for every design. These changes are sensitive and heavily dependent on your RF coverage design. To change the data rates, go to Configuration > Radio Configuration > Network and then click on the 5 GHz tab:

[GUI screenshot]

And then the 2.4 GHz tab:

[GUI screenshot]
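
The same changes can be made from the CLI. The following is only a sketch of one possible rate plan (disabling the 802.11b rates on the 2.4-GHz band and making 12 Mbps the mandatory rate); the exact rates to change depend on your coverage design, and the 2.4-GHz network generally must be disabled while modifying rates:

c9800(config)# ap dot11 24ghz shutdown
c9800(config)# ap dot11 24ghz rate RATE_1M disable
c9800(config)# ap dot11 24ghz rate RATE_2M disable
c9800(config)# ap dot11 24ghz rate RATE_5_5M disable
c9800(config)# ap dot11 24ghz rate RATE_11M disable
c9800(config)# ap dot11 24ghz rate RATE_12M mandatory
c9800(config)# no ap dot11 24ghz shutdown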

Cisco recommends limiting the number of service set identifiers (SSIDs) configured on the controller. You can configure 16 simultaneous WLANs/SSIDs (per radio on each AP), but as each WLAN/SSID needs separate probe responses and beaconing, transmitted at the lowest mandatory rate, the RF pollution increases as more SSIDs are added. Also, some smaller wireless stations such as PDAs, Wi-Fi phones, and barcode scanners cannot cope with a high number of basic service set identifiers (BSSIDs) over the air, which can result in lockups, reloads, or association failures. It is recommended that you have one to three SSIDs for an enterprise and one SSID for high-density designs. By using the AAA override feature, you can reduce the number of WLANs/SSIDs while assigning individual per-user VLANs/settings in a single-SSID scenario. Enter this command to verify the SSIDs:

c9800-1#sh wlan summary

Number of WLANs: 3

ID   Profile Name            SSID                 Status    Security
----------------------------------------------------------------------------
1    employee                employee             UP        [WPA2][802.1x][AES]
2    guest                   guest                UP        [open],[Web Auth]
3    voice                   voice                UP        [WPA2][802.1x][AES]

The 2.4-GHz band is frequently under higher utilization and can suffer interference from Bluetooth devices, microwave ovens, and cordless phones, as well as co-channel interference from other APs because of the 802.11b/g limit of three nonoverlapping channels. To steer capable clients away from these sources of interference and improve overall network performance, you can configure band select on the controller. Here’s what you should know:

●      Band select is configurable per WLAN and is disabled by default.

●      Band select works by regulating probe responses to clients. It makes 5-GHz channels more attractive to clients by delaying probe responses to clients on 2.4-GHz channels.

●      Do not use band select if you will deploy voice or video services (any interactive traffic), as it may impair roaming performance on some client types.

Most newer clients prefer 5 GHz by default if the 5-GHz signal of the AP is equal to or stronger than the 2.4-GHz signal. This means that on deployments with newer client types, band select may not be necessary. In general, dual-band clients will start scanning on the same band where they first associated. Band select will impact the initial scan, steering clients toward 5 GHz, and so, if the client initially joins the 5-GHz band, it is more likely to stay there if there are good power levels on 5 GHz. To enable this feature, go to the Advanced tab in the WLAN configuration:

[GUI screenshot]

There is no general reason to change the default settings, but if you need to tweak the band select operations for a specific environment, do so here:

[GUI screenshot]
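
Band select can also be enabled per WLAN from the CLI. A minimal sketch, using the example WLAN profile shown earlier (the name and ID are placeholders; disabling the WLAN before the change is a safe practice):

c9800(config)# wlan employee 1 employee
c9800(config-wlan)# shutdown
c9800(config-wlan)# band-select
c9800(config-wlan)# no shutdown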

RF profiles are the main mechanism to customize the RRM and RF parameters for a given set of access points. With the C9800, there are two RF profiles, one for each band, and these are assigned to the AP through the RF tag. The C9800 has six default RF profiles (three for each band), and the Typical one is the default:

[GUI screenshot]

You can modify one of the defaults or create a custom RF profile. There are many RF parameters that can be customized within an RF profile: channel selection, data rates, RRM settings (DCA, TPC, CHD), RX-SOP thresholds, and more. Here are some general recommendations:

●      Set the desired TPC threshold on the RF group, based on the AP density and installed height. For large deployments, there can be significant variations in the RF environment, so it is important to properly adjust TPC to ensure optimal coverage in each location.

●      Together with transmit power, data rates are the primary mechanism to influence client roaming behavior. Changing the lowest mandatory rate can change when a client triggers a roam, which is especially important for large open spaces that suffer from sticky-client problems.

When setting up RF profiles, try to avoid configuring adjacent AP groups and RF profiles with different DCA channel sets, as this can negatively impact DCA calculations.

A user can add a non-supported channel to the RF profile DCA list, even if the channel is not allowed in the configured regulatory domain. The recommendation is to always check whether the configured channels are allowed in the country domain. There is no impact on network operations, since DCA will not assign the unsupported channels to the APs; in addition, starting with release 17.5, the C9800 validates whether the added channels are allowed.

For large, high-density deployments, it is advisable to modify the default aggregate probe interval sent by the access points. By default, the APs send an update to the WLC every 500 ms summarizing the probes received from clients. This information is used by the load balancing, band select, location, and 802.11k features. If there is a large number of clients and access points, it is advisable to lengthen the update interval to prevent control plane performance issues on the WLC.

To change this setting, use this command:

C9800(config)# wireless probe limit 50 64000

That would set it to 50 aggregated probe responses every 64 seconds, and these are the recommended settings.

Optimized roaming should be disabled because Apple, Samsung, and other modern devices use the newer 802.11r, 802.11k, and 802.11v roaming improvements. This setting is disabled by default, as you can verify in the GUI:

[GUI screenshot]

If it has been enabled, you can disable it again from the CLI with the no form of the command (per band):

Device(config)# no ap dot11 5ghz/24ghz rrm optimized-roam

If load balancing is required, it can be enabled on the WLAN; ensure that the controller has a global window set to five clients or higher, to prevent association errors. This is true for both the 5-GHz and 2.4-GHz bands:

[GUI screenshot]

In the C9800 these settings can also be configured per RF profile, which means that the user has the flexibility to assign a load balancing window to only a certain group of APs by assigning those to a specific RF profile and tag:

[GUI screenshot]

It’s recommended that you use this feature only in good coverage environments, as it might have a negative impact on voice or interactive video traffic.

To effectively detect and mitigate RF interference, enable Cisco CleanAir® whenever possible. There are recommendations for various sources of interference to trigger security alerts, such as generic DECT phones, jammers, etc. To verify the CleanAir configuration on the different bands, do the following:

[GUI screenshot]
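
From the CLI, CleanAir can be enabled and verified per band with commands along these lines (a minimal sketch, shown for 5 GHz; use the 24ghz keyword for the 2.4-GHz band):

c9800(config)# ap dot11 5ghz cleanair
c9800# show ap dot11 5ghz cleanair config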

CleanAir in general does not have an impact on network performance, and hence it should be left on. There have been a few customer installations in which a large presence of Bluetooth beacon devices caused some performance degradation. In these cases it’s recommended that you disable CleanAir detection for these types of devices. To do that, use this command:

C9800(config)#no ap dot11 24ghz cleanair device ble-beacon

Event-driven RRM (ED-RRM) enables the WLC to change channels when sudden and critical RF interference is detected on the APs’ current operating channel, without waiting for the normal DCA process to make the change based on RF metrics. It leverages the CleanAir information to force a quick reaction in situations where clients would otherwise suffer from poor throughput or connectivity issues.

ED-RRM is not on by default; it’s a good practice to enable it. This is done under Configuration > Radio Configuration > RRM:

[GUI screenshot]

Spectrum Intelligence (SI) is a feature that allows the AP to scan for non-Wi-Fi radio interference on the 2.4-GHz and 5-GHz bands. Spectrum Intelligence provides basic functions to detect interference of three types, namely microwave, continuous wave (such as video bridges and baby monitors), and frequency hopping (Bluetooth and frequency-hopping spread spectrum (FHSS) cordless phones). It is supported on the APs that don’t have a hardware-accelerated solution with a dedicated radio.

Since SI is done in software and leverages the client-serving radios, Cisco recommends that you disable this feature (which is the default starting with release 17.6.1) and carefully consider where and when you want to turn it on.

When a wireless network is first initialized, all participating radios require a channel assignment to operate without interference. Dynamic Channel Assignment (DCA) optimizes the channel assignments to allow for interference-free operation. The C9800 wireless controller does this using the air metrics reported by each radio on every possible channel and providing a solution that maximizes channel bandwidth and minimizes RF interference; interference is from all sources, such as self (signal), other networks (foreign Wi-Fi interference), and noise (everything else).

DCA is enabled by default and provides a global solution to channel planning for your network. Let RRM automatically configure all 802.11a or 802.11b/g channels based on availability and interference. This is the default, but here is the CLI command:

c9800(config)#ap dot11 5ghz rrm channel dca global auto

c9800(config)#ap dot11 24ghz rrm channel dca global auto

All the settings are available on the GUI as well (the example below is for a 5-GHz network):

[GUI screenshot]

By default the interval is set to 10 minutes. After your network has been brought up and is stable, it is recommended that you choose a longer interval, between 4 and 6 hours.
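
For example, a sketch of setting a 4-hour DCA interval on both bands from the CLI:

c9800(config)# ap dot11 5ghz rrm channel dca interval 4
c9800(config)# ap dot11 24ghz rrm channel dca interval 4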

802.11n can operate in a 40-MHz channel by bonding two 20-MHz channels together, which significantly increases throughput. Not all 802.11n devices support 40-MHz bonded channels, so it’s important to check. 802.11ac/ax allows bonding of 20-MHz channels into an 80-MHz-wide channel, and clients must support 80 MHz to take advantage of the wider channel. Channel bonding is not practical in 2.4 GHz, as there is a very limited number of nonoverlapping 20-MHz channels available. However, in 5 GHz, it can represent a significant increase in throughput and speed, provided you have enough 20-MHz channels available.

Quick overview of channel width:

●      20 MHz: Permits the radio to communicate using only 20-MHz channels. Choose this option for legacy 802.11a radios, 20-MHz 802.11n radios, or 40-MHz 802.11n radios that you want to operate using only 20-MHz channels.

●      40 MHz: Permits 40-MHz 802.11n/ac/ax radios to communicate using two adjacent 20-MHz channels bonded together. The radio uses the primary channel that you choose as the anchor channel (for beacons) as well as its extension channel for faster data throughput. Each channel has only one extension channel (36 and 40 are a pair, 44 and 48 are a pair, and so on). For example, if you choose a primary channel of 44, the Cisco WLC would use channel 48 as the extension channel. If you choose a primary channel of 48, the Cisco WLC would use channel 44 as the extension channel. 40 MHz is the recommended width for Apple iOS-focused deployments.

●      80 MHz: Sets the channel width for the 802.11ac/ax radios to 80 MHz.

●      160 MHz: Sets the channel width for the 802.11ac/ax radios to 160 MHz.

●      Best: Enables dynamic bandwidth selection, to modify the width depending on environmental conditions. This is the default setting.

In multitenant buildings, where channel bonding overlap may happen due to other wireless networks operating in the same RF space, you can keep the Best option but cap the bonding at 40 MHz:

c9800(config)#ap dot11 5ghz rrm channel dca chan-width width-max WIDTH_40Mhz

A 40-MHz channel width is a safe bet and gives you the best compromise between nonoverlapping channel availability and performance. In high-density deployments you may need to go down to 20 MHz. You should use 80 or 160 MHz only when there are no overlapping networks. A few client devices may not perform properly at 80 or 160 MHz, so validate this in your environment.

Note:      When enabling Best for the first time, a full DCA restart is recommended, using the c9800# ap dot11 5ghz/24ghz rrm dca restart command.

RRM works in conjunction with CleanAir and spectrum analysis, and ED-RRM is an important function to allow a quicker reaction to interference. To improve handling of Wi-Fi interference, rogue severity has been added to the ED-RRM metrics, via a feature called Wi-Fi interference awareness. If a rogue access point is generating interference above a given threshold, this functionality changes channels immediately instead of waiting until the next DCA cycle.

Note:      Wi-Fi interference awareness should be used when ED-RRM is enabled. It should be avoided in buildings with a very large number of colocated Wi-Fi networks (multitenant buildings) that are 100% overlapping.

To enable Wi-Fi interference awareness and configure the duty cycle to 80%, go to the DCA tab under Configuration > Radio Configuration > RRM, and go to the Event-Driven-RRM section:

[GUI screenshot]

Dynamic Frequency Selection (DFS) was created to increase the availability of channels in the 5-GHz spectrum. Depending on the regulatory domain, this can be from 4 to 12 additional channels. More channels imply more capacity. DFS detects radar signals and ensures that there is no interference with weather radar that may be operating on the frequency. Although the 5-GHz band offers more channels, care should be given to the overall design, as the 5-GHz channels have varying power and indoor/outdoor deployment restrictions. For example, in North America, the U-NII-1 channel can be used only indoors and has a restriction of 50 mW maximum power, and both U-NII-2 and U-NII-2e are subject to DFS.

Figure 2.   U-NII channels


By default, U-NII-2e channels are disabled in the DCA channel list. To check the channels that are being used and add channels, go to the Channel List section:

[GUI screenshot]

Once you have made selections for channels and channel widths, DCA will manage the channels dynamically and make adjustments as needed over time and changing conditions. However, if this is a new installation, or if you have made major changes to DCA such as changing channel widths or adding new APs, you can restart the DCA process. This initializes an aggressive search mode (startup) and provides an optimized starting channel plan. To determine which WLC is currently the group leader, use these commands:

c9800-1#sh ap dot11 5ghz group

c9800-1#sh ap dot11 24ghz group

From the identified group leader, to reinitialize DCA, use these commands:

c9800-1# ap dot11 5ghz rrm dca restart

c9800-1# ap dot11 24ghz rrm dca restart

Startup mode will run for 100 minutes, reaching a solution generally within 30 to 40 minutes. This can be disruptive to clients, due to lots of channel changes, if significant changes have been made to channel width, number of APs, and so on.

Note:      DCA restart should not be performed without change management approval for wireless networks that contain real-time-based applications, especially prevalent in healthcare.

Avoid using the Cisco AP load option in DCA, as it could trigger overly frequent channel changes due to varying load conditions. It is disabled by default.

DCA Cisco AP load

For Flexible Radio Assignment (FRA) to work properly, it is necessary that the channel change leader (RF group leader) be the same for both 2.4- and 5-GHz bands. To check if they are the same:

DCA and Flexible Radio Assignment

Choose the 2.4-GHz tab to verify for the other network.

The FRA interval needs to be greater than or equal to the DCA interval, even if FRA is not in use. To modify it, simply set the FRA interval to the desired value, then modify the DCA interval. In the example below, assuming that DCA is set to run every 8 hours, you can set FRA to run every 10 hours:

[GUI screenshot]

Transmit Power Control

The Cisco WLC dynamically controls the access point transmit power based on real-time wireless LAN conditions. Unlike AireOS, the C9800 does not offer a choice between TPC versions; only TPCv1 is available. With TPCv1, power can be kept low to gain extra capacity and reduce interference.

The Transmit Power Control (TPC) algorithm increases and decreases the power of an AP in response to changes in the RF environment. In most instances, TPC seeks to lower the power of the AP to reduce interference. But in the case of a sudden change in the RF coverage—for example, if the AP fails or becomes disabled—TPC can also increase the power of the surrounding APs. This feature is different from coverage hole detection, which is concerned primarily with clients. TPC provides enough RF power to achieve desired coverage levels while avoiding channel interference between APs. To configure automatic TPC on either the 5-GHz or 2.4-GHz network, go to Configuration > Radio Configuration > RRM and then select the 5-GHz Band or 2.4-GHz Band tab:

[GUI screenshot]

For optimal performance, use the Automatic setting to allow the best transmit power for each radio. While the default values should work for most environments, it is advisable to adjust the TPC thresholds to adapt properly to your RF deployment characteristics.
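
As a reference only, here is a sketch of keeping TPC in automatic mode while adjusting the TPC threshold from the CLI; the -65 dBm value is just an illustrative assumption, as the right threshold depends on AP density and mounting height:

c9800(config)# ap dot11 5ghz rrm txpower auto
c9800(config)# ap dot11 5ghz rrm tpc-threshold -65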

The controller uses the quality of client signal levels reported by the APs to determine if the power level of that AP needs to be increased. Coverage hole detection (CHD) is run at the single controller, so the RF group leader is not involved in these calculations. The controller knows the number of clients that are associated with a particular AP and the signal-to-noise ratio (SNR) for each client. If a client SNR drops below the configured threshold value on the controller, the AP increases its power level to compensate for the client. The SNR threshold is based on the transmit power of the AP and the coverage profile settings on the controller.

The CHD settings can be found by going to Configuration > Radio Configuration > RRM and then selecting the 5 GHz Band or 2.4 GHz Band tab:

[GUI screenshot]

The default settings are recommended for most deployments.

These are the best practices for mobility group configuration.

Ensure that IP connectivity exists between the management interfaces of all controllers. If a controller in the mobility group is permanently down (for replacement, testing, etc.), it is recommended that you remove it from the mobility configuration of all peers.

The mobility group name acts as a discriminator to indicate which controllers share a common cache for fast roaming information (Cisco Centralized Key Management, 802.11r, proactive key caching [PKC], or OKC). It is important to ensure that, if fast roaming is needed between controllers, they share the same mobility group name.

Do not create unnecessarily large mobility groups. A mobility group should contain only controllers that have APs in the area where a client can physically roam—for example, all controllers with APs in a building. If you have a scenario in which several buildings are separated, they should be broken into several mobility groups. This saves memory and CPU, as controllers do not need to keep large lists of valid clients, rogues, and APs inside the group, which would not interact anyway. The C9800 wireless controller, like AireOS, supports a maximum of 24 members in a single mobility group.

Note:      Do not confuse mobility groups with mobility domains. The C9800 supports up to 72 wireless controllers in a mobility domain or list. This is used for mobility across multiple mobility groups (this is NOT fast roaming, as that is available only within the same mobility group) and for setting up for foreign anchor peering for guest tunneling.
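
For reference, a minimal CLI sketch of defining the local mobility group and adding a peer; the group name, MAC address, and IP address below are placeholders:

c9800-1(config)# wireless mobility group name BLDG1
c9800-1(config)# wireless mobility group member mac-address 001e.e60f.1a00 ip 10.10.10.20 group BLDG1
c9800-1# show wireless mobility summary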

On the Catalyst 9800, inter-controller Layer 2 roaming occurs when the client VLAN associated to the SSID is the same on both controllers. When the client associates to an access point joined to a new controller, the new controller exchanges mobility messages with the original controller, and the client database entry is moved to the new controller. New security context and associations are established if necessary, and the client database entry is updated for the new access point. This process remains transparent to the user.

Inter-controller Layer 3 roaming occurs when the client VLANs associated to the SSID are different on each controller. Layer 3 roaming is similar to Layer 2 roaming in that the controllers exchange mobility messages on the client roam. However, instead of moving the client database entry to the new controller, the original controller marks the client with an “Anchor” entry in its own client database. The database entry is copied to the new controller client database and marked with a “Foreign” entry in the new controller. The roam remains transparent to the wireless client, and the client maintains its original IP address.

On the Catalyst 9800 Wireless Controller, the decision for Layer 2 versus Layer 3 roaming is independent of the client subnet mapped to the client VLAN; only the VLAN matters in deciding the type of roam. This is because the Catalyst 9800 doesn’t require a Layer 3 interface to be configured for each client VLAN. If inter-controller Layer 2 roaming is desired, it is the user’s responsibility to make sure that the network is configured so that the same IP subnet is associated with the same VLAN on both wireless controllers.

Note:      This is different from AireOS, where Layer 2 roaming happens if the client VLAN and the associated subnet are the same on both wireless controllers.

When implementing AP distribution across controllers in the same mobility group, try to ensure that all access points in the same RF space belong to a single controller. This will reduce the number of inter-controller roams required. A “salt and pepper” scenario (in which APs from different controllers cover the same RF space) is supported, but it is a more expensive process in terms of CPU and protocol exchanges compared to having a single controller per RF space.

Cisco supports roaming between controllers running different Cisco IOS XE software versions, but in general, it is advisable to use equal code across the controllers in the same mobility group to ensure consistent behavior across the devices. For more information on what software versions support interoperability, check:

https://www.cisco.com/c/en/us/td/docs/wireless/compatibility/matrix/compatibility-matrix.html#pgfId-550562

Cisco supports inter-release controller roaming (IRCM) between the Catalyst 9800 and AireOS wireless controllers. This is important to ensure seamless mobility during brownfield and migration scenarios. For details, review the Cisco Catalyst 9800 Wireless Controller–AireOS IRCM Deployment Guide:

https://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/8-8/b_c9800_wireless_controller-aireos_ircm_dg.html

Migration from AireOS WLC to C9800

As you design for a migration between AireOS deployment and the new C9800 wireless network, there are some best practices to consider. IRCM guidelines are provided earlier in the Mobility section.

All the roaming between the C9800 and AireOS controllers is Layer 3 roaming. This means that no matter what VLAN the SSID is mapped to on each WLC, the client will always be anchored to the first WLC it joins. In other words, the point of attachment to the wired network doesn’t change with roaming, even if the VLAN on the wired side is the same on both WLCs.

In the migration design phase, when defining a common SSID for roaming, use a different VLAN ID and subnets on the Catalyst 9800 and on the AireOS WLC.

As a result, clients will get a different IP address depending on whether they join the Catalyst 9800 or the AireOS controller first; seamless roaming is guaranteed either way, because the client always keeps its IP address on the VLAN/subnet it joined first.

This might not be possible in the following cases:

●      The customer is not willing to change the subnet design to add another VLAN/subnet for clients that join the newly added Catalyst 9800. This might also involve changes in the AAA and firewall settings.

●      The customer leverages public IP subnets and doesn’t have another spare subnet to assign to clients on the same SSID.

●      The customer is using static IP addresses for wireless devices.

When you have to use the same VLAN/subnet on both the Catalyst 9800 and AireOS, it is recommended to use the following releases:

●      Cisco IOS XE code: Release 16.12.4a or 17.3.2 and above

●      AireOS code: Release 8.5.17x, which is the seventh maintenance release (expected in January 2021) or Release 8.10.142 and above

The C9800 wireless controller uses a Secure Mobility protocol to build a secure mobility tunnel to the mobility peer. Secure Mobility is based on CAPWAP and by default encrypts all the control plane communication via DTLS. In order to set up a tunnel between the C9800 and AireOS, you need the right AireOS IRCM image, and you need to configure Secure Mobility on the AireOS side, as shown below:

Mobility groups and Secure Mobility

The hash is needed only when peering with a C9800-CL. In that case you need to get the hash with the following command:

c9800#sh wireless management trustpoint

Trustpoint Name  : ewlc-tp1

Certificate Info : Available

Certificate Type : SSC

Certificate Hash : 555c83c89d8fefab2d3601602117566b4e734e8e

Copy and paste the certificate hash into the AireOS mobility peer configuration:

Mobility groups and Secure Mobility

Data link encryption (encrypting client data traffic between controllers) is optional and is recommended if the tunnel is built on top of a nontrusted network. It is disabled by default, and if enabled, it has to be done on both sides. On AireOS:

Mobility groups and Secure Mobility

and on the C9800:

Mobility groups and Secure Mobility

As with two AireOS controllers or two C9800 controllers, the group name must match if you want to create a mobility group for supporting seamless mobility. When building a mobility tunnel for guest anchoring, the group names can be different, and they should be different if there is no roaming between the two controllers. The C9800 does not advertise anchored SSIDs on local APs on a guest anchor. Hence, roaming from foreign to anchor is not possible.

An RF group is a logical collection of wireless controllers that coordinate to perform RRM functions in a globally optimized manner, on a per-radio-network basis. Separate RF groups exist for the 2.4-GHz and 5-GHz networks. In order to cluster multiple WLCs into a single RF group, you need to set the same RF group name on all of them. In this case, an RF leader is elected, and the scope of the RRM algorithms is expanded beyond a single WLC.

In a migration scenario, where you have AireOS and IOS XE based wireless controllers managing a common RF domain, please follow the guidelines below:

●      When forming a single RF group, it is recommended to set the RF leader statically rather than relying on the default automatic election. This means that you should statically configure the most capable controller to be the leader. Here is the list in priority order:

[Image: controller priority order for RF group leader election]

If you have an existing 5520 deployment and you add a C9800-40, you want the Cisco IOS XE based controller to become the leader. You can do that in the GUI by configuring the C9800-40 as the RF leader: go to Configuration > Radio Configurations > RRM, select each band (6 GHz, 5 GHz, and 2.4 GHz), go to RF Grouping, click “Leader,” and then apply.

[GUI screenshot]

●      For very high-density deployments, with a number of APs and clients close to the maximum scale of the platform, consider configuring each WLC in its own RF group. The advantages are better use of new features and functionality and better management of newer Catalyst APs, which will most likely be deployed only on the Catalyst 9800. It also helps reduce the load on each wireless controller.

Note:      If you configure two separate RF groups, configure the two WLCs in the same mobility group to prevent the APs on the AireOS WLC from showing up as rogues on the C9800.

As you move an AP from an AireOS-based wireless controller to a Cisco IOS XE based one, there are a few considerations to keep in mind.

The first time the AP joins a controller based on a different OS, it will have to download the image and reload, so allow for downtime. After the first time, the AP will have both images in memory (the active and backup images), and you can move the AP back and forth between the two controllers without an additional download.

When moving an AP that is assigned to a certain AP group and a certain RF profile from AireOS to the C9800, this information is lost. You need to make sure that the C9800 is configured with the right profiles and tags and AP mapping, so that when the AP joins it will get the right settings.

Use extra caution when moving an AP from an AireOS-based appliance to a C9800-CL. On the appliance, the AP uses a Manufacturer Installed Certificate (MIC) to join the controller securely. On the C9800-CL, since it’s a VM, there is no MIC, and a self-signed certificate (SSC) is used. In order for the AP to join the C9800-CL, you have two options:

1.      Disable SSC validation on the AireOS appliance before moving the AP:

Moving APs between an AireOS WLC and the C9800

            This will make sure that the AP can join any virtual WLC.

2.      Configure a token on both controllers before moving the AP.

config certificate ssc auth-token <token> – on AireOS WLC

wireless management certificate ssc auth-token 0 <token> – on the C9800

            A token is just a string, and it has to match on both wireless controllers.

FlexConnect best practices

FlexConnect deployment is optimized for remote sites or branches for a distributed enterprise. Here are some important considerations:

●      FlexConnect helps reduce the branch hardware footprint, provides capital and operational expenditure savings, and reduces power consumption by eliminating the need for a local controller.

●      The wireless controller function is consolidated at the data center site and provides easy and centralized IT support. FlexConnect is ideal when the customer has a cookie-cutter configuration for multiple locations, as everything is managed centrally.

●      FlexConnect is designed for working across a WAN and provides survivability against WAN failures and reduced WAN usage between the central and remote sites.

●      For FlexConnect APs, the control plane is always centralized to the central WLC, but the data plane is flexible: the client traffic can be either locally switched at the AP or centrally switched at the controller.

Certain architectural requirements need to be considered when deploying a distributed branch office in terms of the minimum WAN bandwidth, maximum round-trip time (RTT), minimum MTU, and fragmentation. These guidelines are captured in the following guide:

https://www.cisco.com/c/en/us/td/docs/wireless/controller/technotes/8-8/b_flex_connect_catalyst_wirelss_branch_controller_dg.html#id_93580

Note:      As the CAPWAP control traffic between AP and WLC traverses the WAN, it is a good practice to set the quality of service (QoS) on the wired infrastructure to prioritize CAPWAP control channel traffic on UDP port 5246.

With the C9800, in order to configure an AP to operate in FlexConnect mode, you need to properly configure the site tag assigned to the AP. In other words, you don’t set the mode to FlexConnect on the AP itself (as you did with AireOS); you simply assign the AP to a site tag that is configured as a remote site, and the C9800 does the conversion automatically. The AP will NOT reboot; it simply goes through a CAPWAP restart and joins back in less than 30 seconds.

Here is an example of a site tag configured for FlexConnect:

FlexConnect mode on the C9800

As highlighted in the screenshot above, you need to uncheck Enable Local Site (which is enabled by default), and this triggers the AP to be converted to Flex mode. Also notice that the default Flex profile is selected. This is where you set all the Flex settings; you can use the default profile or a custom one if you have different settings in each branch.

Let’s look at an example. The AP initially joined in the default site tag, which is by default a local site, and you can see that the AP is in local mode, as expected:

FlexConnect mode on the C9800

Now assign the AP to the site tag created, the Flex-site one. This can be done by editing the tag assignment on the AP itself:

FlexConnect mode on the C9800

The AP disconnects and comes back in Flex mode, as expected:

FlexConnect mode on the C9800
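
The same conversion can be done from the CLI. A minimal sketch, reusing the Flex-site tag from the example above and a hypothetical AP name (a custom Flex profile can be attached with the flex-profile command under the site tag; changing the site tag makes the AP restart CAPWAP):

c9800(config)# wireless tag site Flex-site
c9800(config-site-tag)# no local-site
c9800(config-site-tag)# exit
c9800(config)# exit
c9800# ap name AP-Branch-1 site-tag Flex-site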

Enable local switching on the WLAN to provide resiliency against WAN failures and reduce the amount of data going over the WAN, thus reducing the WAN bandwidth usage. Local switching is useful in deployments where resources are local to the branch site and data traffic does not need to be sent back to the controller over the WAN link. Recommendations for local switching are as follows:

●      Connect the FlexConnect AP to the 802.1Q trunk port on the switch.

●      When connecting with a native VLAN on the AP, the native VLAN configuration on the Layer 2 switch port must match the configuration on the AP.

●      Ensure that the native VLAN is the same across all APs in the same location and site tag.

Some features are not available in local switching mode, depending on whether the AP is in connected mode (registered to the WLC) or standalone mode (the AP has lost connection to the WLC). Please check the feature availability using the Flex Matrix:

https://www.cisco.com/c/en/us/td/docs/wireless/access_point/wave2-ap/feature-matrix/b-wave2-ap-feature-matrix/catalyst-controllers.html

With the C9800, the native VLAN is defined in the Flex profile, as this is a setting for that Flex site. In this example the native VLAN is VLAN 10:

[GUI screenshot]

And matches the one configured on the switch:

interface TenGigabitEthernet1/0/3

 description to_Flex_AP

 switchport trunk native vlan 10

 spanning-tree portfast trunk
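
For reference, a sketch of the matching Flex profile configuration from the CLI (the profile name is a placeholder):

wireless profile flex Branch-Flex-Profile
 native-vlan-id 10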

The local switching attribute and the VLAN that clients will use are defined in the Policy profile, as this is a policy associated with the WLAN. For a locally switched WLAN, disable central switching and central association in the Policy profile. If the DHCP server is available at the local site, also disable central DHCP:

Local switching
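
A CLI sketch of the equivalent Policy profile settings (the profile name is a placeholder; the client VLAN is then added using one of the two methods described next, and the profile generally must be disabled while being modified):

c9800(config)# wireless profile policy Branch-Policy
c9800(config-wireless-policy)# shutdown
c9800(config-wireless-policy)# no central switching
c9800(config-wireless-policy)# no central association
c9800(config-wireless-policy)# no central dhcp
c9800(config-wireless-policy)# no shutdown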

The VLAN on the AP for the locally switched traffic can be configured in two ways:

●      Using the VLAN ID (number): Enter the VLAN number directly in the Policy profile. There is no need to define this VLAN on the controller itself, as it’s only for locally switched traffic. This VLAN will be pushed to the APs:

Local switching

●      Using the VLAN name: In this case you create the VLAN name globally on the WLC first and then you must tell the AP which VLAN ID to use for that VLAN name at a specific site. The mapping of VLAN name <> VLAN number needs to be configured under the Flex profile, and in this way the right VLAN ID is pushed to the APs.

            Let’s look at an example: VLAN “branch1” is defined first on the controller as a Layer 2 VLAN:

Local switching

            Then you select the VLAN name on the Policy profile:

VLAN name

The same VLAN name is mapped to the desired VLAN ID in the Flex profile, under the VLAN tab (in this case it’s the same number, 20):

VLAN name

If you have multiple branches and you want to use a different VLAN ID (number) in every branch with the same VLAN name, you can do this by configuring the mapping to the desired VLAN ID in a custom Flex profile assigned to each branch.

Note:      A maximum of 16 locally switched VLANs can be mapped to a Flex profile.

Note:      The VLAN name to VLAN ID mapping also needs to be configured under the Flex profile in order to use AAA VLAN override, when a locally switched VLAN is returned by the AAA server.
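
A sketch of the VLAN name to VLAN ID mapping in the Flex profile from the CLI, using the example above (the profile name is a placeholder, and the name/ID values are illustrative):

wireless profile flex Branch-Flex-Profile
 vlan-name branch1
  vlan-id 20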

When the site tag is configured for Flex, meaning that it’s disabled as a local site, it becomes the equivalent of an AireOS FlexGroup. For the C9800, it is important to remember that:

●      If seamless fast secure roaming is required, you have a limit of 100 APs per Flex site tag (the same as AireOS). Starting with release 17.8.1, the limit has been increased to 300 APs and 3000 clients, leveraging the “Pairwise Master Key (PMK) propagate” feature.

●      The client pairwise master key (PMK) is distributed among the APs that are part of the same Flex site tag. If you roam between two Flex site tags, the client is forced to do a full reauthentication (the same as AireOS when roaming across FlexConnect groups).

●      All the settings for the AP in a Flex site tag are done at the Flex profile level, which is then assigned to the site tag.

From a design perspective, these are best practices you should consider when dealing with FlexConnect site tags:

●      With FlexConnect, the site tag defines the perimeter where fast secure roaming is supported. Therefore, you should assign a site tag that matches a roaming domain, that is, the area where clients are likely to roam. This means that if you have RF leakage between two floors, it is recommended to configure the APs on both floors as part of the same site tag. Of course, keep in mind the 100/300-AP limit already mentioned.

●      You should always use custom site tags with FlexConnect. Fast secure roaming is not supported with the default site tag.

●      You should configure at least one custom site tag per FlexConnect location. (Multiple tags might be needed if you plan to exceed the 100/300-AP limit.) It is also important not to reuse the same site tag across multiple Flex locations (this includes the default-site-tag).

●      Starting with release 17.3.3, the C9800 supports overlapping client IP addresses across different site tags. The site tag in each site should be unique, as the C9800 uses the combination of site tag + IP address as a unique ID for the client (called the zone ID).

Note:      Overlapping client IP addresses are supported only for Flex deployments with local switching and a local DHCP server; for all other deployments (local mode, central switching, central DHCP, etc.), overlapping IPs are still not supported.

There are several features that leverage the concept of a FlexConnect profile and site tag:

●      802.11r Fast Transition (FT), Cisco Centralized Key Management, or OKC fast roaming for voice deployments

●      Local backup RADIUS server

●      Local EAP

●      WLAN-to-VLAN and VLAN-to-ACL mapping

●      Cisco Umbrella®

●      Cisco TrustSec®

Configure the split tunneling feature in scenarios where most of the resources are located at the central site and client data needs to be switched centrally, but certain devices local to the remote office need local switching to reduce WAN bandwidth utilization. A typical use case for this is the OEAP teleworker setup, where clients on a corporate SSID can talk to devices on a local network (printers, wired machines on a remote LAN port, or wireless devices on a personal SSID) directly without consuming WAN bandwidth by sending packets over CAPWAP. Central DHCP and split tunneling use the routing functionality of the AP.

Split tunneling in the C9800 is configured under the Policy profile. Use the reference in the configuration guide: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/flexconnect.html#ID138

The following limitations apply when deploying split tunneling:

●      Split tunneling is supported on 802.11ac Wave 2 and 802.11ax APs starting with Release 17.3.

●      Static IP clients are not supported with central DHCP and local split WLANs. So you need to configure DHCP Required under the Policy profile.

Use VLAN-based central switching in scenarios where dynamic decisions need to be made to locally switch or centrally switch the data traffic based on the VLANs returned by the AAA server and the VLANs present at the branch site. For VLANs that are returned by the AAA server and are not present on the branch site, meaning that they have not been mapped to the AP via the Flex profile, the traffic will be switched centrally. In the C9800, VLAN-based central switching is configured at the Policy profile level.

Quality of service (QoS)

This section provides a quick overview of Catalyst 9800 wireless QoS and some key best practices.

Wireless QoS refers to the capability of a network to provide better service to selected network traffic over the wireless media. The primary goal of QoS is to provide priority, including dedicated bandwidth, controlled jitter and latency (required by some real-time and interactive traffic), and improved loss characteristics.

When considering QoS on the Catalyst 9800, the following are important things to know:

●      As with any other Cisco IOS XE device, QoS features on the Catalyst 9800 are enabled through the Modular QoS Command-Line Interface (MQC). MQC is a command-line interface (CLI) structure that allows you to create traffic policies (built from class maps and policy maps) and attach these policies to targets.

●      A target is the entity where the policy is applied. The Catalyst 9800 supports two targets: SSID and client.

●      In terms of Wireless QoS policies for the Catalyst 9800, you will want to consider the following guidelines:

◦     Wireless targets can be configured only with marking and policing policies

◦     One policy per target per direction is supported

◦     Only one marking action (set DSCP) is supported

◦     Only one set action per class is supported

●      Wireless QoS policies for SSID and client may be applied in the upstream and downstream directions. The flow of traffic from a wired source to a wireless target is known as downstream (or egress) traffic. The flow of traffic from a wireless source to a wired target is known as upstream (or ingress) traffic.

●      SSID policies: You can create QoS policies on the SSID in both the ingress and egress directions. If no policy is configured, none is applied. The policy is applicable per AP, per SSID.

●      Client policies: Client policies are applicable in the ingress and egress directions. AAA override is also supported.

●      Wireless QoS policies are configured under the Policy profile.
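
To make this concrete, here is a minimal sketch of a policing policy (an aggregate 50-Mbps cap, chosen arbitrarily for illustration) attached to a Policy profile as an SSID-level policy; the policy, class, and profile names are placeholders, and the profile generally must be disabled while being modified:

c9800(config)# policy-map SSID-RATE-LIMIT
c9800(config-pmap)# class class-default
c9800(config-pmap-c)# police 50000000
c9800(config-pmap-c-police)# exit
c9800(config-pmap-c)# exit
c9800(config-pmap)# exit
c9800(config)# wireless profile policy Branch-Policy
c9800(config-wireless-policy)# shutdown
c9800(config-wireless-policy)# service-policy input SSID-RATE-LIMIT
c9800(config-wireless-policy)# service-policy output SSID-RATE-LIMIT
c9800(config-wireless-policy)# no shutdown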

The main purpose of the Metal QoS profile is to limit the maximum DSCP allowed on the network. The Catalyst 9800 supports four different QoS levels/profiles:

●      Platinum/voice – ensures a high quality of service for voice over wireless

●      Gold/video – supports high-quality video applications

●      Silver/best effort – supports normal bandwidth for clients; this is the default setting

●      Bronze/background – provides the lowest bandwidth for guest services

In general, Metal QoS profiles work the same as in AireOS. However there are some differences in the Catalyst 9800 that you should consider:

●      You can apply a Metal profile on both egress and ingress separately.

●      On the GUI, you can only set the Metal QoS profile per SSID. On the CLI you can also configure it on the client target.

●      On the Catalyst 9800 Metal QoS profiles are not configurable by the user.

●      On the Catalyst 9800, non-matching traffic goes into the default class and is marked as best effort.

●      Per-user and per-SSID bandwidth contracts are configurable via MQC QoS policies.
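
For example, the built-in Metal policies can be attached per SSID from the CLI through the Policy profile. A sketch (the profile name is a placeholder; the bronze/bronze-up pair shown here matches the Guest SSID recommendation later in this section, and the -up policies are the ingress equivalents):

c9800(config)# wireless profile policy Guest-Policy
c9800(config-wireless-policy)# shutdown
c9800(config-wireless-policy)# service-policy output bronze
c9800(config-wireless-policy)# service-policy input bronze-up
c9800(config-wireless-policy)# no shutdown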

“DSCP trust” is the QoS model supported by the Catalyst 9800. This means that all the QoS processing (queuing and policies) applied to the wireless traffic within the AP and WLC are based on the client DSCP value and not the 802.11 user priority (UP).

For example, for a centralized switching SSID in the downstream direction (wired to wireless traffic) the AP takes the DSCP value from the received CAPWAP header and uses it for internal QoS processing and mapping (received DSCP > UP > Access_Category). The DSCP value is mapped to the UP value in the frame to the wireless client using the data in Table 1 according to RFC 8325.

Table 1.            DSCP to 802.11 UP mapping (based on RFC 8325)

IETF DiffServ Service Class           DSCP          802.11 user priority    802.11 access category
---------------------------------------------------------------------------------------------------
Network control                       CS6, (CS7)    0                       AC_BE
IP telephony                          EF (46)       6                       AC_VO
VOICE-ADMIT                           VA (44)       6                       AC_VO
Signaling                             CS5 (40)      5                       AC_VI
Multimedia conferencing               AF4x          4                       AC_VI
Real-time interactive                 CS4 (32)      5                       AC_VI
Multimedia streaming                  AF3x          4                       AC_VI
Broadcast video                       CS3 (24)      4                       AC_VI
Low-latency data (transactional)      AF2x          3                       AC_BE
OAM                                   CS2 (16)      0                       AC_BE
High-throughput data (bulk data)      AF1x          0                       AC_BE
Best effort                           DF            0                       AC_BE
Low-priority data (scavenger)         CS1 (8)       1                       AC_BK
Remaining DSCP values                 Remaining     0                       AC_BE

Note:      For DSCP values that don’t map to an entry in Table 1, the Catalyst 9800 will use UP = 0, so traffic is sent as best effort.

In the upstream direction it is recommended to configure the AP to map the inner DSCP client value to the outer CAPWAP header. This is done using the following command under the AP Join profile:

ap profile <name>
 qos-map trust-dscp-upstream

If not configured, the AP will use the UP value and map it to the DSCP value described in Table 1. Starting with Release 17.4, the qos-map trust-dscp-upstream is the default setting so that client DSCP is, by default, maintained end to end.

For detailed QoS configuration steps, review this configuration guide: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/17-3/config-guide/b_wl_17_3_cg/m_wireless_qos_cg_vewlc.html

Following are some other important considerations and recommendations:

●      SSID-level policy – applied per AP to the aggregate traffic of all clients on that SSID.

●      Client-level policy – this is a per-client policy. Metal policies (platinum, gold, silver, bronze) cannot be configured per client on the WebUI, but they can be configured via the CLI.

●      If both SSID and client policies are applied, then the client policy is applied first and then the SSID policy

●      QoS policy AAA override is available per client, not per SSID. It is supported for APs in local mode as well as FlexConnect mode. You need to return the policy name as cisco av-pair from the RADIUS server:

◦     cisco-av-pair = ip:sub-qos-policy-in=MyPolicy

◦     cisco-av-pair = ip:sub-qos-policy-out=MyPolicy

●      QoS policies can also be applied via Auto-QoS. This is a set of predefined profiles that can be further modified by the customer to prioritize different traffic flows. To learn about the different auto-qos profiles and what they do, review this configuration guide: https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/17-3/config-guide/b_wl_17_3_cg/m_wireless_autoqos_cg_vewlc.html

●      For voice SSIDs it is recommended to use the “Fastlane” auto-qos profile (and not the voice profile). Fastlane will trigger the following configuration:

◦     Client QoS policy set to platinum

◦     EDCA parameter set to Fastlane under Radio Configurations > Parameters > 5 and 2.4 GHz bands

◦     The Catalyst 9800’s egress priority queuing is set to prioritize voice and CAPWAP traffic applying the AutoQos-4.0-wlan-Port-Output-Policy service policy

◦     To verify the EDCA settings, use the following command on the AP’s CLI:

sh controllers dot11Radio 1 | begin EDCA

●      For Guest SSID, it’s recommended to set the Metal QoS policy to Bronze

●      Regarding EDCA settings, remember that these settings are global per radio and not per SSID. There is no single recommended value for all networks, so it is important to test different values. For networks with voice and video traffic, it is a good idea to set the EDCA to “optimized-video-voice”.

●      QoS Bi-Directional Rate Limiting (BDRL) policy with AAA override is supported for both local and FlexConnect mode. Please read the QoS BDRL with AAA override on Catalyst 9800 Series Wireless Controllers guide for more details: http://cs.co/BDRL-QoS-example

The main command to use to verify what QoS policy has been configured:

C9800#sh policy-map interface wireless <ssid/client> profile-name <WLAN> radio type <2.4/5GHz> ap name <name> input/output

To verify the client policy:

C9800#show wireless client mac <> service-policy input/output

To verify the EDCA parameters on the AP:

AP#sh controllers dot11Radio 1 | begin EDCA

Note:      As with AireOS, QoS policy is applied at the AP for FlexConnect local switching SSIDs and at the controller for centrally switched traffic. It is the same for upstream and downstream directions.

This section provides best practices for enabling multicast applications on your wireless network.

Use multicast forwarding mode for the best performance with less bandwidth utilization for multicast applications when the underlying switched infrastructure supports multicast. Networks with large IPv6 client counts, multicast video streaming, and Bonjour without mDNS proxy may benefit greatly from multicast mode. If the APs are on different subnets than the one used for the WLC’s management interface and AP multicast mode is enabled, your network infrastructure must provide multicast routing between the management interface subnet and all AP subnets; otherwise, all multicast traffic will be lost.

To configure multicast-multicast operation in the WLC WebUI, go to Configuration > Services > Multicast:

[GUI screenshot]

To verify the multicast mode on the controller via the CLI, use the following command:

c9800-1#sh wireless multicast

Multicast                               : Enabled

AP Capwap Multicast                     : Multicast

AP Capwap iPv4 Multicast group Address  : 239.3.4.2

AP Capwap iPv6 Multicast group Address  : FF08::3:4:2

Wireless Broadcast                      : Disabled

Wireless Multicast non-ip-mcast         : Disabled

The AP CAPWAP IPv6 multicast group address is needed only if you have APs configured with an IPv6 address; if all the access points use IPv4 addresses, the IPv6 multicast address is not needed, as an IPv4 multicast CAPWAP overlay can carry both client IPv4 and IPv6 multicast traffic.

Starting with Release 17.2, you can use the following CLI command to verify the status of the capwap multicast tunnel for the APs:

c9800-1#sh ap multicast mom

AP Name    MOM-IP TYPE    MOM-STATUS
AP1        IPv4           UP
AP2        IPv4           UP

“MOM” stands for multicast over multicast.

Multicast-forwarding mode is the recommended setting. Use unicast forwarding only for small deployments and when multicast routing support in the network infrastructure is not possible. Unicast forwarding is not supported on the C9800-80, C9800-40, and C9800-CL medium and large template platforms.

Multicast Address for CAPWAP

The multicast address is used by the controller to forward traffic to APs. Ensure that the multicast address does not match another address in use on your network by other protocols. For example, if you use 224.0.0.251, it breaks mDNS used by some third-party applications.

Cisco recommends that the address be in the private range (239.0.0.0 to 239.255.255.255), excluding 239.0.0.x and 239.128.0.x, as those ranges will cause a Layer 2 flood. Also ensure that the multicast IP address is set to a different value on each WLC to avoid multicast packet duplication.

If you are using a native IPv6 wireless infrastructure (APs configured with IPv6 addresses) or a mix of IPv4 and IPv6, then the CAPWAP multicast group address needs to be configured with an IPv6 address as well.

Using Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) snooping may provide additional multicast forwarding optimization, as only APs with clients that have joined the respective multicast groups will transmit the multicast traffic over the air, so this is a recommended setting to have in most scenarios. Always check your client and multicast application behavior, as some implementations may not do IGMP group join, or may not refresh properly, causing the multicast streams to expire.

mDNS (Bonjour protocol) is a protocol to resolve hostnames to IP addresses within small networks that do not include a local name server. It’s used by clients to discover services like AirPlay, AirPrint, Googlecast, etc. The protocol leverages UDP IP multicast and is limited to a Layer 2 broadcast domain.

In the C9800 architecture there are two modes of operation for mDNS traffic forwarding: bridging and mDNS gateway.

mDNS bridging refers to forwarding mDNS packets within the same Layer 2 broadcast domain. By default, the C9800 enables mDNS bridging for packets received on the wired ports and on the wireless interfaces of each WLAN; you can disable it per WLAN if needed by changing the mDNS mode in the WLAN settings. If multicast-multicast mode is enabled, the C9800 bridges each mDNS packet to the AP multicast group configured on the controller so wireless clients can receive it; otherwise, it creates a copy of each received mDNS packet, which is then bridged individually to every AP via the CAPWAP unicast tunnel. In both scenarios, the C9800 also bridges the mDNS packets onto the wired network at the VLAN of the client that originated the packet. Therefore, mDNS works on the C9800 without special configuration if the devices are on the same subnet. Ideally, it is better to filter mDNS traffic with the use of the mDNS gateway.

The mDNS Gateway service on the C9800 listens for Bonjour services (mDNS advertisements and queries) on wired and wireless interfaces, caches these Bonjour services in an internal database and forwards these mDNS packets between different broadcast domains/VLANs while filtering unneeded services. This way you can have the sources and clients of such services in different subnets, and control mDNS traffic in your network.

The C9800 that acts as mDNS Gateway replies to mDNS queries from clients (for cached services) sourcing these mDNS responses with the use of its IP address for the VLAN assigned to the client asking for the service. This is why all VLANs on the C9800 controller where there are clients that require mDNS/Bonjour services must have a valid IP address configured at the Switched Virtual Interface (SVI).

This requirement no longer applies starting with release 17.9.1, where mDNS traffic is sourced from the wireless management interface (WMI).

Processing of mDNS packets can be quite resource intensive, especially when the mDNS gateway is enabled. In networks where there are a lot of clients and a lot of services being advertised, it is recommended to consider the following best practices:

●      Create your own mDNS service list, and don’t use the default profile, so you can decide which services you really need. For example, a good practice would be to remove apple-continuity if enabled:

mdns-sd service-list <service list name> OUT

 no match apple-continuity

mdns-sd service-list <service list name> IN

no match apple-continuity

●      Configure the mDNS service policy to use Location Specific Services (LSS) to optimize mDNS responses to clients, using the command:

mdns-sd service-policy <name>

 location lss

●      Configure mDNS transport to be IPv4 or IPv6, not both. IPv4 is recommended:

mdns-sd gateway

       transport ipv4 

Outdoor Deployments

This section explains the outdoor best practices for design, deployment, and security.

The outdoor environment is a challenging RF environment. Many obstacles and interferers exist that cannot be avoided. Prior to designing a network, an RF active site survey is the first step to understand your RF environment.

Once the RF active site survey is performed, you must estimate the number of outdoor access points required to meet your network’s design requirement. The best tool for estimating an access point’s coverage area is the  WNG Coverage and Capacity Calculator.

Outdoor access points can operate in multiple deployment modes, with each deployment mode meeting a different use case.

●      Local mode: This is the best option for an outdoor deployment when mesh is not needed. It provides full feature support and RRM, and allows the 2.4-GHz and 5-GHz radios to be used exclusively for client access. This deployment mode should be used when each access point has a dedicated Ethernet connection.

●      Bridge mode: A common option for an outdoor deployment when mesh deployment is desired because a cable connection is not present for all APs. The AP operates either in root access point (RAP) mode, when the wired backhaul is available, or in mesh access point (MAP) mode when the AP uses the wireless backhaul. The wireless client traffic is CAPWAP tunneled to the WLC.

●      Bridge-Flex mode: Provides flexible and hybrid operation between mesh and Flex. This mode is recommended for scenarios in which the APs are separated from the WLC by a WAN link; it is also useful when traffic needs to be switched locally at the AP instead of being sent centrally to the controller.

Note:      If you want to use an outdoor AP in fabric mode, meaning to broadcast fabric SSID, then local mode is the only mode supported.

If the regulatory domain channel plan allows it, when selecting the backhaul channel for a mesh tree, avoid channels that can be used for radar (DFS channels).

When deploying a mesh network, there should be multiple paths for each access point back to a WLC. Multiple paths can be added by having multiple RAPs per mesh tree. If a RAP fails and goes offline, other mesh access points will join another RAP with the same bridge group name (BGN) and still have a path back to the WLC.

For best results, follow these simple recommendations:

·          Ensure that RAPs are configured on different channels to reduce or avoid co-channel interference. MAPs will use background scanning to identify each RAP.

·          RAPs should be on the same VLAN/subnet to prevent mesh AP address renegotiation on parent change that could delay total mesh convergence time.

·          Ensure that MAPs have background scanning enabled, to facilitate new parent discovery.

On the C9800 wireless controller, the mesh configuration can be done at the global level, at the Mesh profile level, and also at the AP level. Using a Mesh profile is useful, as you can group all the desired settings in one place and then apply them to the group of APs by assigning the Mesh profile to the AP Join profile.

The global configuration is found under Configuration > Wireless > Mesh:

[Screenshot of the Configuration > Wireless > Mesh page]

On the same page you can click the Profiles tab to define a custom one or change the default Mesh profile.

AP-specific configuration can also be done by using the ap name exec commands:

c9800#ap name <NAME>  mesh ?

  backhaul        Configure mesh backhaul

  block-child     Set mesh block child state

  daisy-chaining  Set mesh daisy chaining

  ethernet        Configures Ethernet Port of the AP

  linktest        Perform a linktest between two APs

  parent          Set mesh preferred parent mac address

  security        PSK provisioned key deletion from AP

  vlan-trunking   Enables vlan trunking for bridge mode AP

Let’s consider a few recommended settings. When operating in bridge mode, each access point should be assigned a bridge group name and preferred parent. This helps the mesh network to converge in the same sequence every time, allowing the network to match the initial design.

The bridge group can be set at the Mesh profile level:

Recommended mesh settings

When deploying a mesh network, each mesh node should communicate at the highest possible backhaul data rate. To ensure this, it is recommended that you enable Dynamic Rate Adjustment (DRA) by selecting the Auto backhaul data rate. DRA has to be enabled on every mesh link; this is done by enabling it in the Mesh profile, as shown above.

Setting the preferred parent is a per-AP configuration:

C9800#ap name <ap-name> mesh parent preferred <mac-address>

To verify, use this command:

C9800#show ap name <ap-name> mesh neighbor detail

For a mesh network, a backhaul channel width of 40 MHz offers the best equilibrium between performance and RF congestion avoidance. To set the channel width per AP, use the following command:

C9800# ap name <AP-name> dot11 5ghz channel width 40

To ensure optimal performance over your mesh network, make sure the backhaul link quality is good. An optimal link would have an SNR greater than 40 dB, but this is not always achievable in a non-line-of-sight deployment or in long-range bridges. Cisco recommends that the link SNR be 25 dB or greater. To check the link SNR, use the following command:

c9800#sh wireless mesh neighbor

If you want to authenticate APs as they join the mesh network, an external RADIUS server should be configured for MAC authentications. This allows all bridge mode access points to authenticate at a single location, thus simplifying network management. For instructions on how to set up authentication, refer to the configuration guide:

https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/mesh-access-points.html#id_88479

To have the best equilibrium between mesh security and ease of deployment, it is advisable that you enable the Mesh Key Provisioned feature. For more details, see the configuration guide:

https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/config-guide/b_wl_16_10_cg/mesh-access-points.html#id_88480

The Catalyst 9800 wireless controller supports streaming telemetry to efficiently stream data to an external collector. The collector further analyzes the data and extracts relevant information for monitoring and troubleshooting. The C9800 supports dial-out telemetry: with dial-out or “configured” telemetry subscriptions, once the configuration is set up by the user, the C9800 maintains the subscription configuration and sends telemetry to the subscriber without needing an active session initiated from the collector. Here is a sample configuration of a telemetry subscription:

telemetry ietf subscription 1011

 encoding encode-tdl

 filter tdl-uri /services;serviceName=ewlc/wlan_config

 source-address <source IP on the C9800>

 stream native

 update-policy on-change

 receiver ip address <collector/subscriber IP>  protocol tls-native profile <profile name>

The C9800 supports a maximum of 100 concurrent telemetry subscriptions; in release 17.6 this limit was increased to 128 concurrent sessions. The Catalyst 9800 supports streaming telemetry to one instance, and one instance only, of Cisco Catalyst Center and Cisco Prime Infrastructure (PI). You can have both collectors active at the same time, but you cannot have the C9800 streaming telemetry to two different Cisco Catalyst Center collectors, for example. If using an external third-party collector, make sure that the number of sessions does not exceed the maximum supported number.

To show the existing subscriptions, use the following command:

show telemetry ietf subscription all 

  Telemetry subscription brief

  ID               Type        State       Filter type      

  --------------------------------------------------------

  1011             Configured  Valid       tdl-uri          

  1012             Configured  Valid       tdl-uri          

  1013             Configured  Valid       tdl-uri          

  1014             Configured  Valid       nested-uri       

  1016             Configured  Valid       tdl-uri          

  1051             Configured  Valid       tdl-uri 

The “State” column also shows whether the subscription is valid.

The Catalyst 9800 Wireless LAN Controller cannot be simultaneously managed by both Cisco Prime Infrastructure (PI) and Catalyst Center in a read-write fashion. It is possible, though, to have Prime manage the C9800 for configuration and reporting and use Catalyst Center for Assurance. In a nutshell, only one management platform can be configuring the box and have write access.

For more information on how to configure Prime to manage C9800, please use this link: https://www.cisco.com/c/en/us/support/docs/wireless/catalyst-9800-series-wireless-controllers/214286-managing-catalyst-9800-wireless-controll.html

Here is what is important to understand: if you plan to move to Cisco Catalyst Center as your network management solution, the C9800 needs to be removed from Prime Infrastructure first. When the C9800 is removed/deleted from PI, the configuration that PI pushed to the C9800 at the time of inventory is not rolled back and must be manually deleted from the system. Specifically, the subscription channels established for the C9800 WLC to publish streaming telemetry data are not removed.

To identify this specific configuration:

C9800#show run | sec telemetry

To remove this configuration, run the no form of the command:

C9800(config) # no telemetry ietf subscription <Subscription-Id>

Repeat this CLI to remove each of the subscription identifiers. Then repeat the following CLI to remove each of the transform names:

C9800(config) # no telemetry transform <Transform-Name>

Troubleshooting tips

Please refer to these documents for the latest on troubleshooting:

https://www.cisco.com/c/en/us/support/wireless/catalyst-9800-series-wireless-controllers/products-tech-notes-list.html

https://logadvisor.cisco.com/logadvisor/wireless/9800/

Stack Exchange Network

IPv6 address space layout best practices
I'm comfortable with IPv4 address space allocations. By which I mean: Given services to plan for, or an organization to network, I have a good grasp of how to plan IP address space usage. (or at least, I think I do. :)

Are there any best practices guidance, or case studies, for IPv6 address space layout?


  • IPv6 Subnetting - Overview and Case Study Cisco.com –  Ronnie Smith Commented Jan 19, 2017 at 22:47

12 Answers

The layout that we are using for our rollout is:

  • /48 per customer
  • /56 per customer site (as a subnet of the other /48)
  • /126 for all point-to-point links in the core, these are all subnets of a /48 used for all core links

These sizes are mostly taken from the RIPE advisory here .


  • 7 Though that only goes down to a site. How about the internal LANs, floors, buildings, services, voice LAN , convention of coding the VLAN into the network address, and so on ? –  nos Commented May 8, 2013 at 13:07
  • 1 I'd then use a /64 for each VLAN/floor/building (or however your allocation works). –  David Rothera Commented May 8, 2013 at 13:45
  • does ARIN (the RIR apropos for me) have any recommendations/advisories? –  Craig Constantine Commented May 8, 2013 at 14:23
  • I assume you've got some means of monitoring abuse from the likes of spammers who like burning through their assigned IPs? –  frogstarr78 Commented May 8, 2013 at 16:38
  • 4 ripe.net/lir-services/training/material/… has a pretty good read (thanks to Marco Hogewoning for pointing me to it). –  Andrew Y Commented May 23, 2013 at 14:58

The old recommendation was to use /64 everywhere even on P2P links and assign a /48 per site.

Using large, empty subnets on point-to-point links can lead to a number of potential security issues (see RFC 6164), so it’s now best practice to use a /127 for P2P links and /128 for loopbacks.

It’s not necessary to give a small customer a /48, although you would have plenty of addresses to go around if you choose to do so.

Interfaces that are facing customers should be /64 if you want to use SLAAC. If you don't intend to use it you can use another mask.

Here are some good links to go through:

BRKRST-2301 from ciscolive365.com (create free account)
http://www.cisco.com/web/strategy/docs/gov/IPv6_WP.pdf
https://www.rfc-editor.org/rfc/rfc5375.html
https://www.rfc-editor.org/rfc/rfc6177

Some people take their current v4 assignments and convert the second and third octets into hex and use that for v6. There are lots of different ways of doing it, so you have to choose what feels best.


  • 7 I submit that any IPv6 addressing scheme that is based on an existing IPv4 addressing scheme should be subject to extra scrutiny. This is an opportunity to break free of past shackles, not a rote chore to faithfully reproduce them. –  neirbowj Commented May 23, 2013 at 14:08
  • 2 My understanding is that the smallest subnet that is advisable to create (P2P links aside) is a /64. If I'm a home customer and want to have multiple subnets on my LAN, without using NAT6, I want more than a /64. As someone interested in having IPv6 at my home, and as someone who knows how many quadrillions of /64s there are, I want at least a /60. –  Luke has no name Commented May 23, 2013 at 23:44

With IPv6 you no longer have to worry about allocating space for a given number of hosts. All subnets (other than P2P links) should be assigned as a /64 which gives you a ridiculous number of host addresses. This frees you to focus on other topics such as good network layout & design. (A /48 would give you 65,536 /64 networks)

There are (of course) several schools of thought on this. If you are already pretty happy with your IPv4 design, then doing a IPv6 overlay that mirrors things is probably a good option and eases the transition for everyone.

  • 2001:0DB8:1:1:: /64 --> 10.1.1.0 /24
  • 2001:0DB8:1:2:: /64 --> 10.1.2.0 /24
  • 2001:0DB8:1:254:: /64 --> 10.1.254.0 /24

Play around with some of the IPv6 calculators to help you get your head around all of this. Here is an example one: GestioIP Online IPv4/v6 Calculator
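If you prefer to script this rather than use an online calculator, here is an illustrative sketch with Python’s standard ipaddress module (the 2001:db8:1::/48 block and the decimal-digits-in-the-hex-field mapping mirror the example overlay above):

import ipaddress

# Documentation prefix used in the overlay example above.
v6_block = ipaddress.ip_network("2001:db8:1::/48")

# A /48 yields 2^(64-48) = 65,536 possible /64 subnets.
print(v6_block.num_addresses // 2**64)  # 65536

def overlay(v4_subnet: str) -> ipaddress.IPv6Network:
    """Map 10.1.N.0/24 to 2001:db8:1:N::/64, keeping N's decimal digits
    literally in the IPv6 subnet field, as in the overlay above."""
    third_octet = ipaddress.ip_network(v4_subnet).network_address.packed[2]
    return ipaddress.ip_network(f"2001:db8:1:{third_octet}::/64")

print(overlay("10.1.1.0/24"))    # 2001:db8:1:1::/64
print(overlay("10.1.254.0/24"))  # 2001:db8:1:254::/64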

This was the hardest thing for me to get over - don't worry about allocating space for hosts! Plan your network -- focus on locations of layer-3 boundaries, services offered, physical location of devices, etc. It is probably going to be years before you have a pure IPv6 network, but you will begin laying the groundwork of good network design now.


A little precision to add to the earlier responses, based on the RIPE IPv6 training session I attended a year ago. Basically their recommendation is to focus on aggregation rather than address space preservation.

That is: don’t worry about reserving a large amount of IP space for a Point of Presence even if you only have a small number of subnets there (for now). But you should aggregate every subnet “living” in a POP under the same bigger prefix.

Their main concern, now that we have a very large amount of IP space at our disposal, is that if everyone announces small prefixes with a fine granularity, the size of the DFZ routing table might explode.

Here is the training material used in the presentation. Especially the first "Training exercise" PDF gives some example of an addressing plan.


I use the following layout myself (datacenter PoV):

Colocation customers: one /48.

Dedicated servers: one /64 per server by default.

P2P links (bgp linknets, and so on): /126

As for the IPv4-to-IPv6 transition to a dual-stack environment for hosted VLANs, I match each IPv4 subnet to an IPv6 subnet that is large enough to contain a /64 for every single IPv4 address.

For example:

A VLAN containing one IPv4 /24 (256 IPs) is matched with an IPv6 /56 (256 unique /64 subnets).

A VLAN containing one IPv4 /23 (512 IPs) is matched with an IPv6 /55 (512 unique /64 subnets).
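A tiny illustrative sketch of that sizing rule in Python (the example subnets are placeholders; the function just computes which IPv6 prefix length yields one /64 per IPv4 address):

import ipaddress

def matching_v6_prefixlen(v4_subnet: str) -> int:
    """IPv6 prefix length whose count of /64s equals the number of
    IPv4 addresses in the given subnet (e.g. a /24 maps to a /56)."""
    v4 = ipaddress.ip_network(v4_subnet)
    host_bits = 32 - v4.prefixlen   # 8 host bits for a /24 = 256 addresses
    return 64 - host_bits           # a /56 contains 256 /64 subnets

print(matching_v6_prefixlen("10.10.10.0/24"))  # 56
print(matching_v6_prefixlen("10.10.20.0/23"))  # 55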


SURFnet wrote a nice IPv6 network plan manual that might be useful


  • This link is now dead; it's a fairly shallow answer, too. Perhaps you could include some highlights from the original source? –  Ryan Foley Commented Feb 14, 2014 at 21:51
  • I replaced the link with one hosted at RIPE (who sponsored the translation). It's quite hard to give a decent summary of the document since it addresses many different scenario's, but it mostly corresponds to what others mentioned here. It's a nice document for helping you make some decisions on how to choose addresses. –  Teun Vink Commented Feb 14, 2014 at 22:01
  • The question asks about the existence of best practices in general, without any specific inquiry. This answer succinctly satisfies this question. Upvoted. –  StockB Commented Apr 25, 2016 at 18:01
  • How to view this answer on Android? What app does work with the file? –  Ferrybig Commented Jun 1, 2018 at 15:29
  • The link is dead, again. –  Craig McQueen Commented Oct 26, 2021 at 20:30

It's a bit intimidating when you see the huge address space available, but in practice, it's not hard to deal with.

Let's say you are allocated a /48. That gives you 65K /64s to play with, each capable of holding rather a lot of addresses. Also the rounding error in 65K gives you a slack handful of other /<64 for other uses.

Personally speaking I call off /64 subnets from the /48 per VLAN. I set the router address as </64>::1 for each VLAN. I use </64>::xxxx for DNS (where xxxx is a repeated digit) and similar for a few other services. It's easier to remember.
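A minimal sketch of that convention, assuming a hypothetical 2001:db8:abcd::/48 and a simple VLAN-ID-to-/64 mapping:

import ipaddress

site = ipaddress.ip_network("2001:db8:abcd::/48")   # hypothetical /48

def vlan_subnet(vlan_id: int) -> ipaddress.IPv6Network:
    """One /64 per VLAN, carved sequentially out of the site /48
    (enumerating 65,536 /64s is fine for a /48)."""
    return list(site.subnets(new_prefix=64))[vlan_id]

vlan10 = vlan_subnet(10)
print(vlan10)     # 2001:db8:abcd:a::/64
print(vlan10[1])  # 2001:db8:abcd:a::1  (the ::1 router-address convention)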

Each box gets a SLAAC allocated address and all hosts are encouraged to also set a temporary address. This way we can find a system using the SLAAC address but the system retains a little privacy on the internet - or it would but we generally use a web proxy - ahh but that has a temporary address as well! Still, the ubiquity of IPv4 makes all this moot.

If you have multiple sites then break up the /48 into smaller bits but larger than /64 - enough to cover all eventualities. This will allow you to aggregate routing tables somewhat.

Frankly, assuming you DO have a /48 (I have one for my home, so I don't doubt it) then you should have enough space to cover most eventualities and schemes.

Now, if your setup is bigger - say multi-national and multi site then I suggest you investigate PI and then break that up by country/site/VLAN or country/locality/site/building/VLAN or whatever. You still get plenty of addresses in a /48 for all but the largest set up.


Some network device architectures assume that most of your prefixes will be /64's. Check this column in Ivan Pepelnjak's blog for more info.


The biggest concern is likely to identify where your bottlenecks are going to be, in terms of route aggregation. The basic parameters are likely going to be: each subnet must be a /64 (dictated by IPv6), and you have a /60, /56, or /48 to play with.

As others have said, a /48 gives you 64k subnets, but it's still easy to paint yourself into a corner if you just assign them randomly. Let's say you have 1000 store locations, and give each one a /64 sequentially from the start. Then you find out that the 43rd store needs a second subnet - that means, either renumbering that network, or giving the store two separate subnets that can't be aggregated.
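One illustrative way to avoid that corner, sketched in Python under the assumption of a /48 and a reserved /60 per store, so a store that later needs a second subnet still aggregates under a single prefix:

import ipaddress

company = ipaddress.ip_network("2001:db8:5000::/48")  # hypothetical /48

# Reserve a /60 per store (16 /64s each) instead of a single /64,
# so later growth stays aggregated under one per-store prefix.
stores = list(company.subnets(new_prefix=60))          # 4096 stores max

store_43 = stores[43]
first, second = list(store_43.subnets(new_prefix=64))[:2]
print(store_43)  # 2001:db8:5000:2b0::/60
print(first)     # 2001:db8:5000:2b0::/64
print(second)    # 2001:db8:5000:2b1::/64  (still inside the store's /60)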

Incidentally, in the IPv4 world, you also get 64k subnets if you use the 10.x.x.x network and subnet it to /24s. Some of the practices you use in that scenario may translate nicely.

One company I work for uses 10.x.x.x internally for about 150 branch offices (with some 100-500 computers at each location). The second byte is the branch number, and they use /22 instead of /24 for their subnets. So each branch office can have up to 64 subnets, which works nicely for them.
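That 10.<branch>.0.0/16-with-/22-subnets layout is easy to sketch with Python’s ipaddress module (branch 37 below is just an example):

import ipaddress

def branch_subnets(branch: int):
    """Carve 10.<branch>.0.0/16 into /22s: 64 subnets per branch."""
    branch_block = ipaddress.ip_network(f"10.{branch}.0.0/16")
    return list(branch_block.subnets(new_prefix=22))

subnets = branch_subnets(37)
print(len(subnets))  # 64
print(subnets[0])    # 10.37.0.0/22
print(subnets[1])    # 10.37.4.0/22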


  • Yes, the best practice is that each site gets a /56 or shorter mask length. Also, it is recommended that nibbles not be split when assigning things (each mask length assigned should be divisible by 4). Carriers will not advertise a prefix longer than /48, so if the individual sites are to be advertised separately, they each need a /48. –  Ron Maupin ♦ Commented Nov 8, 2015 at 5:34
  • That best practice (like most best practices) is generally a good idea, but may not always fit. For instance, if you are a Starbucks or McDonalds, you may not have enough /56s for all your stores. That's actually why organizations such as various country's militaries, and even a chain book store, wanted a /29 or even shorter prefixes. –  Kevin Keane Commented Nov 8, 2015 at 6:21
  • 1 My company got a much shorter mask length. You can easily get a much shorter mask length so that you can assign a /56 (or shorter) to each site. All I'm saying is that if you want to advertise a prefix on the Internet, you need a /48 or shorter mask length. Get a /32 or /24, it's not hard if you have the need. –  Ron Maupin ♦ Commented Nov 8, 2015 at 6:29
Are there any best practices guidance, or case studies , for IPv6 address space layout?

Super short answer: Starting at /56, try to project what will be used in the next several years and adjust up or down accordingly. People requesting a single address should still have a few allocated for future expansion; avoiding allocation fragmentation is important, more so than slight over-allocation.

A longer answer:

Internet Engineering Task Force (IETF) - Best Current Practices :

  • RFC 6177 and BCP 157 - "IPv6 Address Assignment to End Sites" clarifies that a one-size-fits-all recommendation of /48 is not nuanced enough for the broad range of end sites and is no longer recommended as a single default.
1. Introduction - There are a number of considerations that factor into address assignment policies. For example, to provide for the long-term health and scalability of the public routing infrastructure, it is important that addresses aggregate well [ ROUTE-SCALING ]. Likewise, giving out an excessive amount of address space could result in premature depletion of the address space. This document focuses on the (more narrow) question of what is an appropriate IPv6 address assignment size for end sites. That is, when end sites request IPv6 address space from ISPs, what is an appropriate assignment size.
This document does not make a formal recommendation on what the exact assignment size should be. The exact choice of how much address space to assign end sites is an issue for the operational community. The IETF's role in this case is limited to providing guidance on IPv6 architectural and operational considerations. This document provides input into those discussions.
2. On /48 Assignments to End Sites - Looking back at some of the original motivations behind the /48 recommendation [RFC3177], there were three main concerns. The first motivation was to ensure that end sites could easily obtain sufficient address space without having to "jump through hoops" to do so. For example, if someone felt they needed more space, just the act of asking would at some level be sufficient justification.
As a comparison point, in IPv4, typical home users are given a single public IP address (though even this is not always assured), but getting any more than one address is often difficult or even impossible -- unless one is willing to pay a (significantly) increased fee for what is often considered to be a "higher grade" of service. (It should be noted that increased ISP charges to obtain a small number of additional addresses cannot usually be justified by the real per-address cost levied by RIRs, but additional addresses are frequently only available to end users as part of a different type or "higher grade" of service, for which an additional charge is levied. The point here is that the additional cost is not due to the RIR fee structures, but to business choices ISPs make.)
An important goal in IPv6 is to significantly change the default and minimal end site assignment, from "a single address" to "multiple networks" and to ensure that end sites can easily obtain address space.
A change in policy (such as above) would have a significant impact on address consumption projections and the expected longevity for IPv6. For example, changing the default assignment from a /48 to /56 (for the vast majority of end sites, e.g., home sites) would result in a savings of up to 8 bits, reducing the "total projected address consumption" by (up to) 8 bits or two orders of magnitude. (The exact amount of savings depends on the relative number of home users compared with the number of larger sites.)
3. Other RFC 3177 Considerations - ... Given the large amount of address space in IPv6, there is plenty of space to grant end sites enough space to be consistent with reasonable growth projections over multi-year time frames. Thus, it remains highly desirable to provide end sites with enough space (on both initial and subsequent assignments) to last several years. Fortunately, this goal can be achieved in a number of ways and does not require that all end sites receive the same default size assignment.".
  • RFC 7608 and BCP 198 - "IPv6 Prefix Length Recommendation for Forwarding"
Abstract - IPv6 prefix length, as in IPv4, is a parameter conveyed and used in IPv6 routing and forwarding processes in accordance with the Classless Inter-domain Routing (CIDR) architecture. The length of an IPv6 prefix may be any number from zero to 128, although subnets using stateless address autoconfiguration (SLAAC) for address allocation conventionally use a /64 prefix. Hardware and software implementations of routing and forwarding should therefore impose no rules on prefix length, but implement longest-match-first on prefixes of any valid length.
  • RFC 7934 and BCP 204 - "Host Address Availability Recommendations" recommends that networks provide general-purpose end hosts with multiple global IPv6 addresses when they attach, and it describes the benefits of and the options for doing so.
Introduction - "Unlike IPv4, IPv6 networks are not forced by address scarcity concerns to provide only one address per host. ... Furthermore, providing multiple addresses has many benefits, including application functionality and simplicity, privacy, and flexibility to accommodate future applications. Another significant benefit is the ability to provide Internet access without the use of Network Address Translation (NAT). Providing only one IPv6 address per host negates these benefits.
2. Common IPv6 Deployment Model - IPv6 is designed to support multiple addresses, including multiple global addresses, per interface (see Section 2.1 of [RFC4291] and Section 5.9.4 of [RFC6434] ). Today, many general-purpose IPv6 hosts are configured with three or more addresses per interface: a link- local address, a stable address (e.g., using 64-bit Extended Unique Identifiers (EUI-64) or Opaque Interface Identifiers [ RFC7217 ]), one or more privacy addresses [ RFC4941 ], and possibly one or more temporary or non-temporary addresses obtained using the Dynamic Host Configuration Protocol for IPv6 (DHCPv6) [ RFC3315 ].
In most general-purpose IPv6 networks, hosts have the ability to configure additional IPv6 addresses from the link prefix(es) without explicit requests to the network. Such networks include all 3GPP networks ( [RFC6459], Section 5.2 ), in addition to Ethernet and Wi-Fi networks using Stateless Address Autoconfiguration (SLAAC) [ RFC4862 ].".
  • RFC 4862 - "IPv6 Stateless Address Autoconfiguration" explains:
3. Design Goals
Stateless autoconfiguration is designed with the following goals in mind: o Manual configuration of individual machines before connecting them to the network should not be required. ... Address autoconfiguration assumes that each interface can provide a unique identifier for that interface (i.e., an "interface identifier"). ...
Small sites consisting of a set of machines attached to a single link should not require the presence of a DHCPv6 server or router as a prerequisite for communicating. Plug-and-play communication is achieved through the use of link-local addresses. Link-local addresses have a well-known prefix that identifies the (single) shared link to which a set of nodes attach. A host forms a link- local address by appending an interface identifier to the link- local prefix.
A large site with multiple networks and routers should not require the presence of a DHCPv6 server for address configuration. In order to generate global addresses, hosts must determine the prefixes that identify the subnets to which they attach. Routers generate periodic Router Advertisements that include options listing the set of active prefixes on a link.
Address configuration should facilitate the graceful renumbering of a site's machines. For example, a site may wish to renumber all of its nodes when it switches to a new network service provider. Renumbering is achieved through the leasing of addresses to interfaces and the assignment of multiple addresses to the same interface. Lease lifetimes provide the mechanism through which a site phases out old prefixes. The assignment of multiple addresses to an interface provides for a transition period during which both a new address and the one being phased out work simultaneously.

Security Considerations :

  • OPSEC - " Operational Security Considerations for IPv6 Networks - draft-ietf-opsec-v6-12 ":
Generic Security Considerations
         2.1. Addressing Architecture
                IPv6 address allocations and overall architecture are an important part of securing IPv6. Initial designs, even if intended to be temporary, tend to last much longer than expected. Although initially IPv6 was thought to make renumbering easy, in practice, it may be extremely difficult to renumber without a good IP Addresses Management (IPAM) system.
                Once an address allocation has been assigned, there should be some thought given to an overall address allocation plan. With the abundance of address space available, an address allocation may be structured around services along with geographic locations, which then can be a basis for more structured security policies to permit or deny services between geographic regions.
                A common question is whether companies should use PI vs PA space [RFC7381], but from a security perspective there is little difference. However, one aspect to keep in mind is who has administrative ownership of the address space and who is technically responsible if/when there is a need to enforce restrictions on routability of the space due to malicious criminal activity. Using PA space exposes the organization to a renumbering of the complete network including security policies (based on ACL), audit system, ... in short a complex task which could lead to some security risk if done for a large network and without automation; hence, for large network, PI space should be preferred.

Other References :

ARIN - " Recommended Draft Policy ARIN-2015-1: Modification to Criteria for IPv6 Initial End-User Assignments ".

ARIN - " Draft Policy ARIN-2011-3: Better IPv6 Allocations for ISPs ".

All ARIN Policies .

IANA - Main Page - Protocol Registries - IANA-managed Reserved Domains .

IETF - " Considerations on the IPv6 Host density Metric - draft-huston-hd-metric-00.txt ".

All IETF BCPs . ( Archives ).

Wikipedia's Best Current Practices (Currently not up to date).

APNIC - " IPv6 Best Current Practices ".

Cloudmark's Whitepaper: " BCP for Near Term SMTP Deployments in IPv6 Networks ".

NSRC.org - " Ingress & Egress Filtering Lab - Campus Network Design & Operations Workshop ".

RIPE - " IPv6 Address Allocation and Assignment Policy " says (amongst many other things): "The minimum allocation size for IPv6 address space is /32. (for LIRs)", "To qualify for an initial allocation of IPv6 address space, an LIR must have a plan for making sub-allocations to other organisations and/or End Site assignments within two years.", "LIRs that meet the initial allocation criteria are eligible to receive an initial allocation of /32 up to /29 without needing to supply any additional information.", ...

RIPE - " Understanding IP Addressing and CIDR Charts " (also see below) offers these helpful charts:

IPv4 and IPv6

The original architecture of the Internet consisted mostly of large networks connecting to each other directly, and didn't look much like the hierarchical design used today. It was easy to give one huge address block to the military and another to Stanford University. In that model, routers had to remember only one IP address for each network, and could reach millions of hosts through each of those routes.

  • IPv6 devices can each be given a globally unique address by default; IPv4 devices often cannot, because the pool of IPv4 addresses was exhausted between 31 January 2011 (IANA) and 24 September 2015 (ARIN free pool).

Here is an old map of the entire Internet in February 1982 compared with the Internet of today; StackExchange.com is the tiny dot in the center of the right image.

The Internet 1984 versus today

RFC 3484 - "Default Address Selection for Internet Protocol version 6 (IPv6)" was obsoleted by RFC 6724 (Sept 2012), new in the update is:

"Sections 2.1.4 , 2.2.2 , and 2.2.3 of RFC 5220 describe address selection problems related to Unique Local Addresses (ULAs) [RFC4193]. By default, global IPv6 destinations are preferred over ULA destinations, since an arbitrary ULA is not necessarily reachable.".
  • A one-size-fits-all recommendation of /48 is not nuanced enough for the broad range of end sites and is no longer recommended as a single default.

See: RIPE - " Understanding IP Addressing and CIDR Charts ":

"Every device connected to the Internet needs to have an identifier. Internet Protocol (IP) addresses are the numerical addresses used to identify a particular piece of hardware connected to the Internet.
The two most common versions of IP in use today are Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6). Both IPv4 and IPv6 addresses come from finite pools of numbers.

For IPv4, this pool is 32-bits (2^32) in size and contains 4,294,967,296 IPv4 addresses.

The IPv6 address space is 128-bits (2^128) in size, containing 340,282,366,920,938,463,463,374,607,431,768,211,456 IPv6 addresses.

Address Allocation Model

Currently, IANA allocates address blocks to the regional registries. The registries in turn assign address blocks to service providers. It is the service provider’s responsibility to hand out addresses to their respective customers.

The current policy varies by region and in the most conservative case dictates that an end user must go through the user’s service provider to get IPv6 address space rather than directly approaching the regional registry for IPv6 address space.

Provider dependent policy

The figure graphically represents how this initial policy is enacted. This assignment model is commonly referred to as a provider assigned (PA) or provider dependent (PD) assignment. The prefix lengths that are shown in the figure are recommendations. The registries and service providers can assign blocks using the processes and procedures that they have established for their regions and customers. This is explained in RFC 6177.

RFC 6177 - "IPv6 Address Assignment to End Sites".

As an example of the policy, IANA has assigned 2600:0000::/12 to ARIN for assignment. This aligns with the top layer of the model. ARIN subsequently has assigned 2600::/29 block to Sprint, 2600:300::/24 to AT&T Mobility, 2600:7000::/24 to Hurricane Electric, etc.

These block assignments do not follow the original model defined in RFC 3177. The service providers subsequently assign blocks to their customers based on their customers’ needs. The Internet service provider (ISP) has the flexibility to assign a wide range of addresses to its customers.

For example, a large enterprise ISP customer might need a /40 assignment while a residential customer would only need a /60 assignment.

There is an exception to this policy enacted by the regional registries that allows end customers to directly approach registries and request IPv6 address space. This exception is known as provider independent (PI) addressing.

RFC 5375 - "IPv6 Unicast Address Assignment Considerations" outlines some issues that also need to be taken into account when building an addressing plan.

You should first decide if you want provider independent address blocks or is provider assigned addressing acceptable?

If the customer has PI addresses the assignment will remain valid providing the criteria for the original assignment are met.

Customers with PA addresses are recommended to obtain a new address space assignment from another LIR and return the PA address space that was assigned by their original LIR. In this ...

There’s more; consulting the IANA and IETF links above is the best way to stay on top of the best practices.
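To make the nesting of the 2600::/12 example quoted above concrete, here is a small illustrative check with Python’s ipaddress module (the provider prefixes are the ones listed in the quote):

import ipaddress

arin = ipaddress.ip_network("2600::/12")              # IANA -> ARIN
providers = {
    "Sprint":             ipaddress.ip_network("2600::/29"),
    "AT&T Mobility":      ipaddress.ip_network("2600:300::/24"),
    "Hurricane Electric": ipaddress.ip_network("2600:7000::/24"),
}

for name, net in providers.items():
    # subnet_of() requires Python 3.7 or newer
    print(f"{name}: {net} is inside {arin}: {net.subnet_of(arin)}")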


The best way of dividing IPv6 is into /64 subnets, because a /64 can be easily mapped to IPv4 manually.


  • 1 How does dividing it into /64 make that easier than dividing it in /48's for example. Can you elaborate on how you would do this mapping? –  Teun Vink Commented Jan 19, 2017 at 12:56
  • 1 And why should we care about "easily mapped to IPV4"? –  Michael Hampton Commented Jan 19, 2017 at 18:25

The main differences between v4 and v6

  • There should be no need to micromanage. Address space is relatively plentiful.
  • The expectation is that all subnets will be /64s.
  • NAT is strongly discouraged. For large businesses that is no problem; they just get PI space or even register as an LIR and advertise their space over BGP. However, for small businesses it leaves a difficult choice: do they apply for PI space and buy more expensive internet connections that will let them use it? Do they run private addresses and ISP-allocated public addresses in parallel and hope that no ISP-allocated addresses end up in long-term configuration files? Do they ignore the IETF and run NAT anyway?
  • The hexadecimal notation makes nibble boundaries convenient for addressing levels.

Beyond that it shouldn't be much different from v4, figure out what subnets you need, figure out what logical groupings they fall into and how much room for future expansion you want at each level and start putting together a plan.



TechRepublic

Assigning IP Address ranges – is there a “Best Practices” document?

by wfairley · about 16 years, 5 months ago

I feel like I am trying to herd cats at work – I would really like to standardize how we assign IP addresses within DHCP and the subnets, but each administrator has his or her own “Best Practices” ideas about IP Address schemes – one person uses x.x.x.1 as the gateway/router, while another uses x.x.x.254, and yet another will use x.x.x.2. One administrator will use x.x.x.30 through x.x.x.100 as the DHCP range, while another will use x.x.x.100 through x.x.x.200 and so on. One administrator even used a range in the middle of his DHCP scope for switches, routers, and servers!

I want to do something like this (assuming /24):
x.x.x.1 = default gateway
2-20 = network access devices
21-40 = server devices
41-60 = network peripherals (printers, etc.)
61-80 = reserved for special devices
81-100 = static addressed workstations
101-200 = standard DHCP pool of addresses
201-254 = VOIP devices (switch, adapters, etc.)

BUT ~ I can’t convince everyone to do it the same way. They want to see a “Best Practices” document from an authoritative source, and I don’t blame them, I do too. This has led into some heated but healthy discussions about ITIL and IT Service Management, areas in which I am still learning more.

Any feedback? Does such a document exist?

Thanks in advance!
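As an aside, a scheme like the one proposed above is simple to capture in a short script so it can be reviewed and shared between administrators. Here is an illustrative Python sketch (the 192.168.10.0/24 subnet is a placeholder, the helper name is hypothetical, and the role table mirrors the ranges in the post):

import ipaddress

# Illustrative only: role ranges mirror the /24 plan proposed above.
ROLES = [
    (1,   1,   "default gateway"),
    (2,   20,  "network access devices"),
    (21,  40,  "server devices"),
    (41,  60,  "network peripherals"),
    (61,  80,  "reserved for special devices"),
    (81,  100, "static addressed workstations"),
    (101, 200, "standard DHCP pool"),
    (201, 254, "VoIP devices"),
]

def role_of(addr: str, subnet: str = "192.168.10.0/24") -> str:
    """Return the role of a host address under the proposed scheme."""
    net = ipaddress.ip_network(subnet)
    ip = ipaddress.ip_address(addr)
    if ip not in net:
        return "outside subnet"
    last_octet = int(ip) - int(net.network_address)
    for low, high, role in ROLES:
        if low <= last_octet <= high:
            return role
    return "unassigned"

print(role_of("192.168.10.150"))  # standard DHCP pool
print(role_of("192.168.10.33"))   # server devices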

All Answers


Clarifications

In reply to Assigning IP Address ranges – is there a “Best Practices” document?


I’ve never seen one…

by fregeus · about 16 years, 5 months ago

…and I doubt there is one. DHCP range assignment is very situation specific. Some networks don’t have VoIP devices, some don’t have servers, some don’t have printers or workstations.

I think your best bet is to get everyone involved in a conference room and hammer out your own standard.

TCB edited for typos


No real best practice… subnetting is how it’s really done.

by cg it · about 16 years, 5 months ago

I don’t think designing a network by IP address is considered SOP, though it’s a factor in design.

Also, assigning addresses isn’t just a matter of wanting to be able to identify what host is what address.

There are many factors in the design process that affect what addresses to use: number of hosts needed, security required, delegation of administration, any growth plans.

Besides, you will have to use MAC filtering in DHCP to ensure that hosts get the correct pool addresses. Not a monumental task but does tend to create more administrative effort when you retire older equipment, replace broken equipment, upgrade equipment, replace faulty NICs, blah blah blah.


A few options

by michael kassner · about 16 years, 5 months ago

There are two schools of thought when it comes to this subject. There are those who like to compartmentalize the IP addrs and obviously those that do not.

The first approach is a great idea, but then you run into devices that do not fit into compartments, and what do you do with them? There are other problems as well: what if you run out of IP addresses for a specific group?

I subscribe to the second approach for most of my clients and then immediately mention that they need to get a good inventory documentation application that will automatically provide all of the pertinent information about any given device.

Cisco has a great article about all of this and might be helpful to you.

http://www.cisco.com/en/US/tech/tk869/tk769/technologies_white_paper09186a008014f924.shtml#topic4

This is the closest I’ve seen to what I’m looking for…

In reply to A few options

Thanks Michael! This Cisco document actually presents some “core” address assignments within subnets, such as suggested address assignments for gateways, special devices, and standard DHCP ranges, as well as naming conventions, etc. It is very helpful!

To put my request in context, I currently work for a small telecom company that occasionally picks up network consulting work for small firms in my city, predominantly law firms and doctor offices with fewer than 20 people, so VLANs and subnetting are rarely necessary. By defining some standard practices regarding address ranges and assignments, we can improve our response times for outages, improve our documentation efficiency, and even improve our install lead times and install process times.

This document takes me in the right direction, but I will continue my search, then publish my own “Best Practices” document in the TechRepublic forums and look for feedback.

Thanks again Michael!

In reply to This is the closest I’ve seen to what I’m looking for…

Well, Cisco did all of the work. I would very much appreciate hearing about the final plan that you come up with. For the most part it is very specific to the situation being discussed, but I have never been led astray by Cisco.


You’re better off approaching this

by dumphrey · about 16 years, 5 months ago

from a standards and procedures vector. Since this is mostly a pet peeve of yours, it’s your responsibility to show what financial, management, and security benefits are to be had by developing an internal standard.



COMMENTS

  1. PDF Cisco IT IP Addressing Best Practices

    IPv4. As the available pool of public IPv4 addresses is limited, Cisco IT should play a part in allocating addresses sensibly and appropriately. Cisco IT should allocate address space in appropriately sized blocks to allow for a good balance of summarization capability and to avoid wasting IP address space.

  2. Ip Addresses For Network Devices: Best Practices And Recommendations

    Summary and Conclusion. Strategically assigning IP addresses to networked devices provides major benefits for enterprises and small networks alike when properly planned and documented. Leveraging logical organization, segmentation, and other best practices allows room for expansion while delivering simpler, more resilient IP infrastructure.

  3. PDF IP Addressing Guide

    This guide is a concise reference on IP addressing best practices, including: • The basic concepts of IP addressing • The IP addressing plan used in the Smart Business Architecture (SBA) ... • Duplicate IP address device assignments • Wasted IP address space • Unnecessary complexity. IP Addressing Basics 3 IP Addressing Basics IP ...

  4. Understanding IP Address Assignment: A Complete Guide

    In simple terms, an IP address is a numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. It consists of four sets of numbers separated by periods (e.g., 192.168..1) and can be either IPv4 or IPv6 format. IP Address Allocation Methods.

  5. Best Practices for Efficient IP Address Management

    Efficient IP Address Management is critical for maintaining a well-functioning network. By following these best practices and adopting an automated IPAM solution, businesses can optimize their IP addresses, streamline administrative tasks, and improve overall network management. With the ever-increasing demand for connectivity and mobility, now ...

  6. Best Practices for Setting Static IP Addresses on Cisco Business

    Devices that use DHCP are automatically given a dynamic IP address in the proper subnet mask. This pool of available IP address can change over time as addresses are assigned or abandoned. You can configure the internal IP address to stay the same by configuring static DHCP on the router or assign a static IP address on the device itself.

  7. IP Addressing Best Practices · Tan Duc Mai

    Use static IP address assignment for network infrastructure devices (e.g., servers, printers, WAPs, LAN gateway addresses on routers, and management addresses on network devices such as switches). ... For the allocation of IPv4 subnets, stick to the following best practices: Private addresses are used for internal networks. Allocate /24 subnets ...

  8. 10 IP Addressing Scheme Best Practices

    For larger networks, you may want to use a different private IP address range. The 10.0.0.0/8 range is often used for this purpose.2. Assign static IP addresses to servers and network devices. If you don't assign static IP addresses to your devices, then every time the device reboots it will be assigned a new IP address by the DHCP server.

  9. IP Address Management Best Practices

    White Papers. IP Address Management Best Practices. An IP addressing environment can quickly become a jungle (if it isn't one already). However, you can impose rules, conventions, policies, and an overall plan to make your IP addressing simple- easy to understand, administer and grow, highly secure, always available, and lightning-fast.

  10. Network Design

    Importance of IP Addressing for Network Design. One of the major concerns in the network design phase is ensuring that the IP addressing scheme is properly designed. This aspect should be carefully planned and an implementation strategy should exist for the structural, hierarchical, and contiguous allocation of IP address blocks.

  11. PDF 10 Key Best Practices

    10 Key Best Practices. White Paper. ess Management (IPAM)In an ever-evolving business environment, IT agility and eficiency have become of strategic interest to companies, in order to stay competitive and execute sustain. le long-term growth. IP address plans, DNS and DHCP services are network foundations playing a key role in mee.

  12. IP Addressing Design Considerations

    This section outlines overall best practices for IP addressing, including basic addressing, routing, filtering, and Network Address Translation (NAT). General Best Practices and Route Summarization. The basic best practices for IP addressing should be familiar to you. At a high level in your design, you first must decide whether the IP address ...

  13. Best practice for assigning private IP ranges?

    Unless your sites are less than one degree apart. We had two offices a few city blocks apart at one point, which are less than 0.02 degrees apart in terms of lat/lon ;-) - rmalayter. Mar 22, 2010 at 21:42. 2. If this is a concern (and it's a reasonable one), then use the third private IP range: 172. (16-31)../16.

  14. What is the best practice for assigning static IP addresses?

    In my scenario, I would be assigning IP addresses to security cameras. I am wondering if I should assign a static IP address from the device itself, or have a static entry on my DHCP server with their MAC addresses. I know both would work, but I was wondering if it is best practice to assign the IP from the devices, DHCP server, or both.

  15. Configuration Management: Best Practices White Paper

    Recommended IP address management standards reduce the opportunity for overlapping or duplicate subnets, non-summarization in the network, duplicate IP address device assignments, wasted IP address space, and unnecessary complexity. The first step to successful IP address management is understanding the IP address blocks used in the network.

  16. Static IP vs. dynamic IP addresses: What's the difference?

    Dynamic IP assignments are best for nonpermanent devices and those that don't often need to be found by other network nodes. Dynamic IP addresses offer the following advantages: The server does not make typographical errors. Duplicate IP address assignments are reduced. Changing the IP address configuration is quick and efficient.

  17. When to Use a Static IP Address

    When Static IP Addresses Are Used. Static IP addresses are necessary for devices that need constant access. For example, a static IP address is necessary if your computer is configured as a server, such as an FTP server or web server. If you want to ensure that people can always access your computer to download files, force the computer to use ...

  18. Number Resources

    We are responsible for global coordination of the Internet Protocol addressing systems, as well as the Autonomous System Numbers used for routing Internet traffic. Currently there are two types of Internet Protocol (IP) addresses in active use: IP version 4 (IPv4) and IP version 6 (IPv6). IPv4 was initially deployed on 1 January 1983 and is ...

  19. IP Address Lookup And Allocation Best Practices

    Discover the best practices for IP address allocation and different types of IP addresses. Learn about IP address lookup directions and more. ... Assigning an IP Address: The DHCP server listens to your device's request and gives it an IP address. This address is usually temporary (dynamic). It's like the cafe staff giving you a Wi-Fi code ...

  20. Cisco Catalyst 9800 Series Configuration Best Practices

    Assigning an IP address to the service port (SP) is optional but remember that the SP on the physical appliance belongs to the Management VRF, so an IP address has to be assigned accordingly. ... Again, the best practice is to assign it statically to the process; this can be done under the global parameter map (shown for the 9800-CL): The same ...

  21. Is there IT best practice for assigning pool of IP address to same type

    Let say I had this 10 server (physical and VM and HyperV) 50 Desktop 10 laptop 5 Printers 15 wifi router 5 switches 2 firewall 15 NVR / DVRr 5 HDD with network 10 attendance machine For Example, I assigned IP address 192.168..1 to 192.168..20 to servers Is there guide or best practice to assign series of IP address to devices of the same type? Thanks in advance

  22. IPv6 address space layout best practices

    A longer answer: Internet Engineering Task Force (IETF) - Best Current Practices: RFC 6177 and BCP 157 - "IPv6 Address Assignment to End Sites" clarifies that a one-size-fits-all recommendation of /48 is not nuanced enough for the broad range of end sites and is no longer recommended as a single default. 1.

  23. Assigning IP Address ranges

    21-40 = server devices. 41-60 = network peripherals (printers, etc.) 61-80 = reserved for special devices. 81-100 = static addressed workstations. 101-200 = standard DHCP pool of addresses. 201 ...