Why ACI:
Scalable
Security (follows a whitelist model: traffic is denied unless a policy explicitly permits it)
No more spanning tree in the core
Device replacement is easy and fast
Components:
Spine switches
Leaf switches
APICs (at least 3)
Clos Design / Architecture
A Clos design connects devices between layers (every leaf connects to every spine) but not within a layer: there are no leaf-to-leaf or spine-to-spine links.
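To make the rule concrete, here is a minimal Python sketch of the Clos wiring check, assuming a simple role map and hypothetical device names (illustrative only, not an ACI API):

# Clos wiring rule: links are allowed only between layers
# (leaf <-> spine), never within a layer. Device names are hypothetical.
def link_allowed(dev_a, dev_b, roles):
    # A link is valid in a Clos fabric only if it crosses layers.
    return roles[dev_a] != roles[dev_b]

roles = {"spine1": "spine", "spine2": "spine", "leaf1": "leaf", "leaf2": "leaf"}
print(link_allowed("leaf1", "spine1", roles))  # True  - inter-layer link is valid
print(link_allowed("leaf1", "leaf2", roles))   # False - intra-layer link is not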
APIC
Application Policy Infrastructure Controller - the brain of the fabric
Works as a cluster of at least 3 controllers. The cluster is sized in proportion to the needs of the ACI deployment, based on transaction-rate requirements. Any controller in the cluster can serve any user for any operation. *
Config is pushed from APIC to all the devices
Doesn't participate in the data plane
Connected to leaf switches only
APICs are available as physical or virtual appliances
Physical APICs are 1-RU Cisco UCS C-series servers with the APIC software installed, and come in two sizes: M (medium) and L (large)
There are 3 generations of APIC UCS servers:
1st generation = Cisco UCS C220 M3 servers
2nd generation = Cisco UCS C220 M4 servers
3rd generation = Cisco UCS C220 M5 servers
* Number of APIC nodes in a cluster
3 node cluster = up to 80 leaf switches
4 node cluster = up to 200 leaf switches
5 or 6 node cluster = up to 400 leaf switches
7 node cluster = 400-500 leaf switches
For deployments that require more leaf switches than the above, a Multi-Site architecture using several APIC clusters is advisable.
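As a quick sanity check, the sizing list above can be expressed as a small Python lookup (figures taken from this note; always confirm against the official scalability guide):

# APIC cluster size -> approximate maximum leaf count, per the list above.
MAX_LEAVES = {3: 80, 4: 200, 5: 400, 6: 400, 7: 500}

def min_cluster_for(leaf_count):
    # Smallest APIC cluster size that covers the given number of leafs.
    for size in sorted(MAX_LEAVES):
        if leaf_count <= MAX_LEAVES[size]:
            return size
    raise ValueError("beyond single-fabric scale; consider Multi-Site")

print(min_cluster_for(150))  # -> 4 (a 4-node cluster covers up to 200 leafs)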
The database is distributed across the APICs in "shards". A shard is a unit of data (a subset of the database, or a group of database rows) distributed across the nodes of the cluster to enable parallel processing and to protect the data in the event of a failure. Each shard has exactly 3 copies (replicas), distributed across the cluster according to the shard layout defined for each cluster size.
See this link for further information on sharding in Cisco APIC: https://community.cisco.com/t5/application-centric-infrastructure/unraveling-the-concept-of-database-sharding-in-cisco-aci-apics/td-p/4972877
Writes are possible only when more than half of the APICs in the cluster are active.
For example, with 3 controllers, at least 2 APICs must be available at all times to perform write operations. If 2 APICs go down, the remaining APIC falls back to read-only, because sharding requires at least 2 of the 3 copies of a given data set to be present at any time (this is called quorum).
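The quorum rule is simple majority math; a small Python sketch of it (illustrative only, not actual APIC code):

# Writes require a strict majority of APICs to be up. With 3 replicas
# per shard, losing 2 of 3 APICs leaves a single copy, so the cluster
# falls back to read-only.
def cluster_writable(cluster_size, apics_up):
    # True if a strict majority of the cluster is still available.
    return apics_up > cluster_size // 2

for up in (3, 2, 1):
    mode = "read-write" if cluster_writable(3, up) else "read-only"
    print("3-node cluster,", up, "APIC(s) up ->", mode)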
Spine Switches
Used to exchange routing information between leaf switches using the IS-IS protocol
Used to provide endpoint reachability information to the leafs via COOP (Council of Oracle Protocol)
Work as route reflectors for all the leafs and distribute external routes into the fabric using MP-BGP
Leaf Switches
Servers and external devices such as switches and routers connect to leaf switches
APICs are connected to leaf switches only
All security policies are enforced on the leaf switches
Types of Leafs
Border leaf - connects the fabric to external domains (e.g. external routers/WAN)
Compute leaf - connects to servers
Service leaf - connects to service devices such as firewalls (FW) and load balancers (LB)
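A quick way to see these roles (controller / spine / leaf) on a live fabric is the APIC REST API. The Python sketch below uses the standard aaaLogin endpoint and the fabricNode class; the hostname and credentials are placeholders, and verify=False is for lab use only:

# Log in to the APIC REST API and list fabric nodes with their roles.
import requests

APIC = "https://apic.example.com"  # placeholder APIC address
session = requests.Session()
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(APIC + "/api/aaaLogin.json", json=login, verify=False)

resp = session.get(APIC + "/api/node/class/fabricNode.json", verify=False)
for obj in resp.json()["imdata"]:
    attrs = obj["fabricNode"]["attributes"]
    print(attrs["name"], attrs["role"])  # e.g. "leaf101 leaf"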
Hardware overview
ACI runs on Nexus 9K switches
Both modular chassis switches (such as the Nexus 9500) and fixed switches (such as the Nexus 9300) are available.
A 9K switch can be either in NX-OS mode or ACI mode but not both at the same time.
See the Cisco ACI scalability guide for switch specifications
Nexus 93180YC-FX Leaf switch
54 ports
last 6 ports - fabric ports which connect to spine switches only
Fabric ports can be changed to downlink ports
All spine switch ports operate in "routed" mode. On a leaf, the fabric ports are in "routed" mode, while the remaining downlink ports are in "trunk" mode.
Cisco IMC (CIMC) ports provide out-of-band access to the APIC server hardware itself.
Command to check the bonded interfaces from the APIC CLI: cat /proc/net/bonding/bond0
Cisco ACI Multi-Pod solution = a stretched fabric
It connects multiple ACI pods via an Inter-Pod Network (IPN)
Only OSPF is supported between the spine switches and the IPN devices
Though each pod has its own spine and leaf switches, all the pods in a Multi-Pod setup share a single APIC cluster
For the control plane, the Multi-Pod solution runs MP-BGP EVPN over the IPN between the spine switches of each pod; inter-pod traffic in the data plane is carried with VXLAN encapsulation
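To confirm how many pods the single cluster manages, the standard fabricPod class can be queried the same way; this sketch reuses the authenticated session and APIC address from the fabricNode example above:

# List the pods managed by the (single) APIC cluster in a Multi-Pod fabric.
resp = session.get(APIC + "/api/node/class/fabricPod.json", verify=False)
for obj in resp.json()["imdata"]:
    print("pod id:", obj["fabricPod"]["attributes"]["id"])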