To accommodate increasingly dense technology environments, increasingly critical business applications, and increasingly stringent service level demands, data centers are typically engineered to deliver the highest affordable availability levels facility-wide. Under this monolithic design approach, the same levels of mechanical, electrical, and IT infrastructure are installed to support systems and applications regardless of their criticality or the business risk posed by unplanned downtime. High-redundancy designs are typically deployed to provide for all eventualities. The result, in many instances, is to unnecessarily drive up both upfront construction or retrofitting costs and ongoing operating expenses.
Software-defined architectures have made enterprises more application-centric. With application owners seeking public-cloud-like simplicity and flexibility in their own data centers, IT teams are under pressure to reduce the wait times to provision applications.
Legacy load balancing solutions force network architects and administrators to purchase new hardware, manually configure virtual services, and inefficiently overprovision these appliances. At the same time, new infrastructure choices are enabling applications to be re-architected from monolithic or n-tier constructs into autonomous microservices. These transformations are forcing organizations to rethink the load balancing strategies and application delivery controllers (ADCs) in their infrastructure.
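To make the load balancing strategies in question concrete, the sketch below contrasts two common algorithms an ADC applies when distributing requests across a pool of microservice instances: round robin and least connections. This is a minimal illustration only; the backend addresses and connection counts are hypothetical, not drawn from any particular product.

```python
from itertools import cycle

# Hypothetical backend pool for one virtual service (addresses are illustrative).
BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def round_robin(backends):
    """Rotate through backends in fixed order -- a common ADC default."""
    return cycle(backends)

def least_connections(active_conns):
    """Pick the backend currently serving the fewest active connections."""
    return min(active_conns, key=active_conns.get)

rr = round_robin(BACKENDS)
picks = [next(rr) for _ in range(4)]
print(picks)  # wraps back to the first backend on the fourth request

# Snapshot of active connections per backend (hypothetical values).
conns = {"10.0.0.1": 7, "10.0.0.2": 2, "10.0.0.3": 5}
print(least_connections(conns))  # the least-loaded backend: 10.0.0.2
```

Round robin ignores backend load, which is one reason static appliance-based deployments tend to be overprovisioned; load-aware policies such as least connections adapt to actual traffic instead.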