Service Assurance in the NFVI


Executive Summary
Key Points
Telco Requirements And Definitions
Key terminology for Service Availability in NFV / networking services context
Service Availability and Continuity Targets and their Impacts on Infrastructure services
Fault Management Cycle
Fault Management Cycle Timeline
Infrastructure Services Relative Criticality
Different infrastructure services can have different impacts on the reliability of the infrastructure as a whole. Service components such as network switches can have a disproportionately high impact on the reliability of the entire infrastructure due to their connective nature. Conversely, traditionally emphasized components such as compute nodes mostly affect the VMs / Containers running on them.
The following table provides a high level view of the categories of the services / entities in the NFVI and VIM sublayers of the system SW stack. This high level analysis is based on the expected risk, which is determined from the entities at risk (using guest VMs as a proxy) and the fault activation probability from the guest's perspective.
Service Availability and Upgrades
Upgrade Process
Container Upgrades
NFV Enhanced HA Architecture
Addressing Single Point of Failure
Fast Fault Detection and Notification Framework
Telemetry Collection
RT Fault Mgmt Agent (RFE x)
Background
RT Fault Mgmt Bus (RFE x)

Executive Summary

Telcos and Enterprises are moving to a cloud-based IT infrastructure to reduce cost by leveraging a shared execution environment built on commodity servers. Telcos have the additional burden of moving from dedicated hardware with a vertically integrated software stack to commodity hardware on which independent software applications must seamlessly interoperate.
The requirement for Highly Available (HA) applications and services does not change during this migration, so the target cloud infrastructure must provide equivalent mechanisms for High Availability. For example, Telcos will still require 5NINES availability for services, a property that Telco customers rely upon. In addition, enterprises often rely on applications where a single second of unavailability can incur millions of dollars in lost revenue.
For many large service providers and enterprises the public cloud is not an option due to cost or security concerns. A private cloud infrastructure provider that can deliver the level of high availability expected by these large customers stands to acquire a large and loyal following.

...

Telecommunications services are generally classified as part of the "critical infrastructure" by regulatory bodies and have various performance objectives set by regulators and/or standardization bodies, including requirements for outage reporting and penalties for non-compliance, which are service and/or region specific. Implicitly, the infrastructure used to provide these services (in the context of this document, the NFVI and VIM services) becomes part of the critical infrastructure and is subject to certain requirements in terms of availability, continuity, security, disaster recovery, etc. In addition to regulatory constraints, service outages cause direct (penalties and lost revenue) and indirect (customer churn, reputation, etc.) monetary losses for operators. While service criticality, outage impact and other parameters vary by service, the most critical services (e.g. regulated services and safety-relevant services such as 911 call handling) determine the availability and reliability related design targets for the underlying infrastructure.
Compared to the physical and/or software based network element designs of past telecommunication networks, the move to SDN/NFV controlled infrastructure and the general trend of "network softwarization" change both the business relationships between suppliers and operators and the implementation responsibilities for the associated reliability and availability (R&A) mechanisms. In the past, network elements were vertically integrated systems where the NE supplier was responsible for all aspects of system design, integration and testing, including the reliability and availability related mechanisms. While such systems commonly utilized commercial hardware and software components (including open source software such as Linux), the responsibility for the availability related metrics, from the operator's perspective, always rested with the system supplier. Where the system incorporated 3rd party hardware and/or software, any "flow-down" requirements and potential responsibilities for associated performance penalties for the sub-suppliers were established by mutual agreement between the system supplier (NE vendor) and its technology suppliers.
To some extent this model continues with the large network equipment providers (NEPs) in the context of the NFV system: the NEP takes an OpenStack and Linux distribution, "hardens" it to meet its "carrier grade" performance objectives, and sells the resulting system to operators. However, this model runs against the NFV objectives set by the operators, which include breaking the vertically integrated, vendor specific "silos" into e.g. separate hardware, infrastructure software and application software (VNF) vendor specific parts. In the current early adoption phase of NFV, the lack of standards or mutually agreed open source solutions leads the NEPs and/or leading operators to fill the associated gaps as they see fit. While the resulting systems may meet the functional and non-functional requirements, including the service availability and continuity related performance metrics and targets, they fail on interoperability: no two implementations are the same, and the associated interfaces and APIs differ, which makes each combination of system components (HW, NFVI, VIM, VNFM, NFVO, OSS/BSS, etc.) an integration exercise that someone needs to undertake. While at least some of the large NEP customers are working to enhance the VIM/NFVI level components, only some of them are open sourcing the associated work, and to differing degrees, while others keep the enhancements to themselves. The result is continuing fragmentation of the NFV system, which in the worst case is directly visible at the VNF-facing system interfaces needed to support interoperability, both in the execution environment interfaces (VNF to VIM) and in the interfaces between the VNF and the MANO system.
The key objective of the Service Assurance (SA) work is to engage with the NEPs, network operators, open source communities and projects to establish an open source based solution that addresses the availability and reliability related aspects of the decomposed system (currently focused on the NFVI and VIM parts of the ETSI reference architecture).
Traditionally, in the context of a specific network element design, the services and the associated service availability and service continuity objectives were specified at a high level, and the associated downtime targets were allocated to the hardware and software subsystems in the context of the specific implementation architecture and its constrained system configurations. The allocations of the performance objectives were typically determined by prescriptive modeling in the early stages of the design (using R&A models of the configuration based on common modeling practices such as reliability block diagrams (RBDs) and/or Markov reward models (MRMs)), followed by testing and/or field data driven validation of the models in later stages (a process called "descriptive modeling", which uses the same model but driven by the actual measured performance parameters). The same processes were also used to allocate the availability related performance objectives (such as un-availability, remediation times, fault coverage, etc.) between the parties involved in the realization of the system.
The same high level processes can be used to establish a baseline understanding of the relationships between the services provided by the applications and the infrastructure services that those application services depend on, as well as the associated service availability and continuity objectives. However, due to the diversity of application designs, services and service specific targets, as well as the variation in infrastructure implementation options, we are necessarily limited to general relationships and high level objectives rather than a full, application instance specific design and analysis. The exercise is still useful for establishing a generic understanding of the availability targets flowing down from the high level (application / VNF level) performance objectives. For simplicity, the following overview uses RBDs as the basis for establishing the high level availability relationships of VNFs in the context of an NFV system. The intent is not to provide a tutorial on availability modeling; see e.g. the NFV ISG REL-003 document for background material in the NFV context. It should also be noted that RBDs are not sufficient to fully model complex time-based behaviours (such as failure rates, recovery times and fault coverage), but unlike MRMs they are easy to construct, understand and compute, and are sufficient for establishing a basis for high level allocations based on structural relationships and dependencies between software and hardware elements.
A common numerical availability target associated with so called "carrier grade" systems, which NFV systems are also required to meet, is 5NINES availability or higher (some elements require higher availability targets); 5NINES is generally considered the low threshold for "carrier grade" availability. As an availability target is meaningless without specifying the availability of "what", this is generally taken to mean 99.999% availability of a specific network element and its associated service(s), i.e. network element availability across the population of network elements of the same type, translating to a DPM target of ~315 sec/NE/year. The target is sufficiently high that, at least when combined with the service continuity requirement, it implies the need for some level of redundancy (at least two components in e.g. an active-standby redundancy configuration). Since our hypothetical virtualized network element relies on at least the NFVI+VIM, as well as GVNFM+NFVO services, for its operation, we can draw the following high level series RBD that establishes these dependencies:

From the above, we can see that for the service to be available, all blocks need to be available. We can also see that to meet the service availability target of 5NINES, each of the components in series needs to be more available than 5NINES (as it is unreasonable to expect the NFVI+VIM+GVNFM combination to be 100% available, even with redundancy). Note that the availability of a series RBD is obtained by multiplying the availabilities of the individual components, or equivalently by adding the associated component DPMs. So if each of the blocks were 5NINES available, the combined availability would be .99999^3 = .99997, which translates to an unavailability of (1-A) * 31,536,000 sec/y = 946 sec/y, i.e. three times the target (equivalently, each block can only be allocated roughly one third of the total downtime budget). This high level process of target setting is referred to as availability allocation, and is generally done by allocating downtime in units of time rather than NINES (both are equivalent, but downtime is much easier to use as a performance objective).
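As a quick cross-check of the arithmetic above, the following minimal Python sketch reproduces the series-RBD numbers (three blocks in series, each at five nines, as in the .99999^3 example); it is only a restatement of the calculation in the text.

```python
# Minimal sketch of the series-RBD arithmetic used above.
# Three chained blocks, each five nines available, as in the .99999^3 example.
SECONDS_PER_YEAR = 31_536_000  # 365 days

def series_availability(availabilities):
    """Availability of blocks in series is the product of the block availabilities."""
    a = 1.0
    for block in availabilities:
        a *= block
    return a

blocks = [0.99999, 0.99999, 0.99999]
a_chain = series_availability(blocks)
downtime = (1 - a_chain) * SECONDS_PER_YEAR

print(f"chain availability = {a_chain:.5f}")      # ~0.99997
print(f"downtime           = {downtime:.0f} s/y") # ~946 s/y vs. the ~315 s/y 5NINES target
```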
To further expand the above example, we can incorporate more detail by drawing a more detailed RBD, as follows, to account for the parallel-series relationships of the VNF components and the services within each top level block:

If the combined availability of the VNF SW components, the embedded OS instance and all of the supporting node HW and SW components of a single node is three nines (0.999), the availability associated with a dual-redundant VNF instantiated in an anti-affinity configuration across two independent, homogeneous compute nodes N1 and N2 is 1-(1-.999)^2 = 0.999999, i.e. six nines for the pair. Since the pair depends on the VIM and other MANO functionality for the remediation operation (i.e. failover), those components are in series with it and their availability needs to be considered to determine the VNF availability. When we combine the VNF node-pair availability with the series elements (NFVI+VIM shared components and the rest of the MANO system), we get a total availability for the chain of .999999 (VNF) x .999999 x .999999 = .999997, which is well over 5NINES (based on an assumed 6NINES availability of the critical path functionality in the VIM and MANO elements). Note that as this is a simple RBD calculation, other aspects such as fault detection coverage, failover failure probability and failover time are not modeled (the implied assumptions are 100% coverage for service affecting faults, zero failover time and perfect failover, all of which are unrealistic in a real implementation), and the time spent in the associated processes adds directly to the downtime (Service Un-Availability). Therefore, this is the maximum availability that can be achieved. This is a key reason why the critical path of the fault management cycle (from fault detection to the end of the remediation phase) needs to be completed as quickly as feasible: even with redundancy in place, the service is down from the user's perspective until the remediation processes have completed.
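The sketch below reproduces the pair calculation and adds an illustrative term for non-zero remediation time, which the pure RBD model ignores; the assumed failure rate (4/year) and per-failure remediation time (5 s) are not from this document and are only there to show how remediation time eats directly into the downtime budget.

```python
# Sketch of the anti-affinity pair calculation from the text, plus the effect of
# non-zero remediation time (which the pure RBD model ignores).
# Illustrative assumptions: 4 node failures/year and 5 s remediation per failure.
SECONDS_PER_YEAR = 31_536_000

a_node = 0.999
a_pair = 1 - (1 - a_node) ** 2                 # 0.999999: either node can carry the service
a_series = 0.999999 * 0.999999                 # NFVI+VIM shared parts and rest of MANO
a_total_rbd = a_pair * a_series                # ~0.999997, the RBD-only (optimistic) ceiling

failures_per_year = 4                          # assumption, not from the text
remediation_s = 5                              # assumption: detection + notification + failover
extra_downtime = failures_per_year * remediation_s

downtime = (1 - a_total_rbd) * SECONDS_PER_YEAR + extra_downtime
print(f"RBD-only availability: {a_total_rbd:.6f}")
print(f"downtime incl. remediation: {downtime:.1f} s/y")
```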
From our very simple (i.e. simplest possible) high-availability workload configuration with only two VNF Component Instances, we can already see that to achieve 5NINES performance at the application level, we need better than 5NINES performance from the critical infrastructure services that the application depends on (such as networking, storage when used, and the API services used by the application VNF e.g. to perform remediation and recovery actions). In reality, VNFs have internal structure, are composed of multiple VNFCIs with potentially differing redundancy configurations, and their corresponding RBDs look significantly more complex than the previous example. It is not uncommon for a single complex VNF to have 5-10 stages of components in series in the associated RBD, and the move towards microservices architectures that operators are pushing vendors towards will increase this even further. Furthermore, services are composed of chains of NEs (physical and/or virtual), which increases the number of elements that the service depends on. All this means that for equivalent Service Availability, the availability of the chained elements (and of the infrastructure elements they depend on) must be proportionally higher. In addition, from the OpenStack cloud shared services perspective, the realization of many network services requires components located in multiple clouds even in the single operator scenario; a typical end to end service configuration would have at least three separate clouds in series (access/edge cloud 1, core cloud, and access/edge cloud 2), which implies three independent OpenStack service regions in series within the overall chain. The following picture, taken from ETSI NFV REL-003, is an illustrative example of the availability effect from the E2E service availability perspective, where the blue blocks represent the availability of service chains within specific cloud instances.


To illustrate the cumulative effect of homogeneous chained elements on Service Availability, the following table provides an overview of the expected availability expressed in number of NINES (based on an RBD-only calculation, which has the limitations outlined before, i.e. the associated performance statement is optimistic). As a target setting, the NFV system should be able to support applications (VNFs, VFs) / services at least in the yellow and light green zones.
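Since the referenced table is not reproduced in this export, the following sketch regenerates the kind of figures it contained: chain availability, expressed in NINES, for 1-10 homogeneous chained elements at per-element availabilities of three to seven nines (RBD-only, hence optimistic). The chosen ranges are assumptions about what the original table covered.

```python
# Regenerates the kind of data the referenced table/graph contained: chain availability
# (in NINES) for k homogeneous elements in series, RBD-only and therefore optimistic.
import math

element_nines = [3, 4, 5, 6, 7]       # per-element availability levels (assumed range)
chain_lengths = range(1, 11)          # 1..10 chained elements (assumed range)

print("k   " + "".join(f"{n}N     " for n in element_nines))
for k in chain_lengths:
    row = [f"{k:<4d}"]
    for n in element_nines:
        a_elem = 1 - 10 ** (-n)
        a_chain = a_elem ** k                       # series RBD: product of availabilities
        row.append(f"{-math.log10(1 - a_chain):5.2f}  ")
    print("".join(row))
```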

The same table as a (linear) graph illustrates the effect of the individual chain element availability (also keep in mind that the chain availability cannot be higher than the availability of the lowest performing element in the chain):

Finally, the equivalent in DPMs (seconds/NE/y or seconds/Service/y, depending on the scope of the top level availability specification):

The top level allocation of the DPM budget, based on assigning 10% of the total NE and/or service downtime to each of the HW and infrastructure SW categories (typical of similar systems from past efforts, and also the stated performance of competitors), is as follows:

High Level Availability Allocation Calculation; all causes

| Atarget | DPM-Tot (s/y) | %HW | %ISW (≤VIM) | %ISW (>VIM) | %APP (SW) | %OPER | %TOT | HW DPM | ISW (≤VIM) DPM | ISW (>VIM) DPM | APPSW DPM | OPER DPM | |
|---------|---------------|-----|-------------|-------------|-----------|-------|------|--------|----------------|----------------|-----------|----------|---|
| 0.99999 | 315.36 | 10 | 10 | 10 | 50 | 20 | 100 | 31.54 | 31.54 | 31.54 | 157.68 | 63.07 | DPM (sec/NE/y) |
| | | | | | | | | 0.999999 | 0.999999 | 0.999999 | 0.999995 | 0.999998 | A |
| 5 | | | | | | | | 6.00 | 6.00 | 6.00 | 5.30 | 5.70 | #NINES |
| 0.999999 | 31.54 | 10 | 10 | 10 | 50 | 20 | 100 | 3.15 | 3.15 | 3.15 | 15.77 | 6.31 | DPM (sec/NE/y) |
| | | | | | | | | 0.9999999 | 0.9999999 | 0.9999999 | 0.9999995 | 0.9999998 | A |
| 6 | | | | | | | | 7.00 | 7.00 | 7.00 | 6.30 | 6.70 | #NINES |
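The allocation arithmetic behind the table can be sketched as follows; the percentages and the 5NINES target are taken from the first table row, everything else is straightforward unit conversion.

```python
# Sketch of the allocation arithmetic behind the table above: total downtime budget from the
# availability target, split across the stated cause categories.
import math

SECONDS_PER_YEAR = 31_536_000
a_target = 0.99999
split_pct = {"HW": 10, "ISW (<=VIM)": 10, "ISW (>VIM)": 10, "APP SW": 50, "OPER": 20}

dpm_total = (1 - a_target) * SECONDS_PER_YEAR            # ~315.36 s/NE/y
for category, pct in split_pct.items():
    dpm = dpm_total * pct / 100
    nines = -math.log10(dpm / SECONDS_PER_YEAR)          # equivalent availability in NINES
    print(f"{category:<12s} {dpm:7.2f} s/y  ~{nines:.2f} NINES")
```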


As a sanity check for the above targets, we calculated the top level un-availability attributions based on the reported service outage data from the European Community (ENISA) summary report from 2015, as follows:

Weighted allocation check (ENISA 2015 dataset)

| Category | %tot (all incidents) | Iavg (ksub) | MTTR (hrs) | RPN | RPN% |
|----------|----------------------|-------------|------------|-----|------|
| System Failure | 68.8 | 1600 | 25 | 27520 | 80.61 |
| Human Error | 21.7 | 1400 | 9 | 2734 | 8.01 |
| Malicious Act | 8 | 1000 | 47 | 3760 | 11.01 |
| Natural Phenom | 1.5 | 200 | 42 | 126 | 0.37 |
| Total | 100 | | | 34140 | 100 |


Approximately 70% of all causes are attributable to "system failures", which include all network elements and their HW and SW attributable failures. When weighted for impact (average thousands of subscribers affected by the outage x impact duration), this translates to approximately 80% of total weighted downtime, with the remaining 20% of weighted downtime associated with operator attributable / external causes. This aligns with our top level allocation (10+10+10+50% = 80% for network elements, and the remaining 20% for external, operator attributable causes). It is also well aligned with past availability / downtime performance data from real operator networks, as well as with the high level downtime budget allocations used for engineering purposes.
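The weighting can be reproduced with the short sketch below; the formula (RPN = incident share x average impacted subscribers x MTTR, scaled to match the table) is inferred from the table columns rather than stated in the source report.

```python
# Sketch of the weighting used in the ENISA check above. Assumption (inferred from the table,
# not stated in the source): RPN = incident share (%) x avg impacted subscribers (k) x MTTR (h),
# scaled by 1/100 to match the table values, then normalised to percentages.
rows = {
    # category:        (%tot, Iavg_ksub, MTTR_hrs)
    "System Failure":  (68.8, 1600, 25),
    "Human Error":     (21.7, 1400, 9),
    "Malicious Act":   (8.0,  1000, 47),
    "Natural Phenom":  (1.5,  200,  42),
}

rpn = {cat: pct * subs * mttr / 100 for cat, (pct, subs, mttr) in rows.items()}
total = sum(rpn.values())
for cat, value in rpn.items():
    print(f"{cat:<15s} RPN={value:7.0f}  ({100 * value / total:5.2f}% of weighted downtime)")
```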
The failover time of carrier-grade network services is often characterized by the 50ms target. To determine if and when this applies to us, it is necessary to understand its origin as well as its applicability in the context of cloud services. The 50ms target traces back to the redundancy mechanisms in early TDM transport systems, and has firmly established itself as a target for the associated replacement techniques over time. It is a commonly used target in specifications of ring topologies and optical transport networks, and in the subsequent mechanisms associated with replacing such topologies with mesh topologies, e.g. in the context of IP/MPLS based transport services. It should be noted that this applies to distributed networks, and is therefore especially hard to meet in large topologies, where speed-of-light propagation in fiber becomes relevant when propagating fault indications between the associated elements. Generally speaking, the common prerequisites for meeting this tough target include: very fast fault detection times (on the order of tens of milliseconds or less), pre-computed backup paths for each specific failure mode (e.g. link / path failure), and resources pre-allocated and pre-configured to handle the failed-over traffic load once the fault is detected. To overcome the speed-of-light limitations, the network can be segmented into smaller recovery regions that can individually satisfy the requirement; assuming that each region, irrespective of its implementation, can meet the target performance, the Network Service can also meet it, at least in the context of independent, single failures.
At first glance, it might appear that this is not relevant to NFVI clouds: after all, we will not be taking over e.g. SONET rings, which are physical systems that cannot simply be eliminated and moved to the "cloud". However, routers and other packet processing entities are targets for moving to cloud based implementations, and even elements that are not considered to be directly on the "transport service" path may need to interact or even directly integrate with transport services; in the context of a service with fast restoration requirements, whatever is on the end-to-end path may be subjected to those requirements. In the traditional, physical "box" design, the key element on the path of the service realization is the switching subsystem of the box. To keep the long failover times of physical switch elements from degrading the overall service switchover performance, those switching components must exhibit failover time properties that are at least as good as the network level mechanism; they are usually engineered to much higher standards, which is relatively simple to do within the constrained topology of an individual box (sub-10 ms fabric failover time designs are not uncommon in physical network elements). When we decompose such functions and move them to the cloud, the NFVI fabric (underlay and overlay, depending on how the applications consume these resources) replaces the fabric that used to be embedded within the box.
Therefore, to achieve similar performance, the elements of the fabric, when subjected to equivalent performance constraints and the associated prerequisites (such as the allocation of precomputed backup paths mentioned above), should be able to meet a similar order of failover time requirement as the fabric implementation in physical systems. Otherwise, even if we are surrounded by an access network and a metro/core transport network that meet the 50ms targets, but our fabric and the elements in the path have, say, a 1 second failover time, it would be impossible to provide a network service that consistently meets the 50ms target: every time the NFVI network instance on the path failed, the network service user would experience a Service Outage of 1 sec (as with Service Availability, the lowest performing, "weakest link" element in the chain determines the achievable performance).
Some examples of associated failover time requirements that demand this level of performance include the following (many more are available; if needed we can provide a longer list of references discussing how this is to be accomplished in NFV/SDN controlled NFVI network implementations):
1.) ETSI REL-001 states that the goal is to achieve Service Levels that are equivalent to or better than the service levels of physical systems, and especially the REL-001 problem statement in section 4.1: "Whereas in the IT domain outages lasting seconds are tolerable and the service user typically initiates retries, in the telecom domain there is an underlying service expectation that outages will be below recognizable level (i.e. milliseconds), and service remediation (recovery) is performed automatically."
2.) ETSI IFA requirements to support anti-affinity policies in the context of allocating virtualized network resources (a prerequisite for supporting redundancy mechanisms based on path diversity, i.e. anti-affinitized backup paths).
3.) Examples of SLAs for end customer services, which require the same order of magnitude failover performance (60-100ms, depending on the service); note that as SLAs are essentially components of bilateral contracts, they are seldom made public. One public example is the US Government's common SLA requirements, which include Service Availability and various KPIs, including failover times for select services: GSA Networx SLAs.
4.) Failover time requirements in the 5G Crosshaul project: 4 out of 5 use cases state a "time of recovery after failure" of 50 ms or less, while the remaining one has a 100ms target: 5G CrossHaul.
5.) The aggregated responses to the Red Hat commissioned Heavy Reading NFV operator survey of 128 service providers, "Telco Requirements for NFVI", November 2016, which had a specific question on failover time. The associated graph of the response distribution is replicated below for convenience; 40% of respondents asked for 50ms. In addition, the associated clarification text on the responses stated: "Those already executing their NFV strategies were much more likely than those who are not to say that recovery time should be within 50 ms"
[Figure: distribution of survey responses on required failover time]
To summarize, the high level engineering targets derived from the application level requirements, which we use as service independent availability metrics for the NFVI infrastructure and as target values for the parameters to be measured in testing, are:

...


The following diagram shows the key events of the FM cycle timeline for a generic failure. Up until tfailure, the failed component is operating normally, with a standby component providing redundancy. In this example, the timeline after the remediation phase is shown separately for pooled and non-pooled resources. In the pooled resource scenario, a group of resources (VMs, Containers, etc.) can be reallocated from the cloud resource pool to rapidly restore the intended redundancy configuration. In the non-pooled scenario, a new resource needs to be physically provided before it can be made available to serve as a standby resource, which adds the physical repair time to the aggregate timeline. The advantage of pooled resources is that the recovery time (time to restore redundancy) is substantially shorter, reducing the exposure to secondary-failure induced outages.
At tfailure, the primary component fails and the detection phase (TDET) begins. After the failure has been detected, the failure event must be delivered to the relevant components to either perform a recovery operation or provide an external notification (TNOT). Finally, remediation (TREM) activates the standby resource and restores the operation of the service. The target time for this series of operations is TDET + TNOT + TREM < 50 ms. TREM often requires the most significant time to complete; therefore, TDET + TNOT must be minimized as much as possible. After remediation is complete, the service has been restored, but the failed component is no longer redundant, and a failure of the active component could result in a substantial service outage. The recovery phase addresses this by bringing a standby component online and configuring it. The recovery phase can start as early as the detection time if recovery operations can occur in parallel with remediation. After recovery, the component is again redundant.
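A trivial sketch of that timing budget follows; the 50 ms total comes from the text, while the individual phase durations are illustrative assumptions only.

```python
# Sketch of the FM-cycle timing budget described above. The 50 ms total comes from the text;
# the individual phase times are illustrative assumptions.
BUDGET_MS = 50.0

def check_budget(t_det_ms, t_not_ms, t_rem_ms):
    """Return (total outage time, remaining budget) for one fault-management cycle."""
    total = t_det_ms + t_not_ms + t_rem_ms
    return total, BUDGET_MS - total

# Example: 10 ms detection, 5 ms notification, 30 ms remediation (failover).
total, slack = check_budget(t_det_ms=10, t_not_ms=5, t_rem_ms=30)
print(f"T_UA = {total:.1f} ms, slack = {slack:.1f} ms, within target: {total < BUDGET_MS}")

# If remediation alone takes 45 ms, detection + notification must fit in the remaining 5 ms.
print(f"detection+notification budget when T_REM = 45 ms: {BUDGET_MS - 45:.1f} ms")
```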

[Figure: FM cycle timeline. States: Up (redundant) → Down (remediation) → Up (recovering) → Up (repair pending / redundant). Events: failure event (1st indication, FM cycle start; potential outage or degradation, TUA = TDET + TREM), service recovered, redundancy restored. Second-failure exposure: TREC ~2 min MTTREC for pooled resources (uncoupled, deferred repair); TREP ~4+ hrs MTTREP for non-pooled resources (coupled, critical repair; redundancy restored only after repair completes).]

...

The intent of this table is to prioritize the improvement efforts for the services / subsystems that represent the highest (guest) service risk first, based on the expected impact. The next steps will involve mapping all specific individual services to these categories, as well as establishing target availability (or unavailability derived remediation times) for each category. Generally, it is expected that the lowest unavailability targets are at the lowest levels of the dependency stack (which also have the highest priorities).

 

| Category | Component | Impact Potential (nodes) | Impact Potential (VMs) | Failure Activation (for active node/service failure) | Pri |
|----------|-----------|--------------------------|------------------------|-------------------------------------------------------|-----|
| CRITICAL - Large & Immediate Impact | Fabric - spine | ~#nodes/#spines | ~#VMs/#spines | Immediate for ea. serv. user | 1 |
| | Fabric - leaf | ~#nodes/#leafs | ~#VMs/#leafs | Immediate for ea. serv. user | 2 |
| | Network node | ~#nodes/#nnodes | <#VMs/#nnodes (note 1) | Immediate for ea. serv. user | 3 |
| | Storage node | ~#nodes/#snodes | <#VMs/#snodes (note 2) | Immediate for ea. serv. user | 4 |
| MAJOR - Large & Conditional Impact | Control node | #nodes | #vms | Secondary activation, all hosted services impacted | 5 |
| | Control node, specific service | #nodes | #vms | Secondary activation, only specific service impacted | 6 |
| CRITICAL - Limited Immediate Impact | Compute node | 1 | #vms/node | Immediate | 7 |
| | Compute node critical services | 1 | #vms/node | Immediate (compute, network, storage "critical path") | 8 |
| MAJOR - Limited, Conditional Impact | Compute node non-critical services | 1 | #vms/node | Secondary (for associated control/mgmt services) | 9 |
| | Compute node VM specific SW | (N/A) | 1 | Immediate for associated VM (example: QEMU) | 10 |
| MINOR - No direct service impact | Supplementary Services | #nodes | #vms | No direct activation mechanism that impacts guest availability state (e.g. GUI/CLI services not used by automated processes) | 11 |

  1. For network "GW" nodes and other infrastructure "in-line" services (e.g. LBaaS), the assumption is that not all VMs are directly (or at all) using these services.
  2. The assumption is that not all VMs are directly/continuously using shared storage services.

...


The downtime targets cover all causes, including failures (HW failures and SW defect related failures), updates/upgrades, etc., which means that no single cause (or category of causes) can "claim" the whole un-availability "budget". Since update/upgrade operations are performed on demand, their performance is expected to be better than failure related performance, because the system can take actions to reconfigure itself to minimize the impact (e.g. by gracefully removing the entity subject to the update from service rather than performing an asynchronous remediation sequence, as is the case for failure related actions). The target update/upgrade frequency (in terms of operations expected per year) linearly decreases the individual update downtime allocation (e.g. if you expect a DevOps environment with monthly updates, then each update can take 1/12th of the allocated yearly downtime).
Proposed yearly unavailability targets for the update/upgrade related outages per category, in HA configuration:

| Category / Description | Target Time (Max. Service Un-Availability DPM) |
|------------------------|------------------------------------------------|
| Critical guest impact services | Less than 3 seconds/year |
| Major guest impact services | Less than 15 seconds/year |
| Minor guest impact services | Less than 5 minutes/year |
| Maximum time operating with degraded/no redundancy due to upgrades | 5 minutes/year for all nodes/services, except 30 minutes/year for nodes/services that require database / other persistent storage migrations during the update process |
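Combining these targets with the frequency scaling rule stated earlier gives a per-update allowance; the monthly cadence below is the DevOps example from the text, and the per-category budgets are the table values.

```python
# Sketch of the frequency-scaling rule stated above: the yearly upgrade downtime budget
# divided by the expected number of update operations gives the per-update allowance.
yearly_budget_s = {        # from the table above
    "critical guest impact": 3,
    "major guest impact": 15,
    "minor guest impact": 5 * 60,
}
updates_per_year = 12      # example from the text: a DevOps environment with monthly updates

for category, budget in yearly_budget_s.items():
    per_update = budget / updates_per_year
    print(f"{category:<22s} {per_update:6.2f} s of service unavailability per update")
```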


The following table maps OpenStack and infrastructure services to the appropriate risk level. Risk is broken into categories of runtime impact and upgrade impact. An OpenStack service's runtime functionality can have varying degrees of impact on the availability of the infrastructure. For example, if the runtime services of the Ceph backend for Cinder fail, user services that depend upon persistent storage may fail immediately; however, if the Cinder API fails, user services might see a degradation in functionality if new volumes cannot be created, but may continue to function. As another example, if the ability of Nova to create new VMs is lost, user services might be degraded in that they cannot scale up, while the loss of VM migration would affect the infrastructure's ability to react to node failures.
The intersection of the cloud philosophy of "fail and retry" and the more traditional telco model of "always available" needs to be explored. In any case, OpenStack services themselves need to be resilient: a service may fail, but it should not leave any permanent errors in the infrastructure, such as orphaned processes or memory leaks. For example, it may be acceptable for a nova boot command to fail before completion if a control plane host fails; however, that failure should not leave any VMs or networks that need to be cleaned up manually. If an OpenStack service cannot be made tolerant of control plane failure, the service's behaviour during that failure should be documented. In any event, OpenStack services should be designed to report their failures as quickly as possible. For example, if a nova boot operation fails due to a control plane host failure, the user service should be notified of the failure as quickly as possible.

| Service | Runtime Impact | Upgrade Impact | API Impact |
|---------|----------------|----------------|------------|
| Aodh | Major | Major | |
| Ceilometer | Major | Major | |
| Nova | Major | Major | |
| Ceph Monitor | Critical | Critical | |
| Cinder-Volume | Critical | Critical | |
| Cinder-Backup | Major | Minor | |
| Glance | Minor | Minor | |
| Gnocchi | Major | Major | |
| Heat | Major | Major | |
| Horizon | Minor | Minor | |
| Ironic (Overcloud) | Major | Major | |
| Ironic (Undercloud) | N/A | Critical | |
| Keystone | Major | Major | |
| Manila | Major | Major | Is this for NFV? How widely used in general? How is it used? |
| Neutron API | Critical | Critical | Failure recovery mechanisms may rely on network changes. |
| Sahara | Minor | Minor | |
| Switch API | Critical | Critical | |
| Neutron-dhcp-agent | Major | Major | VMs with DHCP addresses may reboot at any time. However, critical user services should have HA mechanisms. |
| Neutron-l3-agent | Critical | Critical | The L3 agent provides data path functionality for applications that require SNAT functionality. |
| Neutron-metadata-agent | Major | Major | VMs may reboot during failure. However, critical user services should have HA mechanisms. |
| Neutron-ovs-agent | Critical | Critical | Failure recovery mechanisms may rely on network changes. |
| SDN Controller (ODL) | Critical | Critical | If in control path... |
| LBaaS-API | Major | Major | |
| LBaaS-Service | Critical | Critical | Our responsibility for HAProxy version. |
| Swift | Critical | Critical | |
| Manila-share | Major | Major | |
| MongoDB | Major | Major | If used; see Gnocchi. |
| HAProxy | Critical | Critical | |
| Galera (MariaDB) | Critical | Critical | |
| RabbitMQ | Critical | Critical | |
| Redis | Critical | Critical | |
| Virtual IPs | Critical | Critical | |
| memcached | Critical | Critical | |


Upgrade Process

...

  1. IPMI / DRAC connections should be redundant and connected to separate switches.
  2. Bonding / LAG
    1. OpenStack networks (Storage, Storage Management, Management, Internal API, Tenant (non-SRIOV) ) should be connected to multiple switches using the switch vendor's multi-chassis bonding mechanism (MLAG, VLT, CLAG, etc…).
    2. Each SR-IOV network should connect to multiple switches. For the initial version of the architecture, active/standby bonding should be used inside the VM with a bond member from each switch connection.
  3. A Leaf-Spine network architecture should be employed. Director should be enhanced to accommodate leaf-spine.
  4. Installation should include automatic discovery of networking elements and their connectivity, with an emphasis on verifying the required HA attributes (a minimal verification sketch follows below).
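As a rough illustration of the verification called for in item 4, the sketch below checks a discovered node-to-switch connectivity map (e.g. derived from LLDP) for bonds whose members all land on the same switch. The data structure and function are hypothetical, not an existing Director feature.

```python
# Illustrative sketch (not an existing Director/installer feature) of the HA verification idea in
# item 4: given discovered connectivity (e.g. from LLDP), confirm each bond spans >= 2 switches.

# Assumed input: per node, per bond, the leaf switch each member NIC is cabled to.
discovered = {
    "compute-0": {"bond0": {"eno1": "leaf-1", "eno2": "leaf-2"}},
    "compute-1": {"bond0": {"eno1": "leaf-1", "eno2": "leaf-1"}},   # violation: single switch
}

def verify_bond_redundancy(topology):
    """Return (node, bond, switches) tuples for bonds without switch-level redundancy."""
    violations = []
    for node, bonds in topology.items():
        for bond, members in bonds.items():
            switches = set(members.values())
            if len(switches) < 2:
                violations.append((node, bond, sorted(switches)))
    return violations

for node, bond, switches in verify_bond_redundancy(discovered):
    print(f"{node}/{bond}: all members land on {switches}; no switch-level redundancy")
```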

...


The diagram below shows an idealized fault and telemetry system. It illustrates the separation of time series data (TSD) from event data; this separation allows the event system to operate with much lower latency than the TSD system.
The TSD system samples data at regular intervals and transports the sampled data to storage for further (offline) processing. An ideal sampling system allows the local sample rate (at the Adapter/Sampler) to be higher than the transport rate, so that reactions to sampled data can work at a higher resolution than the data that is transported for post-processing. The higher resolution allows the Sample Processor to react quickly to local events based upon thresholds, averages, etc. Events generated by the Sample Processor or Collector can be immediately passed to the separate event path. TSD data is then formatted and serialized for transport across the TSD Transport. The TSD transport bus should be flexible enough that multiple consumers and producers can exist on the bus with a common data object model, so that data is easily shared from all producers to all consumers regardless of application.
For the event pipeline, EProducers monitor the system by attaching to interrupt-based mechanisms in the software platform, in contrast to the sampling mechanisms that use polling. For example, to monitor a network interface you can either poll at a regular interval or attach a listener to a system callback mechanism. To achieve < 10 ms detection of events at the network interface, you would need to poll the system faster than every 5 ms. The CPU overhead associated with such poll rates is significant (see Evaluation of Collectd for event processing, http:///….) when taken in aggregate across all the subsystems that would need to be polled at that rate.

Ideal Event / Telemetry System, within the context of a single Node instance
The focus for this epic will be on the event pipeline with telemetry work mainly focused on TSD transport using a common bus and object model. It is a goal that the same transport bus be used for both events and telemetry if possible.
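To make the polling-versus-interrupt contrast concrete, here is a minimal, Linux-only sketch of an event-driven, EProducer-style listener that blocks on rtnetlink link-state notifications instead of polling interface state on a timer. It is only an illustration of the mechanism (message parsing is omitted) and is not the agent proposed in this document.

```python
# Minimal, Linux-only sketch of event-driven link monitoring via rtnetlink, as opposed to
# polling /sys/class/net on a timer. Illustration only; not the proposed RT Fault Mgmt Agent.
import socket
import select
import time

RTMGRP_LINK = 1  # rtnetlink multicast group for link up/down events

s = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, socket.NETLINK_ROUTE)
s.bind((0, RTMGRP_LINK))          # subscribe to link state change notifications

poller = select.poll()
poller.register(s, select.POLLIN)

while True:
    # Block until the kernel pushes a link event; no periodic polling cost.
    events = poller.poll()
    t_detect = time.monotonic()
    for fd, _ in events:
        msg = s.recv(65535)       # raw rtnetlink message; parsing omitted in this sketch
        print(f"link event received at {t_detect:.6f}, {len(msg)} bytes")
```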

Telemetry Collection

...


As mentioned previously, part of the work will be determining whether collectd is the right choice for the local agent.

RT Fault Mgmt Bus (RFE x)

...