...
| Parameter | Sub-Category-1 | Sub-Category-2 | Description | Example Value |
|---|---|---|---|---|
| physical_provisioner | | | | |
| | deployment_strategy | | Name of the strategy to use. Users may use one defined in airshipit/treasuremap/global/deployment; see below. | deployment-strategy |
| | deploy_interval | | Delay in seconds between checks for progress of the step that deploys servers | 30 |
| | deploy_timeout | | Maximum seconds allowed for the step that deploys all servers | 3600 |
| | destroy_interval | | Delay in seconds between checks for progress of destroying hardware nodes | 30 |
| | destroy_timeout | | Maximum seconds allowed for destroying hardware nodes | 900 |
| | join_wait | | Number of seconds allowed for a node to join the Kubernetes cluster | 0 |
| | prepare_node_interval | | Delay in seconds between checks for progress of preparing nodes | 30 |
| | prepare_node_timeout | | Maximum seconds allowed for preparing nodes | 1800 |
| | prepare_site_interval | | Delay in seconds between checks for progress of preparing the site | 10 |
| | prepare_site_timeout | | Maximum seconds allowed for preparing the site | 300 |
| | verify_interval | | Delay in seconds between checks for progress of verification | 10 |
| | verify_timeout | | Maximum seconds allowed for verification | 60 |
| kubernetes | | | | |
| | node_status_interval | | | |
| | node_status_timeout | | | |
| kubernetes_provisioner | | | | |
| | drain_timeout | | Maximum seconds allowed for draining a node | 3600 |
| | drain_grace_period | | Seconds provided to Promenade as a grace period for pods to cease | 1800 |
| | clear_labels_timeout | | Maximum seconds provided to Promenade to clear labels on a node | 1800 |
| | remove_etcd_timeout | | Maximum seconds provided to Promenade to allow for removing etcd from a node | 1800 |
| | etcd_ready_timeout | | Maximum seconds allowed for etcd to reach a healthy state after a node is removed | 600 |
| armada+ | | | | |
| | get_releases_timeout | | Timeout in seconds for retrieving Helm chart releases after deployment | 300 |
| | get_status_timeout | | Timeout in seconds for retrieving status | 300 |
| | manifest+ | | Name of the manifest document that the workflow will use during site deployment activities | 'full-site' |
| | post_apply_timeout | | | 7200 |
| | validate_design_timeout | | Timeout in seconds to validate the design | 600 |
| Deployment-Strategy | | | | |
| | groups | | Named sets of nodes that will be deployed together | |
| | | name | Name of the group | masters |
| | | critical | Whether this group is required to succeed before additional phases of deployment may proceed | true |
| | | depends_on | Group names that must be successful before this group can be processed | [] |
| | | selectors | A list of identifying information indicating the nodes that are members of this group. Each selector has the following four filter values: | |
| | | node_names | Name of the node | node01 |
| | | node_labels | Label of the node | ucp_control_plane: enabled |
| | | node_tags | Tag on the node | control |
| | | rack_names | Name of the rack | rack01 |
| | | success_criteria | Criteria that must be met for this group to be considered successful. When no criteria are specified, no checks are done and processing continues as if nothing is wrong | |
| | | percent_successful_nodes | The calculated success rate of nodes completing the deployment phase | 75 (i.e. 3 of 4 nodes must complete the phase successfully) |
| | | minimum_successful_nodes | An integer indicating how many nodes must complete the phase to be considered successful | 3 |
| | | maximum_failed_nodes | An integer indicating the number of nodes allowed to fail the deployment phase while the group is still considered successful | 0 |
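Assembled into a document, the strategy fields above might look like the following sketch. This is illustrative only: the group names, rack name, and layering details are hypothetical; the field names follow the table above.

```yaml
---
schema: shipyard/DeploymentStrategy/v1
metadata:
  schema: metadata/Document/v1
  name: deployment-strategy
data:
  groups:
    - name: masters              # hypothetical group name
      critical: true             # failure here blocks later phases
      depends_on: []             # no prerequisite groups
      selectors:
        - node_labels:
            ucp_control_plane: enabled
          rack_names:
            - rack01
      success_criteria:
        percent_successful_nodes: 75   # e.g. 3 of 4 nodes must succeed
    - name: workers              # hypothetical group name
      critical: false
      depends_on:
        - masters                # processed only after masters succeeds
      selectors:
        - node_tags:
            - control
      success_criteria:
        minimum_successful_nodes: 3
        maximum_failed_nodes: 0
```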
...
Note: One host profile can adopt values from another host profile; it just has to reference the parent profile's name in host_profile under spec (see the table below).
| Parameter Category | Sub-Category-1 | Sub-Category-2 | Sub-Category-3 | Sub-Category-4 | Description | Example Value |
|---|---|---|---|---|---|---|
| hardware_profile | | | | | The hardware profile used by the host | intel_2600.yaml |
| primary_network | | | | | The main network used for administration | dmz |
| interfaces | | | | | Defines each interface of the host in detail | |
| | name | | | | Name of the interface | dmz, data1 |
| | device_link | | | | The name of the NetworkLink that will be attached to this interface. The NetworkLink definition includes part of the interface configuration, such as bonding (see below) | dmz, data1 |
| | slaves | | | | NIC aliases: the list of hardware interfaces used for creating this interface. Each value can be a device alias defined in the HardwareProfile or the kernel name of the hardware interface. For bonded interfaces, this lists all the slaves; for non-bonded interfaces, it should list the single hardware interface used | ctrl_nic1, data_nic1 |
| | networks | | | | The list of networks to enable on this interface. If multiple networks are listed, the NetworkLink attached to this interface must have trunking enabled or the design validation will fail | dmz, private, management |
| storage | | | | | Defined in either a HostProfile or a BaremetalNode document. The storage configuration can describe the creation of partitions on physical disks, the assignment of physical disks and/or partitions to volume groups, and the creation of logical volumes | |
| | physical_devices* | | | | A physical device can either be carved up into partitions (including a single partition consuming the entire device) or added to a volume group as a physical volume. Each key in the physical_devices mapping represents a device on a node. The key should be either a device alias defined in the HardwareProfile or the name of the device published by the OS. The value of each key must be a mapping with the following keys | |
| | | labels | | | A mapping of key/value strings providing generic labels for the device | bootdrive: true |
| | | volume_group | | | A volume group name to add the device to as a physical volume. Incompatible with the partitions specification | |
| | | partitions* | | | A sequence of mappings listing the partitions to be created on the device. Incompatible with the volume_group specification | |
| | | | name | | Metadata describing the partition in the topology | 'root' |
| | | | size | | The size of the partition | '30g' |
| | | | part_uuid | | A UUID4-formatted UUID to assign to the partition. If not specified, one will be generated | |
| | | | volume_group | | Name of a volume group to assign the partition to | |
| | | | labels | | | |
| | | | bootable | | Boolean indicating whether this partition should be the bootable device | true |
| | | | filesystem | | An optional mapping describing how the partition should be formatted and mounted | |
| | | | | mountpoint | Where the filesystem should be mounted. If not specified, the partition will be left as a raw device | '/' |
| | | | | fstype | The format of the filesystem. Defaults to ext4 | 'ext4' |
| | | | | mount_options | fstab-style mount options. Default is 'defaults' | 'defaults' |
| | | | | fs_uuid | A UUID4-formatted UUID to assign to the filesystem. If not specified, one will be generated | |
| | | | | fs_label | A filesystem label to assign to the filesystem. Optional | |
| | volume_groups | | | | | |
| | | vg_uuid | | | A UUID4-formatted UUID applied to the volume group. If not specified, one is generated | |
| | | logical_volumes* | | | A sequence of mappings listing the logical volumes to be created in the volume group | |
| | | | name | | Used as the logical volume name | |
| | | | lv_uuid | | A UUID4-formatted UUID applied to the logical volume. If not specified, one is generated | |
| | | | size | | The logical volume size | |
| | | | filesystem | | A mapping specifying how the logical volume should be formatted and mounted | |
| | | | | mountpoint | Same as above | |
| | | | | fstype | Same as above | |
| | | | | mount_options | Same as above | |
| | | | | fs_uuid | Same as above | |
| | | | | fs_label | Same as above | |
| platform | | | | | Defines the operating system image and kernel to use, as well as kernel configuration customizations | |
| | image | | | | Image name | 'xenial' |
| | kernel | | | | Kernel version | 'hwe-16.04' |
| | kernel_params | | | | A mapping. Each key should have a string or boolean value. For boolean true values, the key is added to the kernel parameter list as a flag; for string values, the key:value pair is added to the kernel parameter list as key=value | kernel_package: 'linux-image-4.15.0-46-generic' |
| oob | | | | | The ipmi OOB type requires additional configuration to allow OOB management | |
| | network | | | | The node network used for OOB access | oob |
| | account | | | | A valid account that can access the BMC via IPMI over LAN | root |
| | credential | | | | A valid password for the account that can access the BMC via IPMI over LAN | root |
| spec | host_profile | | | | Name of the HostProfile that this profile adopts and overrides values from | defaults |
| metadata | | | | | | |
| | owner_data | | | | | |
| | | <software-component-name> | | | enabled/disabled | openstack-l3-agent: enabled |
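For illustration, the fields above could combine into a host profile like the following sketch. The profile, device alias, and network names are hypothetical and only a subset of fields is shown; consult the Drydock schema for the authoritative layout.

```yaml
---
schema: drydock/HostProfile/v1
metadata:
  schema: metadata/Document/v1
  name: cp-host-profile          # hypothetical profile name
data:
  host_profile: defaults         # adopt and override values from 'defaults'
  hardware_profile: intel_2600.yaml
  primary_network: dmz
  interfaces:
    dmz:
      device_link: dmz           # NetworkLink carrying e.g. bonding config
      slaves:
        - ctrl_nic1              # device alias from the HardwareProfile
      networks:
        - dmz
  storage:
    physical_devices:
      bootdisk:                  # device alias or OS-published device name
        labels:
          bootdrive: true
        partitions:
          - name: 'root'
            size: '30g'
            bootable: true
            filesystem:
              mountpoint: '/'
              fstype: 'ext4'
              mount_options: 'defaults'
  platform:
    image: 'xenial'
    kernel: 'hwe-16.04'
    kernel_params:
      kernel_package: 'linux-image-4.15.0-46-generic'
  oob:
    type: 'ipmi'
    network: 'oob'
    account: 'root'
    credential: 'root'
```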
Nodes
This is defined under Baremetal. Node network attachment can be described in a HostProfile or a BaremetalNode document, but node addressing is allowed only in a BaremetalNode document. Hence, this section focuses mostly on addressing. A node adopts all values from the profile it is mapped to and can then override or append any configuration that is specific to that node.
A separate schema is created for each node; that is, the table below is repeated for every node in the deployment.
| Parameter Category | Sub-Category-1 | Sub-Category-2 | Sub-Category-3 | Sub-Category-4 | Description | Example Value |
|---|---|---|---|---|---|---|
| addressing* | | | | | Contains the IP address assignment for all the networks. It is a valid design to omit networks from this list; in that case the interface attached to the omitted network will be configured as link up with no address | |
| | address | | | | Defines a static IP address or dhcp for each network on which a node should have a configured Layer 3 interface | 10.10.100.12 or dhcp |
| | network | | | | The network name | oob, private, mgmt, pxe, etc. |
| host_profile | | | | | The host profile to assign to this node | cp-intel-pod10 |
| metadata | | | | | | |
| | tags | | | | | 'masters' |
| | rack | | | | | pod10-rack |
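As a sketch, a single node document using the fields above might look like this (the node name, addresses, and network names are hypothetical; every other node would get its own document in the same shape):

```yaml
---
schema: drydock/BaremetalNode/v1
metadata:
  schema: metadata/Document/v1
  name: node01                   # hypothetical node name
data:
  host_profile: cp-intel-pod10   # adopt values from this profile
  addressing:
    - network: oob
      address: 10.10.100.12      # static assignment
    - network: pxe
      address: dhcp              # address assigned by DHCP
    - network: mgmt
      address: 10.10.101.12
  metadata:
    tags:
      - 'masters'
    rack: pod10-rack
```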
*: Array of Values.
Network Definition
...