As infrastructure becomes more integrated into services for companies large and small, it also becomes more complex. This is particularly true for telcos such as AT&T, which is why the company founded the Airship project to help cleanly manage infrastructure. Airship is a collection of loosely coupled but interoperable open source tools that declaratively automate cloud provisioning.

This series will explore how this brand-new open-source project streamlines the deployment of tools such as OpenStack and Kubernetes. Airship 1.0 is available to try; however, I have not yet figured out how to install Airskiff (the lightweight Airship for development) on my CentOS server, as the installation scripts target Ubuntu (I have yet to try nested virtualization).

As mentioned earlier, Airship is a collection of loosely coupled but interoperable open-source tools, so it is crucial to understand how each of those tools works. Let’s start with the theory and walk through every Airship component and how it fits together. Most of the Airship components are Python or shell scripts, so reverse engineering them should not be an issue ;)

We will also explore what is new in Airship 2.0 and the open-source tools that are part of it (not yet available at the time of writing).


Before diving in, you should have a working knowledge of:

  1. Kubernetes
  2. Helm

There are many other prerequisites, but the above ones are mandatory.

What is Airship?

Airship is a collection of loosely coupled but interoperable open-source tools that declaratively automate cloud provisioning, taking a site from bare metal all the way to a resilient Kubernetes cluster running OpenStack.

Why Airship?

  1. One Workflow for Lifecycle Management: We need a system that is predictable, with lifecycle management at its core. This means one workflow handling both initial deployments and site updates. In other words, a new deployment and an update to an existing site should be virtually identical.
  2. Containers Are the New and Only Unit of Software Delivery: Containers are the unit of software delivery for Airship. Everything that can be a container is a container. This allows us to progress environments from development to testing and finally production with confidence that the same software is being used.
  3. Flexible for Different Architectures and Software: Airship delivers environments both very small and very large, with a wide range of configurations. AT&T uses Airship to manage its entire cloud platform, not just OpenStack.

Airship 1.0 Elements

At a high level, deploying a site with Airship 1.0 looks like this:

  1. Create an override YAML to customize the default configuration
  2. Invoke the Shipyard command to initiate the site deployment:

shipyard create action deploy_site

  3. Use update_site to make changes to an existing site (Day 2)

It might be hard to follow now, but we will deep-dive into each piece in the upcoming articles.


Pegleg

Pegleg supports local and remote Git repositories. Remote repositories can be cloned using a variety of protocols — HTTP(S) or SSH. Afterward, specific revisions within those repositories can be checked out, their documents aggregated, linted, and passed to the rest of Airship for orchestration, allowing document authors to manage their site definitions using version control.
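To make "aggregated and linted" concrete, here is a simplified, hypothetical pure-Python sketch of the idea: collect multi-document YAML files from several checked-out repositories, split them into individual documents, and flag duplicates. The function names are invented for illustration and are not Pegleg's actual API.

```python
from pathlib import Path


def split_documents(text: str) -> list[str]:
    """Split a multi-document YAML stream on '---' separators."""
    text = text.strip()
    if text.startswith("---\n"):
        text = text[4:]
    docs = [d.strip() for d in text.split("\n---\n")]
    return [d for d in docs if d]


def aggregate(repo_dirs: list[str]) -> list[str]:
    """Collect every YAML document found under the given repo checkouts."""
    documents = []
    for repo in repo_dirs:
        for path in sorted(Path(repo).rglob("*.yaml")):
            documents.extend(split_documents(path.read_text()))
    return documents


def lint_duplicates(documents: list[str]) -> list[str]:
    """Very rough lint pass: report documents that appear more than once."""
    seen, errors = set(), []
    for doc in documents:
        if doc in seen:
            errors.append(f"duplicate document: {doc.splitlines()[0]!r}")
        seen.add(doc)
    return errors
```

The real tool does far more (revision checkout, schema-aware linting, secret handling), but the aggregate-then-lint pipeline is the core idea.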


Designs and Secrets

Pegleg organizes a site's definition documents (its "design") and also provides tooling for managing the secrets those documents reference.


Shipyard

Shipyard adopts the Falcon web framework and uses Apache Airflow as the back-end engine to programmatically author, schedule, and monitor workflows.

The current workflow is as follows:

  1. Initial region/site data is passed to Shipyard, either by a human operator or by Jenkins.
  2. The data (in YAML format) is sent to Deckhand for validation and storage.
  3. Shipyard uses the post-processed data from Deckhand to interact with Drydock.
  4. Drydock interacts with Promenade to provision and deploy bare metal nodes using Ubuntu MAAS; a resilient Kubernetes cluster is created at the end of the process.
  5. Once the Kubernetes cluster is up and validated to be working properly, Shipyard interacts with Armada to deploy OpenStack using OpenStack-Helm.
  6. Once the OpenStack cluster is deployed, Shipyard triggers a workflow to perform basic sanity health checks on the cluster.
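Shipyard drives these steps as an Airflow workflow. As a purely conceptual sketch (this is neither Shipyard's code nor Airflow's API, and the step names are invented to mirror the list above), the deploy_site flow can be modeled as a linear chain of named steps, where each step runs only if the previous one succeeded:

```python
from typing import Callable


def run_workflow(steps: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Run named steps in order; stop at the first failure.

    Returns a log of step results.
    """
    log = []
    for name, action in steps:
        ok = action()
        log.append(f"{name}: {'success' if ok else 'failed'}")
        if not ok:
            break  # downstream steps depend on this one
    return log


# Hypothetical deploy_site chain mirroring the numbered steps above.
deploy_site = [
    ("validate_design_with_deckhand", lambda: True),
    ("provision_baremetal_with_drydock", lambda: True),
    ("bootstrap_kubernetes_with_promenade", lambda: True),
    ("deploy_openstack_with_armada", lambda: True),
    ("run_sanity_health_checks", lambda: True),
]
```

Calling `run_workflow(deploy_site)` walks the chain in order; the real Airflow DAG adds retries, auditing, and parallelism on top of this basic dependency idea.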



Drydock

Drydock is the bare metal provisioner in the workflow above. Its features include:

  • Support for Canonical MAAS provisioning
  • Configuration of complex network topologies, including bonding, tagged VLANs, and static routes
  • Support for running behind a corporate proxy
  • An extensible boot action system for placing files and systemd units on nodes for post-deployment execution
  • Keystone-based authentication and authorization


Deckhand

Deckhand is the YAML document storage and validation service used in step 2 of the workflow above.

Core Responsibilities

  • substitution — provides separation between secret data and other configuration data for security purposes and reduces data duplication by allowing common data to be defined once and substituted elsewhere dynamically
  • revision history — maintains well-defined collections of documents within immutable revisions that are meant to operate together, while providing the ability to rollback to previous revisions
  • validation — allows services to implement and register different kinds of validations and report errors
  • secret management — leverages existing OpenStack APIs — namely Barbican — to reliably and securely store sensitive data
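As an illustration of substitution, a document can pull a secret out of another document at render time, so the secret never appears in the cleartext configuration. The example below follows the general shape of Deckhand's substitution syntax, but the document names, the `example/DatabaseConfig/v1` schema, and the paths are invented:

```yaml
---
schema: deckhand/Passphrase/v1
metadata:
  schema: metadata/Document/v1
  name: example-db-password
  storagePolicy: encrypted   # stored via Barbican, not in cleartext
data: my-secret-password
---
schema: example/DatabaseConfig/v1
metadata:
  schema: metadata/Document/v1
  name: example-db-config
  storagePolicy: cleartext
  substitutions:
    - src:
        schema: deckhand/Passphrase/v1
        name: example-db-password
        path: .
      dest:
        path: .db.password
data:
  db:
    user: admin
    password: placeholder   # replaced at render time by the substitution above
```

When Deckhand renders the second document, the `dest` path is overwritten with the data extracted from the `src` document.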


Armada

The Armada Python library and command-line tool provide a way to synchronize a Helm (Tiller) target with an operator’s intended state: several charts, their dependencies, and overrides, described in a single file or a directory of files. This allows operators to define many charts, potentially with different namespaces for those releases, and their overrides in a central place, and to deploy and/or upgrade them with a single command where applicable.

Armada also supports fetching Helm chart source and then building charts from source from various local and remote locations, such as Git endpoints, tarballs or local directories.

It will also give the operator some indication of what is about to change by assisting with diffs for both values, values overrides, and actual template changes.

Its functionality extends beyond Helm, assisting in interacting with Kubernetes directly to perform basic pre- and post-steps, such as removing completed or failed jobs, running backup jobs, blocking on chart readiness, or deleting resources that do not support upgrades. However, primarily, it is an interface to support orchestrating Helm.
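To make the "single file" idea concrete, a minimal Armada manifest groups charts and points at their sources. The three schemas below are Armada's document types; the chart name, repository URL, and values are placeholders:

```yaml
---
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: example-chart
data:
  chart_name: example-chart
  release: example-release
  namespace: example
  values:
    replicas: 2          # operator override applied on top of chart defaults
  source:
    type: git
    location: https://example.com/charts.git   # placeholder repository
    subpath: example-chart
    reference: master
  dependencies: []
---
schema: armada/ChartGroup/v1
metadata:
  schema: metadata/Document/v1
  name: example-group
data:
  description: Example chart group
  chart_group:
    - example-chart
---
schema: armada/Manifest/v1
metadata:
  schema: metadata/Document/v1
  name: example-manifest
data:
  release_prefix: example
  chart_groups:
    - example-group
```

Deploying or upgrading everything described here is then a single `armada apply` invocation against this manifest.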

Core Responsibilities

  • Manage multiple chart dependencies using Chart Groups
  • Enhance base Helm functionality
  • Support Keystone-based authentication and authorization


Promenade

Bootstrapping begins by provisioning a single-node cluster with a complete, configurable Airship infrastructure. After hosts are added to the cluster, the original bootstrapping node can be re-provisioned to avoid subtle differences that could result in future issues.

Promenade provides cluster resiliency against both node failures and full cluster restarts. It does so by leveraging Helm charts to manage core Kubernetes assets directly on each host, to ensure their availability.


Divingbell

Divingbell is a lightweight solution for:

  1. Bare metal configuration management for a few very targeted use cases
  2. Bare metal package manager orchestration

What problems does it solve?

  1. To provide a day 2 solution for managing these configurations going forward
  2. [Future] To provide a day 2 solution for system-level host patching

Berth

Open-Source Elements

  1. Docker
  2. Kubernetes
  3. Helm
  4. OpenStack-Helm
  5. Argo (AS2.0)
  6. Kustomize (AS2.0)
  7. Cluster API (AS2.0)
  8. Metal³ (MetalKube) (AS2.0)
  9. Ironic (AS 2.0)
  10. HAProxy
  11. PostgreSQL
  12. MariaDB
  13. Apache Airflow
  14. MAAS (Metal-as-a-Service)
  15. Python 3.5+

AS2.0: Airship 2.0
