Lab as a Service


Lab as a Service intends to provide a cloud-based build and deploy environment for executing Jenkins verify and merge jobs, and a staging area for scenario deployment prior to running on our bare-metal Pharos infrastructure. Intended as an infrastructure and design resource, the Lab as a Service initiative should both streamline our development and Jenkins workflows and reduce the burden on Pharos labs.

Requirements for LaaS ... Project Planning

JIRA task for "Utilize Virtual Environments for CI and developers" ...

LaaS Meetings

LaaS Project Page

The accepted project proposal can be found here: Lab as a Service

LaaS Motivation

Towards serving developers in a better way

Objective for LaaS

What is OPNFV LaaS

OPNFV LaaS initial use-cases

Use Case                 | Description
Snapshot deploy          | Spawn a deployment from snapshot
Initial master deploy    | Deploy certain OPNFV scenario from scratch from master using an existing artifact
Initial stable deploy    | Deploy certain OPNFV scenario from scratch using a released OPNFV version
Build and deploy         | Build the artifact and deploy certain OPNFV scenario for the given patch
Deploy with addons       | Deploy certain OPNFV scenario and provide additional scenarios to deploy/develop other components of the stack such as ONAP
OPNFV for ONAP developer | Deploy and integrate OPNFV scenario and ONAP instance for developer use
OPNFV for ONAP X-CI      | Deploy and integrate OPNFV scenario and ONAP instance for X-CI
OPNFV+ONAP CI/CD         | Deploy and integrate OPNFV scenario and ONAP instance for full CI/CD/testing
Deploy OS                | Provide a machine with OS installation only

See LaaS scenarios for a detailed description of LaaS use cases and associated requirements.

LaaS - Number of Servers

For specific information about hardware and capabilities, see the pages for the participating companies below:


LaaS Flow Proposal

Use Case: OPNFV Developer Access to On-Demand POD

The basic workflow can be seen in the diagram below (image from LaaS.vsdx).

Here is the workflow in detail (a status-handling code sketch follows the list):

  1. Requester: Logs in to the dashboard and issues the request. The request should contain the deployment details (for example, which scenario and version to deploy) and the developer's SSH keys
  2. Dashboard: Triggers a Jenkins job and updates the status of the request from New to Queued
  3. Jenkins Job - Deployment:
    1. Updates the status of the request from Queued to In Progress when the job actually starts running
    2. Updates the status of the resource from Idle to In Use
    3. Does the deployment using the details from the request
    4. Adds the SSH keys of the developer
  4. Jenkins: Updates the status of the request from In Progress to Complete - Failure or Complete - Success, depending on the result
  5. Jenkins: Sends the deploy-complete notification to the requester, including access and credential info for the VPN to the OPNFV POD
  6. Requester: Logs in and uses the POD
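
To make the status handling concrete, below is a minimal Python sketch of the deployment job's transitions (steps 3-5). The Request and Resource classes and the deploy/add_keys/notify helpers are hypothetical illustrations, not an existing LaaS API:

    from dataclasses import dataclass
    from enum import Enum

    class RequestStatus(Enum):
        NEW = "New"
        QUEUED = "Queued"
        IN_PROGRESS = "In Progress"
        COMPLETE_SUCCESS = "Complete - Success"
        COMPLETE_FAILURE = "Complete - Failure"

    class ResourceStatus(Enum):
        IDLE = "Idle"
        IN_USE = "In Use"

    @dataclass
    class Request:
        details: dict    # e.g. scenario and version to deploy
        ssh_keys: list   # developer public keys
        status: RequestStatus = RequestStatus.NEW

    @dataclass
    class Resource:
        name: str
        status: ResourceStatus = ResourceStatus.IDLE

    def run_deployment(request, resource, deploy, add_keys, notify):
        """Steps 3-5: mark the request and resource, deploy, report back."""
        request.status = RequestStatus.IN_PROGRESS   # step 3.1
        resource.status = ResourceStatus.IN_USE      # step 3.2
        try:
            deploy(resource, request.details)        # step 3.3
            add_keys(resource, request.ssh_keys)     # step 3.4
            request.status = RequestStatus.COMPLETE_SUCCESS  # step 4
        except Exception:
            request.status = RequestStatus.COMPLETE_FAILURE  # step 4
        notify(request)   # step 5: access and VPN credential info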

(outside this flow)

  1. Jenkins Job - Cleanup:
    1. Runs periodically and wipes the deployment, removes the keys, and so on from the resource (see the sketch after this list)
    2. Updates the status of the resource from In Use to Idle
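
A matching sketch of the cleanup job, reusing the hypothetical ResourceStatus from the previous sketch; wipe_deployment and remove_ssh_keys are placeholder helpers, not a real LaaS API:

    def cleanup(resource, wipe_deployment, remove_ssh_keys):
        wipe_deployment(resource)              # 1.1: wipe the deployment
        remove_ssh_keys(resource)              # 1.1: remove the developer keys
        resource.status = ResourceStatus.IDLE  # 1.2: In Use -> Idle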

This work can also be done in phases; the first phase could be creating basic Jenkins jobs to put the logic in place. The jobs could initially just echo Hello World.

Once the Jenkins job logic is in place, a basic form can be created on the dashboard to trigger the job, passing the request details to it (sketched below).
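
Here is a minimal sketch of that hand-off using Jenkins' standard buildWithParameters REST endpoint; the Jenkins URL, job name, parameter names, and credentials are placeholders, not the actual LaaS configuration:

    import requests

    JENKINS_URL = "https://jenkins.example.org"  # placeholder
    JOB_NAME = "laas-deploy"                     # hypothetical job name

    def trigger_deploy_job(request_details, user, api_token):
        # Queue a parameterized build; Jenkins answers 201 Created on success.
        resp = requests.post(
            f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
            auth=(user, api_token),
            params=request_details,  # e.g. {"SCENARIO": "os-nosdn-nofeature-ha", "VERSION": "master"}
        )
        resp.raise_for_status()
        # The Location header points at the queue item, useful for polling.
        return resp.headers.get("Location")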

And so on.

ONAP Integration Proposal

Below are use cases for OPNFV+ONAP lab integration. Discussion of these use cases is continuing in the OPNFV Infra Working Group meeting. Aspects under discussion include:

The goal is to reach a solid common understanding of those aspects before OPNFV, as a community, approaches ONAP with a concrete proposal. In the meantime, as we address those questions, OPNFV and ONAP members continue to work on them tactically through:

Use Case: ONAP Developer Access to On-Demand OPNFV+ONAP POD

The basic workflow can be seen in the diagram below (image from LaaS.vsdx).

Preliminary assumptions about the OPNFV LaaS POD resources expected for typical ONAP developer/CI use cases:

Here is the workflow in detail:

  1. Requester: Logs in to the dashboard and issues the request. The request should contain all of the info from the "OPNFV Developer Access to On-Demand POD" use case plus
    1. ONAP scenario details: What scenario should be deployed (ONAP "scenarios" are still being defined - for now it's assumed there is only one)
    2. ONAP/scenario version: Should it be built and deployed for a patch or from master and so on
  2. Dashboard: Triggers a Jenkins job and updates the status of the request from New to Queued
  3. Jenkins Job - Deployment:
    1. Updates the status of the request from Queued to In Progress when the job actually starts running
    2. Updates the status of the resource from Idle to In Use
    3. Does the deployment using the details from the request
    4. Adds the SSH keys of the developer
  4. Jenkins Job - ONAP Integration:
    1. Using ONAP POD management APIs (TBD), allocates an ONAP POD
    2. Establishes a VPN connection between the OPNFV POD and the ONAP POD
    3. Triggers a Jenkins Job for ONAP install and updates the status of the request to Pending ONAP
    4. Configures ONAP for access to the OPNFV POD VIM (VIM address, credentials)
  5. ONAP: Registers with the VIM using the provided credentials
  6. Once ONAP access to the VIM has been verified (see the sketch after this list), Jenkins:
    1. Updates the status of the request from In Progress to Complete - Failure or Complete - Success, depending on the result
    2. Sends the deploy-complete notification to the requester, including access and credential info for the VPNs to the ONAP and OPNFV PODs
  7. Requester: Connects to the ONAP POD via VPN and uses it
  8. Requester: Optionally connects to the OPNFV POD via VPN and uses it
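
As an illustration of the verification in step 6, here is a minimal sketch that checks whether the VIM credentials handed to ONAP actually work, by requesting a Keystone token. This assumes an OpenStack VIM exposing the Identity API v3; the URL and account values are placeholders:

    import requests

    def vim_credentials_work(auth_url, username, password, project):
        # Build a standard Keystone v3 password-auth request body.
        body = {
            "auth": {
                "identity": {
                    "methods": ["password"],
                    "password": {
                        "user": {
                            "name": username,
                            "domain": {"id": "default"},
                            "password": password,
                        }
                    },
                },
                "scope": {
                    "project": {"name": project, "domain": {"id": "default"}}
                },
            }
        }
        resp = requests.post(auth_url.rstrip("/") + "/auth/tokens", json=body)
        # Keystone v3 answers 201 Created with an X-Subject-Token header.
        return resp.status_code == 201 and "X-Subject-Token" in resp.headers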

Use Case: ONAP Developer Access to OPNFV POD with ONAP Deployed

The basic workflow is the same as in "Use Case: OPNFV Developer Access to On-Demand POD", with the addition that the user can also select to have ONAP deployed on the OPNFV POD. Analysis of the resource requirements for this use case is underway. Below are some source links:



Completed Trials

The information below is a record of earlier trials of the LaaS concept.

Ravello Trials

Evaluation of Ravello for OPNFV CI has been parked due to technical limitations (stability of nested virtualization); no further evaluation will be done until after the C-release is out.

Ravello has been trialled as a candidate for supporting nested virtualization deployments of OPNFV in a cloud environment. The environment was found to be unsuitable for OPNFV scenario deployments, showing significant instability, slow execution times, and crashes caused, as we understand it, by the highly nested environment and the HVM layer.

IPMI Support

  1. Ravello doesn't natively support IPMI (as a service) today, but that may change in the future
  2. It is possible, however, to work around this today by including an open IPMI server as part of the 'Ravello Application/Blueprint' that can talk to Ravello's REST APIs

Rackspace Trials

Evaluation of Rackspace for OPNFV CI has been parked due to technical limitations; it is also expensive.

Trials were started to evaluate the ability to run nested virtualization deployments in Rackspace. The Rackspace environment is less deeply nested and could potentially be integrated directly through Jenkins. To be updated...

Other Options

Evaluation of GCE, EC2, and Azure for OPNFV CI has been parked due to technical limitations (lack of nested virtualization support).

Google Compute Engine, Amazon EC2, and Microsoft Azure have been evaluated as well, but these providers do not expose CPU virtualization features, so they were deemed not viable for OPNFV at this time.

LaaS Hardware

A key objective of LaaS is to create a development environment on the fly, whether it is a vPOD, a Pharos bare-metal POD, or an OPNFV-ONAP POD. LaaS will avoid dedicating a particular server to a particular role, instead allowing allocation to flex with demand. As a consequence, LaaS assumes a single type of server which can be used for a vPOD, a Pharos POD, or an ONAP POD.

Server Hardware (x86 based)

Server Hardware (ARM based)

Network Switches

Switches should support 10/25/40/100 Gbps ports to future-proof the setup (allowing PODs to be switched from 10 Gbps to 25, 40, or 100 Gbps, e.g. to support performance testing); an example is the Cisco Nexus 92160YC-X.

Assuming the above number of servers (38 x86-based and 14 ARM-based) and further assuming that a maximum of 6 ports per server would be wired up to the switch, a total of 312 switch ports would be needed.
This means that, just for connecting the server ports (not counting ports on uplink/spine switches), a total of 7 48-port switches would be needed.
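
As a quick check of that arithmetic:

    import math

    servers = 38 + 14                 # x86 + ARM = 52 servers
    ports = servers * 6               # 6 ports per server = 312 switch ports
    switches = math.ceil(ports / 48)  # 312 / 48 = 6.5 -> 7 leaf switches
    print(ports, switches)            # 312 7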

Lab-Hosting options / metal as a service