Our Newsletter #1 is out!

The first DEEP Hybrid-DataCloud Newsletter is out!

Take a look at the first issue of the newsletter here!

It includes:

  • An overview of the project including a summary of the main objectives and an introduction to some of the team members.
  • Software Quality
  • Alien4cloud for DEEP
  • Udocker: Docker for users with no privileges
  • A summary of past and upcoming events

Enjoy!

Hybrid Virtual Elastic Clusters Across Clouds

Virtual clusters provide the computing abstraction required to perform both HPC (High Performance Computing) and HTC (High Throughput Computing). They rely on an LRMS (Local Resource Management System), such as SLURM, to act as a job queue, and on a shared file system, such as NFS (Network File System).

Virtual elastic clusters in the Cloud can be deployed with Infrastructure as Code tools such as EC3 (Elastic Cloud Computing Cluster), which relies on the IM (Infrastructure Manager) to perform automated provisioning and configuration of Virtual Machines on multiple IaaS (Infrastructure as a Service) Clouds, both on-premises ones, such as OpenStack, and public ones, such as Amazon Web Services.
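
To make this more concrete, here is a minimal sketch of what a TOSCA description of such a cluster could look like. It is only an illustration: it uses the plain tosca.nodes.Compute type and made-up sizing values and input names, whereas the actual EC3/DEEP templates rely on custom INDIGO node types and contextualization recipes.

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

description: >
  Illustrative sketch of a virtual SLURM cluster: one front-end node
  (SLURM controller, NFS server) and a set of elastic working nodes.

topology_template:

  inputs:
    wn_num:
      type: integer
      description: Maximum number of working nodes in the cluster
      default: 4

  node_templates:

    # Front-end: hosts the LRMS (SLURM) and exports the shared file system (NFS)
    lrms_front_end:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
        os:
          properties:
            type: linux

    # Working nodes: powered on and off according to the workload
    lrms_wn:
      type: tosca.nodes.Compute
      capabilities:
        scalable:
          properties:
            min_instances: 0
            max_instances: { get_input: wn_num }
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
        os:
          properties:
            type: linux
```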

However, these virtual clusters are typically spawned in a single Cloud, thus limiting the amount of computational resources that can be simultaneously harnessed. With the help of the INDIGO Virtual Router, the DEEP Hybrid-DataCloud project is introducing the ability to deploy hybrid virtual elastic clusters across Clouds.

The INDIGO Virtual Router (VRouter) is a substrate-agnostic appliance that spans an overlay network across all participating Clouds. The network is completely private, fully dedicated to the elastic cluster, interconnecting its nodes wherever they are. The cluster nodes themselves require no special configuration, since the hybrid, inter-site nature of the deployment is completely hidden from them by the VRouter.

In order to test the functionality of these hybrid virtual clusters, the deployment depicted in the figure below was performed:

First, both the front-end node and a working node were deployed at the CESNET Cloud site, based on OpenStack. The front-end node runs CLUES, the elasticity manager in charge of deploying and terminating cluster nodes depending on the workload, and SLURM as the LRMS. It also hosts the VRouter central point, which provides a configured VPN server. This part is deployed through the IM, based upon a publicly available TOSCA template. In case you don't know, TOSCA stands for Topology and Orchestration Specification for Cloud Applications; it provides a YAML-based language to describe application architectures to be deployed in a Cloud.
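
To give a rough idea of what such a template contains, the front-end part could be described along the lines below. This is only a sketch: the tosca.nodes.indigo.* type names are placeholders chosen for illustration and may not match the types used in the actual, publicly available template.

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:

    # VRouter central point: runs the configured VPN server that the other sites connect to
    vrouter_central:
      type: tosca.nodes.indigo.VRouter.Central   # placeholder type name
      requirements:
        - host: front_end_vm

    # SLURM controller plus CLUES, the elasticity manager
    slurm_front_end:
      type: tosca.nodes.indigo.LRMS.FrontEnd.Slurm   # placeholder type name
      requirements:
        - host: front_end_vm

    # VM at the CESNET OpenStack site; it needs a public IP so that
    # the VPN tunnels from the other sites can reach it
    front_end_vm:
      type: tosca.nodes.indigo.Compute   # placeholder type name
      capabilities:
        endpoint:
          properties:
            network_name: PUBLIC
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
```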

The deployment of the computational resources at the UPV Cloud site, based on OpenNebula, included a VRouter in charge of establishing the VPN tunnel to the VRouter central point; this local VRouter acts as the default gateway for the working node of the cluster at that site. Seamless connectivity to the other nodes of the cluster is achieved through the VPN tunnels. Again, this is deployed through the IM, based upon another TOSCA template.
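
The second-site fragment would then look roughly as follows. Again, this is a sketch with placeholder type, property and input names rather than the actual template: a local VRouter VM that dials back to the central point, plus the working node that routes through it.

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:

  inputs:
    central_point_ip:
      type: string
      description: Public IP of the VRouter central point at the first site

  node_templates:

    # Local VRouter: establishes the VPN tunnel to the central point and
    # acts as the default gateway for the working nodes at this site
    vrouter_local:
      type: tosca.nodes.indigo.VRouter   # placeholder type name
      properties:
        central_point_ip: { get_input: central_point_ip }   # placeholder property
      requirements:
        - host: vrouter_vm

    vrouter_vm:
      type: tosca.nodes.indigo.Compute   # placeholder type name
      capabilities:
        host:
          properties:
            num_cpus: 1
            mem_size: 2 GB

    # Working node at the UPV OpenNebula site: private addressing only
    lrms_wn:
      type: tosca.nodes.indigo.Compute   # placeholder type name
      capabilities:
        endpoint:
          properties:
            network_name: PRIVATE
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
```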

Notice that the working nodes are deployed in subnets that provide private IP addressing. This is important given the shortage of public IPv4 addresses at many institutions (we were promised a future with IPv6 that keeps being delayed). Also, communications among the working nodes within the same private subnetwork do not need to traverse the VPN tunnels, which increases throughput.

Would you like to see this demo in action? Check it out on YouTube.

We are in the process of extending this procedure to be used with the INDIGO PaaS Orchestrator, so that the user does not need to specify which Cloud sites are involved in the hybrid deployment. Instead, the PaaS Orchestrator will be responsible for dynamically injecting the corresponding TOSCA types to include the VRouter in the TOSCA template initially submitted by the user. This will provide an enhanced abstraction layer for the user.
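
To illustrate what this injection could look like (this is only our working assumption about the final design, and the names below are the same placeholders used above), the Orchestrator would turn the first fragment into something like the second one before submitting it to the selected sites:

```yaml
# Fragment submitted by the user: no networking details at all
node_templates:
  lrms_wn:
    type: tosca.nodes.indigo.Compute      # placeholder type name
---
# Fragment after injection by the PaaS Orchestrator (sketch)
node_templates:
  lrms_wn:
    type: tosca.nodes.indigo.Compute      # placeholder type name
    requirements:
      - dependency: vrouter_local         # injected dependency on the local VRouter
  vrouter_local:
    type: tosca.nodes.indigo.VRouter      # injected node, placeholder type name
```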

We plan to use this approach to provide extensible virtual clusters that can span the multiple Cloud sites included in the DEEP Pilot testbed, in order to easily aggregate more computing power.

And now that you are here, if you are interested in how TOSCA templates can be easily composed, check out the following blog entry: Alien4Cloud in DEEP.