The final goal of the project is to prepare a new generation of e-infrastructures that harness latest-generation technologies, supporting deep learning and other intensive computing techniques to exploit very large data sources. The project will provide the corresponding services to lower the adoption barriers for new communities and users, satisfying the needs of research and education communities as well as citizen science.

Focus intensive computing techniques on the analysis of very large datasets, considering demanding use cases from different research communities, in the context of future generations of e-infrastructure.

Evolve, up to production level, intensive computing services exploiting specialized hardware components, such as GPUs and low-latency interconnects, usually accessed as “bare metal” resources. The services, based on open source software, will follow existing standards to guarantee their deployment and orchestration on different platforms.

Integrate the intensive computing services under a Hybrid Cloud approach, ensuring interoperability with the existing EOSC platforms and their services.

Define a “DEEP as a Service” solution to offer an easy integration path to the developers of final applications.

Analyse the complementarity with other ongoing projects targeting added-value services for the cloud, in particular those related to the management of extremely large datasets.

Global Objective

Promote the use of intensive computing services by different research communities and areas, and their support by the corresponding e-infrastructure providers and open source projects.

The DEEP Hybrid DataCloud solutions will be contributed to the EOSC service catalogue. However, in our experience, further effort will be needed to promote their use. The first step will be to correctly identify the different stakeholders and prepare adequate training and dissemination material for each target group: e-infrastructure and technology providers, developers of solutions with a strong technical background, and final users (i.e. researchers). As already indicated, from a researcher's perspective, exploiting large datasets with intensive and high-performance computing is a complex task, even more so when using distributed infrastructures. The fear of “wasting time” learning new programming languages or service interfaces, at the expense of daily research activities, has long been considered one of the limiting factors preventing e-infrastructure adoption.

Therefore, this objective is critical to the success of the project: we aim to release a DEEP as a Service solution able to exploit advanced features provided at the infrastructure level, while reducing as much as possible the effort required to execute applications over a Hybrid Cloud e-infrastructure.