PARALLEL AND DISTRIBUTED COMPUTING

Compilers for parallel architectures:

Automatic generation of parallel code: This research line addresses the automatic parallelization of sequential programs. The focus is on the development of compiler techniques to convert a sequential program into a concurrent one to be executed on modern multi-core and many-core architectures. We are developing advanced program analyses to discover the parallelism implicit in sequential programs, together with code transformation techniques to build the most efficient parallel version of a given program.
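As a minimal illustration of the idea (not the group's actual compiler output), consider a loop whose iterations a dependence analysis can prove independent; a parallelizing compiler could then emit an annotated version such as the OpenMP one sketched below. All function names here are ours, purely illustrative.

    #include <stdio.h>

    /* Sequential loop: each iteration writes a distinct c[i] and reads only
     * a[i] and b[i], so there are no loop-carried dependences. */
    void vadd_seq(const float *a, const float *b, float *c, int n) {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* A possible parallel version: once the analysis proves the iterations
     * independent, the loop can be annotated for concurrent execution
     * (here with OpenMP; compile with -fopenmp). */
    void vadd_par(const float *a, const float *b, float *c, int n) {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    int main(void) {
        enum { N = 8 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }
        vadd_par(a, b, c, N);
        for (int i = 0; i < N; i++) printf("%.1f ", c[i]);
        printf("\n");
        return 0;
    }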

Iterative Optimization: We are working on the iterative optimization of codes on heterogeneous architectures including GPUs and/or CPUs. Iterative optimization generates a large number of optimized versions of the same application and selects the fastest one for a given architecture by means of analytical models, heuristics or actual executions of the candidate codes.
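A toy sketch of the empirical side of this process: several versions of the same kernel, differing only in tile size, are timed and the fastest is kept. A real iterative optimizer explores a far larger version space and may replace some executions with models or heuristics; the names below are illustrative.

    #include <stdio.h>
    #include <time.h>

    #define N 512
    static float A[N][N], B[N][N];

    /* One candidate version: copy B into A by blocks of the given tile size. */
    static void tiled_copy(int tile) {
        for (int ii = 0; ii < N; ii += tile)
            for (int jj = 0; jj < N; jj += tile)
                for (int i = ii; i < ii + tile && i < N; i++)
                    for (int j = jj; j < jj + tile && j < N; j++)
                        A[i][j] = B[i][j];
    }

    int main(void) {
        /* Candidate tile sizes: the huge version space of a real iterative
         * optimizer is reduced here to four versions, timed empirically. */
        int tiles[] = { 8, 16, 32, 64 };
        int best = tiles[0];
        double best_t = 1e30;
        for (int k = 0; k < 4; k++) {
            clock_t t0 = clock();
            for (int rep = 0; rep < 100; rep++)
                tiled_copy(tiles[k]);
            double t = (double)(clock() - t0) / CLOCKS_PER_SEC;
            printf("tile %2d: %.3f s\n", tiles[k], t);
            if (t < best_t) { best_t = t; best = tiles[k]; }
        }
        printf("fastest version on this machine: tile = %d\n", best);
        return 0;
    }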

Languages and tools for parallel programming:

Our group designs and builds tools (Servet) and libraries (HTA, UPCBLAS) to improve the productivity of programmers, particularly in the development of parallel applications. Servet is a portable suite of benchmarks that obtains the most relevant hardware parameters to support the automatic optimization of applications on multicore clusters. The Hierarchically Tiled Array, or HTA, data type is a class designed to facilitate the writing of tile-based programs in object-oriented languages. HTAs allow programmers to exploit locality and to express parallelism with much less effort than other approaches. UPCBLAS is a parallel numerical library for dense matrix computation written in UPC (Unified Parallel C), a PGAS (Partitioned Global Address Space) language. The popularity of PGAS languages has grown in recent years thanks to their high programmability and performance, especially on hierarchical architectures such as multicore clusters.

Our proposals, which cover distributed, shared and hybrid memory systems, lead to codes that are better structured, more readable and easier to maintain than those built using standard tools, while achieving very similar performance. Much of this research has been performed in close collaboration with leading universities, such as the University of Illinois at Urbana-Champaign, and leading companies, such as HP and IBM.
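For readers unfamiliar with tiles, the snippet below is a plain-C sketch of the tile-first traversal pattern that a tiled array type such as the HTA encapsulates. The actual HTA is an object-oriented class with a much richer interface; the names and layout here are ours, purely illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    /* View a matrix as a T x T grid of tiles of TS x TS elements:
     * algorithms index tiles first, then elements within a tile. */
    enum { T = 4, TS = 64 };

    typedef struct { double e[TS][TS]; } tile;

    int main(void) {
        tile *m = malloc(sizeof(tile) * T * T);
        if (!m) return 1;
        /* Tile-level traversal: each tile is processed as a unit, which
         * keeps its TS*TS elements hot in cache while they are used. */
        for (int ti = 0; ti < T; ti++)
            for (int tj = 0; tj < T; tj++) {
                tile *t = &m[ti * T + tj];
                for (int i = 0; i < TS; i++)
                    for (int j = 0; j < TS; j++)
                        t->e[i][j] = ti + tj;   /* any per-element work */
            }
        printf("corner of tile (1,2): %.0f\n", m[1 * T + 2].e[0][0]);
        free(m);
        return 0;
    }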

Fault tolerance and malleability of parallel applications:

Systems intended for the execution of long-running parallel applications should provide fault-tolerance capabilities, since the probability of failure increases with the execution time and the number of nodes. Checkpointing and rollback recovery is one of the most popular techniques to provide fault tolerance. We have developed CPPC (ComPiler for Portable Checkpointing), an application-level checkpointing tool for message-passing applications designed with a special focus on portability.
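The following is a minimal hand-rolled sketch of application-level checkpointing for an iterative code, shown as a sequential program for brevity (CPPC targets message-passing applications, where each process saves its own state, and automates the selection and portable storage of that state; this is not CPPC's interface).

    #include <stdio.h>

    /* Every CKPT_FREQ iterations the live state (here just the iteration
     * counter and the data array) is written to a file; on restart the
     * program resumes from the last completed checkpoint. */
    enum { N = 1000, ITERS = 100000, CKPT_FREQ = 1000 };

    int main(void) {
        double x[N] = { 0 };
        int start = 0;

        FILE *f = fopen("ckpt.bin", "rb");      /* try to resume */
        if (f) {
            if (fread(&start, sizeof start, 1, f) != 1 ||
                fread(x, sizeof x, 1, f) != 1)
                start = 0;                      /* corrupt checkpoint */
            fclose(f);
            printf("restarting from iteration %d\n", start);
        }

        for (int it = start; it < ITERS; it++) {
            for (int i = 0; i < N; i++)         /* the "real" computation */
                x[i] += 1e-6 * i;

            if ((it + 1) % CKPT_FREQ == 0) {    /* periodic checkpoint */
                int next = it + 1;
                FILE *g = fopen("ckpt.bin.tmp", "wb");
                fwrite(&next, sizeof next, 1, g);
                fwrite(x, sizeof x, 1, g);
                fclose(g);
                rename("ckpt.bin.tmp", "ckpt.bin"); /* atomic on POSIX */
            }
        }
        printf("done: x[1] = %f\n", x[1]);
        return 0;
    }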

We are currently exploring the implementation of malleability for MPI applications as an extension to the CPPC tool, so that applications can be transparently reconfigured during their execution.

General purpose computation on GPUs:

In this field we are working on the development of tools that allow the automatic or semi-automatic implementation of algorithms on a GPU. Another objective is to develop high-level libraries for multi-GPU systems focused on providing simple yet efficient communication among the different GPUs. All our research is centred on providing solutions based on the main programming languages for these platforms, such as OpenCL and CUDA.
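As a reference point for the kind of boilerplate such tools aim to generate or hide, here is a minimal OpenCL vector addition in C using the standard OpenCL API (error checking omitted for brevity; link with -lOpenCL):

    #include <CL/cl.h>
    #include <stdio.h>

    /* Device kernel: one work-item per element; the global size below is
     * exactly N, so no bounds check is needed. */
    static const char *src =
        "__kernel void vadd(__global const float *a, __global const float *b,\n"
        "                   __global float *c) {\n"
        "  int i = get_global_id(0);\n"
        "  c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void) {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);

        /* Copy inputs to the device, allocate the output buffer. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);

        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

        printf("c[1] = %f\n", c[1]);   /* expect 3.0 */
        return 0;
    }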

ULTRA-FAST JAVA

Ultra-fast Java communications:

We are working on low-latency Java communication middleware for high-speed cluster networks, such as 10/40 Gigabit Ethernet and InfiniBand. The middleware we have developed supports message-passing communications (FastMPJ) and speeds up socket communications in a user-transparent way (Java Fast Sockets). The main objective is the reduction of the response time of parallel and distributed Java applications, which is key in areas such as high-frequency financial trading.

COMPUTER GRAPHICS

Computer Graphics and scientific visualization:

The main goal of this research line is to achieve efficient processing of complex graphical models, such as subdivision surfaces, parametric surfaces and hybrid terrain representations. We put special emphasis on interactive real-time processing that efficiently exploits the cutting-edge hardware found in current computer systems: multicore CPUs and GPUs (Graphics Processing Units).

Another relevant topic within this research field is the photo-realistic rendering of synthetic images, using physically-based illumination models, advanced texture mapping and transparency effects.

HIGH PERFORMANCE MICROPROCESSORS

Computer microarchitecture design:

We have participated in the development of SESC, a research microarchitecture simulator used worldwide. The group also innovates in this field, proposing new memory hierarchy architectures as well as management policies aimed at improving execution times and reducing the power consumption of applications in both single-core and multicore systems.

Hardware accelerator design for multimedia and energy-efficient computing:

In terms of energy efficiency, the gap between microprocessors and application-specific circuits is steadily increasing. Hardware acceleration is, therefore, a powerful way of reducing costs and achieving green computing.

Whereas the cost of designing and manufacturing application-specific circuits is high, a number of alternative platforms are now available, such as FPGAs, structured ASICs or ASIPs. Most of the new platforms have been developed for embedded systems, where they have already enabled fast and low-power computing.

This research line explores the potential of mapping common tasks onto hardware accelerators in order to improve energy efficiency, speed up processing and even reduce the amount of hardware required. The main application scenarios include high-performance computing, Software-as-a-Service and Web 2.0, where a large number of servers may share a reduced number of accelerators for offloading the most demanding tasks.

Analytical modelling and performance prediction of the memory hierarchy of computer systems:

We have developed a unified analytical framework that accurately predicts cache behaviour. The framework, which provides its estimations in less than one second, requires as input only the source code and the memory hierarchy configuration. Our approach is fully automatable and may be integrated in a compiler. The framework has been used successfully to guide complex compiler optimizations, such as tile size selection, as well as to predict worst-case execution times for real-time systems.
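To illustrate the kind of prediction such a model produces, here is a deliberately simple first-order estimate for a stride-1 sweep over an array: cold misses occur once per cache line, so the miss ratio is roughly the element size divided by the line size. The group's framework models far more (set conflicts, reuse across loop nests), so this is only a toy analogy.

    #include <stdio.h>

    /* Toy first-order cache model for a stride-1 sweep: one cold miss per
     * cache line touched, so miss ratio ~ elem_size / line_size. */
    int main(void) {
        const int line_size = 64;              /* bytes per cache line */
        const int elem_size = sizeof(double);  /* 8-byte elements, stride 1 */
        const long n = 1L << 20;               /* 1 M elements accessed */

        long lines_touched = (n * elem_size + line_size - 1) / line_size;
        double miss_ratio = (double)lines_touched / n;

        printf("accesses: %ld, predicted cold misses: %ld (%.2f%%)\n",
               n, lines_touched, 100.0 * miss_ratio);
        return 0;
    }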

MOBILE ROBOTICS AND COMPUTER VISION

One of the current challenges in robotics is the integration of robots in everyday environments. A fast and easy deployment of robots in new areas is necessary to get robots operating outside research centres and beyond the continuous supervision of roboticists. Robots must be capable of being installed and put into operation within very short periods of time.

Our main objectives are: avoiding pre-defined robot control by learning behaviour from experience and human observation; identifying and locating people and robots in indoor environments; and interacting with people in a friendly and natural way.

Another open question is how to enable robots to move autonomously in unstructured, complex and dynamic environments. To this end, we are developing efficient 3D perception systems, environment representations and sensor fusion techniques. Finally, the robot has to localize itself and map its environment. We are developing algorithms both for localizing the robot on a previously known map and for simultaneous localization and mapping (SLAM). As main sensors, we use omnidirectional cameras, stereo cameras, Kinect sensors and custom-built 3D lasers.

GEOGRAPHIC INFORMATION SYSTEMS

The Computer Architecture Group has 15 years of experience in the field of Geographic Information Systems (GIS). The group has developed projects in areas such as infrastructures, land management, land consolidation, weather information, land use and forest management. Its developments include web-GIS information systems, high-performance optimisation algorithms to solve problems with spatial components, mobile applications, web services, modules for desktop software, and visualisation of large georeferenced datasets. In most of the projects, priority is given to the use of free software and GIS-related standards, contributing to the development of spatial data infrastructures.

The main application areas of this research are land management and rural development. The contribution made in these areas to innovation in public administration is worth noting: through contracts with different administrations, systems have been developed to improve the management of various types of procedures. These systems also encourage and facilitate public participation, transparency and e-government.

Another line of work within this field is the application of high-performance computing techniques to GIS, for example the parallelization of land use planning algorithms on multicore systems and clusters. We aim to deepen this line of work; future plans include applying these techniques to remote sensing algorithms, which also have significant computational costs.

Among the tools developed by the group, or to which it has contributed, we can mention SITEGAL, the Land Information System of Galicia, used to manage the Banco de Terras de Galicia; MeteoSIX mobile, an application for Android and iPhone providing the numerical weather forecasts of MeteoGalicia; MeteoRoute, an application for Android and iOS that provides weather forecasts along routes; the MeteoSIX API; the geographic viewer of the Raia Observatory; and SIUXFor, a system for the creation of the so-called Unidades de Xestión Forestal (forest management units). The GAC has also developed experimental tools for land consolidation, the exchange of parcels between individuals, the development of land use plans, and the visualisation of three-dimensional data, for example from LiDAR flights.