Cross-Project Collaboration

The FET Proactive Emerging Paradigms and Communities call aims to explore and consolidate a new technological direction, putting it firmly on the map as a viable paradigm for future technology, and to stimulate the emergence of a European innovation ecosystem around that paradigm. Subtopic A, Artificial Intelligence for extended social interaction, explores the combination of new Artificial Intelligence (AI) and immersive interaction technologies to enhance the social dimension of future virtual social spaces.

A total of 38 proposals were submitted to Subtopic A. Of these, four gained support under the EIC Pathfinder/FET Proactive funding scheme: EXPERIENCE, SONICOM, TOUCHLESS and CAROUSEL.

The four projects will join forces through a series of collaboration activities examining interaction technologies that enhance the social dimension in future virtual social spaces. One example of an area for such collaboration is the multimodal aspect of Virtual and Augmented Reality: any immersive system or application must account for the various human senses through visual, auditory, haptic and other modalities.

Collaborative work in this direction will allow the projects not only to share best practices and state-of-the-art technologies, but also to take the potential achievements of each individual project well beyond what was originally envisaged. Furthermore, activities such as special issues in academic journals, workshops and special sessions at international conferences, and other knowledge-transfer efforts will maximise the potential impact of the collaborative research.

EXPERIENCE – The “Extended-Personal Reality”: augmented recording and transmission of virtual senses through artificial-IntelligENCE
The project focuses on virtual and augmented reality. With the help of artificial intelligence, it opens a previously uncovered dimension of interpersonal communication.
SONICOM – Transforming auditory-based social interaction and communication in VR/AR
Immersive audio is our everyday experience of being able to hear and interact with the sounds around us. Simulating spatially located sounds in virtual or augmented reality (VR/AR) must be done individually for each listener, which is expensive and time-consuming, making it commercially unfeasible. Furthermore, the impact of immersive audio beyond perceptual metrics such as presence and localisation is still an unexplored area of research, specifically when related to social interaction, entering the behavioural and cognitive realms.

SONICOM will revolutionise the way we interact socially within AR/VR environments and applications by leveraging methods from Artificial Intelligence (AI) to design a new generation of immersive audio technologies and techniques, specifically looking at personalisation and customisation of the audio rendering. Using a data-driven approach, it will explore, map and model how the physical characteristics of spatialised auditory stimuli can influence observable behavioural, physiological, kinematic, and psychophysical reactions of listeners within social interaction scenarios.

TOUCHLESS – Enabling users to receive digital touch sensations that evoke not only a functional response but also an experiential one, without any physical contact with a device.
Our society is increasingly being deprived of social tactile interactions, due in part to increased virtualisation and the growth of digital networks, and recently magnified by the COVID-19 social distancing measures. The TOUCHLESS project will develop the next generation of touchless haptic technologies using neurocognitive models and a novel artificial intelligence (AI) framework.

Our ambition is to go beyond functional haptic technology (simple haptic notifications and feedback to discriminate between objects) and enable computer systems to intelligently recreate the experiences that were previously lost in the virtual transition.



CAROUSEL – Paving the way for the emergence of social-physical behaviour in digital characters by better understanding human body language and enabling them to interact autonomously with groups of people.
