Projects

GoLocal (CMU/Portugal)

From monitoring global data streams to context-aware recommendations

Streams of Web user activity data are mostly discarded by current Web information systems. User location, devices, services and other sensors create specific information-consumption profiles that online services should identify to better answer consumer needs. However, the scale of this data is too large to be archived or processed in full. Most of it is only useful for a short period of time and is related to short-lived events, far shorter than the time a batch, non-distributed data mining algorithm needs to process large-scale data in real time. For example, a tourist typically stays in Lisbon only 2 to 2.5 days, a very short window of opportunity for recommending one of the city's many attractions or cultural events.
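The short-lived nature of this data suggests processing it inside a sliding time window rather than archiving the full stream. The following is a minimal sketch of that idea, under assumptions of our own (the class name, the 2-day horizon matching the tourist example, and the toy check-in events are all invented for illustration):

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical sketch: keep only events inside a short sliding time window,
# discarding older items instead of archiving the full stream.
class SlidingWindow:
    def __init__(self, horizon=timedelta(days=2)):
        self.horizon = horizon
        self.events = deque()            # (timestamp, payload), oldest first

    def add(self, ts, payload):
        self.events.append((ts, payload))
        self._expire(ts)

    def _expire(self, now):
        # Drop events older than the horizon: they outlived their usefulness.
        while self.events and now - self.events[0][0] > self.horizon:
            self.events.popleft()

    def current(self):
        return [p for _, p in self.events]

w = SlidingWindow(horizon=timedelta(days=2))
t0 = datetime(2016, 7, 1)
w.add(t0, "checked in: Belem Tower")
w.add(t0 + timedelta(days=3), "checked in: Alfama")
print(w.current())  # only the recent check-in survives the 2-day horizon
```

A production stream processor would of course distribute this over many nodes; the sketch only shows the windowed-expiry policy itself.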

GoLocal proposes to advance big data technology in the development of new information businesses and services. Our long-term vision is to make big data economically useful by realizing the full potential of big data analysis technologies in the design of innovative services for the end consumer. The consortium will release a big data processing framework incorporating several cutting-edge technologies.

There are many opportunities to leverage big data to innovate services. Lisbon City Council, SAPO and Priberam, the non-academic partners in the consortium, will provide real-world consumer data: both language and behavioral data captured in online services and mobile apps. This data can be used to recommend a full day of tourist activities, to identify the right consumer for a given promotion, or to monitor a brand's reputation.



COGNITUS (H2020 project)

Converging Broadcast and User Generated Content for Interactive Ultra-High Definition Services

COGNITUS will deliver innovative ultra-high definition (UHD) broadcasting technologies that allow the joint creation of UHD media exploiting the knowledge of professional producers, the ubiquity of user generated content (UGC), and the power of interactive networked social creativity in a synergistic multimedia production approach.

The project will provide a proof of concept to cement the viability of interactive UHD content production and exploitation, through use case demonstrators at large events of converging broadcast and user generated content for interactive UHD services. The envisaged demonstrators will be based on two different use cases drawn from real-life events. These use cases are, in turn, single examples of the fascinating and potentially unlimited new services that could be unleashed by the unavoidable confluence of UHD broadcasting technology and smart social mobile UGC brought about by COGNITUS.

Recent technological advances in UHD broadcasting and mobile social multimedia sharing, coupled with over fifty years of research and development in multimedia systems technology, mean that the time is now ripe for integrating research outputs into solutions that support high-quality, user-sourced, on-demand media to enrich the conventional broadcasting experience. The COGNITUS vision is to deliver a compelling proof of concept for the validity, effectiveness and innovative power of this integrated approach. Accordingly, over 36 months the project will demonstrate the ability to bring a new range of dedicated media services to the European broadcasting sector, adding critical value to both the media and creativity sectors.



VisualSpeech (CMU/Portugal)

This exploratory project proposes to research natural and multimodal interaction mechanisms for providing bio-feedback in speech therapy through serious (computer) games. To this end, it will develop a serious-game toolset with two main goals.

The first goal is to keep patients engaged in the exercises in order to have more fruitful sessions. Building on our previous work, the game will use visual stimuli and a reward system that adapts to each particular patient, and it will provide analysis of the facial exercises performed as part of speech therapy.

The second goal is to provide the therapist with a toolset to plan the course of the ongoing therapy session. This will leverage the team's previous experience in language tutoring systems. The toolset will access multimodal information extracted from the current and previous sessions. More specifically, it will provide a powerful set of inspection tools to examine the therapy recordings and assist the therapist with audio-visual recordings and annotations of the session.

QSearch (QREN)

Personalized search for enterprise settings

This is a technology transfer project co-funded by Quidgest and QREN/AdI. The QSearch project aims at developing enterprise-search technology to enable better access to, and management of, documents. The main technological objectives of the project are: (1) textual analysis techniques for information extraction, (2) advanced search techniques, and (3) novel search interfaces supporting faceted search. These key technologies are nowadays fundamental tools for every user of a document management system, and they are directly linked to the productivity and optimization of an information management system.
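Faceted search, objective (3) above, rests on a simple idea: count metadata values across the current result set so the interface can offer filters. The sketch below illustrates that counting step only; the document fields, facet names and sample documents are invented for illustration and do not come from the QSearch system:

```python
from collections import Counter

# Hypothetical sketch of facet counting for an enterprise document search:
# given search results with metadata, count values per facet so a UI can
# render filters such as "type: pdf (2)" or "author: Silva (2)".
DOCS = [
    {"title": "Q3 report", "type": "pdf",  "author": "Silva"},
    {"title": "Q4 report", "type": "pdf",  "author": "Costa"},
    {"title": "Minutes",   "type": "docx", "author": "Silva"},
]

def facet_counts(results, facets=("type", "author")):
    """Return, for each facet field, a Counter of its values in the results."""
    return {f: Counter(doc[f] for doc in results) for f in facets}

print(facet_counts(DOCS))
```

In a real engine these counts are computed by the index (e.g. via inverted lists per field value) rather than by scanning results, but the contract is the same.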


CS4SE (FCT-MEC)

Compressive sensing for media search engines

A number of information exploration applications have recently emerged providing access to rich media, e.g., Flickr, YouTube and Wikipedia. These applications are used for both entertainment and professional purposes. The success of these applications is closely related to the users’ role in the information-processing chain: users generate content, metadata and provide valuable feedback concerning information relevance. Systems collect vast amounts of user interaction data such as queries, click data, annotations, comments and new content. These diverse sources of information create two critical challenges to traditional indexing and search techniques: (1) mining the relevant information from a large number of sources and (2) matching the user query to the extracted information.

The main hypothesis of this project is that compressed sensing techniques will define the new state of the art for multimedia information retrieval. This hypothesis is supported by two facts. The first is related to the L1 minimization criterion: rich media applications need to handle information with a large number of variables, and sparse models, such as those computed by compressed sensing techniques, can indeed reduce the number of information sources. The second is related to the large-scale resources available, which allow the inference of sparse representations of media documents.
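To make the L1 minimization criterion concrete, here is a minimal NumPy sketch of computing a sparse representation with ISTA (iterative shrinkage-thresholding), a standard solver for the Lasso form of the problem. The dictionary, signal and parameter values are synthetic, chosen purely for illustration, and are not the project's actual models:

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """ISTA for the Lasso problem  min_x 0.5*||y - D x||^2 + lam*||x||_1.
    D: dictionary (n_features x n_atoms), y: signal; returns a sparse code x."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the quadratic term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Synthetic example: a signal built from 3 of 50 dictionary atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = D @ x_true
x_hat = ista(D, y, lam=0.05, n_iter=500)
print(np.count_nonzero(np.abs(x_hat) > 1e-2))  # few active atoms: a sparse code
```

The soft-thresholding step is what enforces sparsity: coefficients whose gradient update stays below lam/L are zeroed out, leaving only a small set of active atoms to describe the signal.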


ImTV (UTAustin/Portugal)

On-demand Immersive-TV

Millions of users now look for video entertainment not only on their favorite TV channels or in cinemas, but also online – an example of this paradigm shift is the live YouTube transmission of a U2 concert. High-quality entertainment video shows are now created by professionals, independent producers and amateurs who publish their media online free of charge. Our goal is to devise a platform that integrates media from different sources and end-users.

Our lab's goal in this project is to research new algorithms that model user preferences based on comments, tags and usage history, and that compute heterogeneous media recommendations for individuals and groups of users. These algorithms are intended to be integrated into a global platform where traditional TV services are merged with Web TV services.
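One simple baseline for this kind of preference modeling is a content-based profile: represent each item by its tags, average the tag vectors of a user's history into a profile, and rank unseen items by cosine similarity. The sketch below illustrates that baseline only; the item names, tags and catalogue are invented for illustration and are not ImTV data:

```python
import numpy as np

# Hypothetical sketch: build a user profile from the tags of items in their
# viewing history, then rank unseen items by cosine similarity to the profile.
ITEM_TAGS = {                      # toy catalogue (item -> tags), all invented
    "concert_u2":  {"music", "live", "rock"},
    "news_sports": {"sports", "news"},
    "indie_doc":   {"music", "documentary"},
    "cook_show":   {"food", "lifestyle"},
}
VOCAB = sorted({t for tags in ITEM_TAGS.values() for t in tags})

def tag_vector(tags):
    """Binary bag-of-tags vector over the fixed vocabulary."""
    return np.array([1.0 if t in tags else 0.0 for t in VOCAB])

def recommend(history, k=2):
    """Rank items the user has not seen by cosine similarity between the
    mean tag vector of their history and each candidate's tag vector."""
    profile = np.mean([tag_vector(ITEM_TAGS[i]) for i in history], axis=0)
    scores = {}
    for item, tags in ITEM_TAGS.items():
        if item in history:
            continue
        v = tag_vector(tags)
        denom = np.linalg.norm(profile) * np.linalg.norm(v) + 1e-12
        scores[item] = float(profile @ v / denom)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(["concert_u2"]))  # music-tagged items rank first
```

Group recommendation can reuse the same machinery by averaging the profiles of all group members before scoring, which is one common aggregation strategy among several.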