MC2 2018 Lab

Multilingual Cultural Mining and Retrieval


More about use case, data and evaluation process

Friday 9 February 2018, by Chiraz Latiri, Julio Gonzalo, Malek Hajjem

Detailed description

Use case

Given a selection of festival names drawn from popular festivals on Flickr, in English and French, participants have to search for the most argumentative tweets in a collection covering 18 months of news about festivals in different languages. The retrieved tweets must form a summary of tweets ranked by their probability of being argumentative. This use case was proposed to help festival organisers process such sets of tweets by priority: the more varied a run's ranked summary is in terms of argumentation, the more useful the run is.
For each language (English and French), a monolingual scenario is expected: given a festival name from the topics file, participants have to retrieve, from the microblog collection, the set of the most argumentative tweets in the same language as the query.
Samples of argumentative tweets are provided here: English_Sample, French_Sample


The English and French topic sets contain 12 and 4 festival names respectively. They represent a set of popular festivals on Flickr for which pictures are available. Topics were carefully selected by the organizers to ensure that each selected topic has enough related argumentative tweets in our corpus. This manual selection was conducted to make evaluation feasible.

The choice of Flickr as a source of topics was motivated by the fact that this social media platform hosts high-quality amateur pictures. This personal involvement serves our goal, as we are mainly interested in personal tweets.

Microblog Corpus

A login is required to access the data once you have registered for CLEF.

  • The complete stream of 70 000 000 microblogs is available here for registered participants. This document collection is provided by GAFES. Microblogs are provided with their meta-information and expanded URLs on a MySQL server.
    Due to legal terms, access to this database is restricted to registered participants under a privacy agreement.
  • An Indri index with a web interface is available to query the whole set of microblogs.


Evaluation

The official evaluation measure is NDCG (Normalized Discounted Cumulative Gain).

This ranking measure assigns a score to each retrieved tweet, with a discount function over the rank. As we are mostly interested in top-ranked arguments, this measure meets our expectations.
It was also used in the TREC Microblog Track [1]. A tweet is considered highly relevant when it is personal and contains an argument that directly refers to the festival (topic).
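As a rough illustration, NDCG at rank k can be computed as follows. This is a minimal sketch, not the official evaluation script; the relevance grades in the example are hypothetical values chosen for illustration.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: each relevance grade is
    discounted by the log of its (1-based) rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances, k):
    """NDCG@k: DCG of the system ranking divided by the DCG
    of the ideal (descending-sorted) ranking."""
    ideal_dcg = dcg(sorted(relevances, reverse=True)[:k])
    if ideal_dcg == 0:
        return 0.0
    return dcg(relevances[:k]) / ideal_dcg

# Hypothetical relevance grades for the top 5 retrieved tweets
# (2 = highly relevant argumentative tweet, 1 = relevant, 0 = not relevant)
grades = [2, 0, 1, 2, 0]
print(round(ndcg(grades, 5), 3))  # ~0.894
```

Because of the logarithmic discount, placing a highly relevant tweet at rank 1 instead of rank 4 raises the score noticeably, which matches the track's emphasis on top-ranked arguments.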

[1] Jimmy Lin, Miles Efron, Yulu Wang, Garrick Sherman, Ellen Voorhees. Overview of the TREC-2015 Microblog Track.

Result Submission
Runs must follow the classical TREC top-file format. Only the top 100 results for each query must be submitted. Each run, in each language, must contain the following fields:
- Id: a long integer representation of the unique identifier of the tweet
- Score: the probability of the tweet being argumentative, as assigned by the participant system
- Rank: the position assigned to the tweet in the ranked list of argumentative tweets
- Content: the microblog textual content
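For illustration, a run file with the fields listed above might be produced as follows. This is a minimal sketch under assumptions: the tab separator, field order, file name, and the example tweet IDs are hypothetical, not the official specification.

```python
# Hypothetical ranked results for one topic: (tweet_id, probability, content)
results = [
    (850123456789012345, 0.97,
     "Cannes is relevant because movies get timed standing ovations."),
    (850123456789012999, 0.85,
     "Not going to lie, one of my favorite things about Cannes ..."),
]

with open("run_english.txt", "w", encoding="utf-8") as f:
    # Sort by score descending, keep only the top 100 results per query,
    # and write one tab-separated line per tweet: Id, Score, Rank, Content.
    ranked = sorted(results, key=lambda r: r[1], reverse=True)[:100]
    for rank, (tweet_id, score, content) in enumerate(ranked, start=1):
        f.write(f"{tweet_id}\t{score}\t{rank}\t{content}\n")
```

Sorting before assigning ranks guarantees that the Rank field is consistent with the Score field, which evaluation tools typically require.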

Diversity criteria:
The more a run detects different arguments about a cultural event, the more it is interesting.

Examples for the "Cannes festival" topic:
- I've seen some people saying they're boycotting Cannes because of the high heels rule. I'm not sure they'll notice.
- Not going to lie, one of my favorite things about the Cannes festival is all of these handsome men in tuxedos.
- Cannes is relevant because movies get timed standing ovations.

How to get the data?

To get access to the microblog corpus, email us or register for CLEF.
The English topics can be downloaded here
The French topics can be downloaded here.

Contact Information

If you have any questions, email us at this address: