2014 Guidelines

Attention lab participants: you should now start writing your working notes papers!

  • Online working notes internal review submission deadline: June 3.
  • Online working notes camera-ready submission to CLEF deadline: June 7.
Details on preparing working notes and a link to the working notes submission system are available at: http://clef2014.clef-initiative.eu/index.php?page=Pages/instructions_for_authors.html


The ShARe/CLEF Task 2 disease/disorder guidelines provide an overview of the original ShARe attribute guidelines with extensions for the 2014 ShARe/CLEF Task 2. We encourage participants to review these guidelines prior to system development. Participants will be provided with training and test data sets. The evaluation will be conducted on the withheld test disease/disorder templates. Participating teams are asked to stop development as soon as they download the test disease/disorder templates. Teams may use any outside resources in their algorithms.

Timeline

(Small) Example data set release: Dec 9 2013
(Full) Training data set release: Jan 10 2014
Test data set release: April 23 2014
Test data submissions: May 1 2014

Task 2 Evaluations
Post-submission normalization and cue detection assessment will be conducted on the test disease/disorder templates to generate the complete result set. To this end, participants are asked to submit up to two runs each for Task 2a and Task 2b.
  • Task 2a is mandatory for participation.
  • Task 2b is optional for participation.
Submitted runs must follow the template format.

Submitting your runs:
To submit your runs to Task 2a and 2b, please follow these guidelines carefully.

1. Observe the task-specific submission deadline of 1 May 2014.

2. Navigate to our EasyChair submission page for CLEFeHealth2014 runs (https://www.easychair.org/conferences/?conf=clefehealth2014) and submit separately to each task by selecting “New Submission”. Submit all runs for one task at the same time. After you have created a new submission, you can update it, but no updates of runs are accepted after the deadline has passed.

3. List all your team members as “Authors”. “Address for Correspondence” and “Corresponding author” refer to your team leader. Note: you can acknowledge people not listed as authors separately in the working notes (to be submitted by June 7; see the instructions linked above). We intend this process to be very similar to defining the list of authors in scientific papers.

4. Please provide the task and your team name as “Title” (e.g., “Task 2a: Team NICTA” or “Task 2a using extra annotations: Team NICTA”) and a short description (max 100 words) of your team as “Abstract”. See the category list below the abstract field for the task names. If you submit to multiple tasks, please copy and paste the same description into all your submissions and use the same team name throughout.

5. Choose a “category” and one or more “Groups” to describe your submission. We allow up to two runs each for Task 2a and Task 2b.

6. Please provide 3-10 “Keywords” that describe the different runs in the submission, including methods (e.g., MetaMap, Support Vector Machines, Weka) and resources (e.g., Unified Medical Language System, expert annotation). You will provide a narrative description later in the process.

7. As “Paper”, please submit a zip file containing the runs for this task. Please name each run as follows: “name + run + task + add/noadd” (e.g., TeamNICTA.1.2a.add), where name refers to your team name; run to the run ID; task to 2a or 2b; and “add/noadd” to the use of additional annotations (a sketch of a name-format check appears after this list). Please follow the file formats available at https://sites.google.com/a/dcu.ie/clefehealth2014/task-2/2014-dataset.

8. As the mandatory attachment file, please provide a txt file with a description of the submission. Please structure this file using your run-file names from above. For each run, provide a summary of the processing pipeline (i.e., methods and resources) in at most 200 words. Be sure to describe the differences between the runs in the submission.
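For illustration, here is a minimal Python sketch of a check for the run-naming convention in step 7. The regular expression, the period separator, and the helper name are our own assumptions; only the example file name comes from the guidelines above.

    import re

    # Assumed pattern for "name + run + task + add/noadd"; the period
    # separator is inferred from the example TeamNICTA.1.2a.add in step 7.
    RUN_NAME = re.compile(r"^(?P<team>\w+)\.(?P<run>\d+)\.(?P<task>2[ab])\.(?P<ann>add|noadd)$")

    def check_run_name(filename: str) -> dict:
        """Return the parsed parts of a run-file name, or raise ValueError."""
        match = RUN_NAME.match(filename)
        if match is None:
            raise ValueError("run name does not follow the convention: " + filename)
        return match.groupdict()

    print(check_run_name("TeamNICTA.1.2a.add"))
    # {'team': 'TeamNICTA', 'run': '1', 'task': '2a', 'ann': 'add'}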

 
Evaluation Metrics:
Evaluation will focus on Accuracy for Task 2a and F1-score for Task 2b. We will evaluate each task both overall and by attribute type.

(2a) predict each attribute’s normalization slot value

Evaluation measure: Accuracy (overall and per attribute type)
Accuracy = Correct/Total 
Correct = number of attribute:value slots with the correct normalization value
Total = number of attribute:value slots
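
As a hedged illustration of this measure, a minimal Python sketch; the slot keys (e.g., a (doc_id, mention_id, attribute) tuple) are our assumption, not the official template format:

    def accuracy(gold: dict, pred: dict) -> float:
        """Accuracy = Correct / Total over attribute:value slots.

        gold and pred map a slot key, e.g. a hypothetical
        (doc_id, mention_id, attribute) tuple, to its normalization value.
        """
        total = len(gold)
        correct = sum(1 for slot, value in gold.items() if pred.get(slot) == value)
        return correct / total if total else 0.0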

(2b) predict each attribute’s cue slot value

Evaluation measure: F1-score (overall and per attribute type)
F1-score = (2 * Recall * Precision) / (Recall + Precision)
Recall = TP / (TP + FN)
Precision = TP / (TP + FP)
TP = matching span
FP = spurious span
FN = missing span

Exact F1-score: the predicted span is identical to the reference standard span
Overlapping F1-score: the predicted span overlaps the reference standard span
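
As a rough sketch of the two matching regimes, assuming spans are half-open (start, end) character-offset pairs; the offsets and the overlap-counting convention are our assumptions, not the official scorer:

    def spans_f1(gold: set, pred: set, overlap: bool = False) -> float:
        """F1 over spans, each a half-open (start, end) offset pair.

        overlap=False: exact matching (TP = identical span).
        overlap=True: a span counts as matched if it overlaps any span
        on the other side. This convention is an assumption, not the
        official scorer.
        """
        def overlaps(a, b):
            return a[0] < b[1] and b[0] < a[1]

        if overlap:
            matched_pred = sum(1 for p in pred if any(overlaps(p, g) for g in gold))
            matched_gold = sum(1 for g in gold if any(overlaps(g, p) for p in pred))
            precision = matched_pred / len(pred) if pred else 0.0
            recall = matched_gold / len(gold) if gold else 0.0
        else:
            tp = len(gold & pred)                        # TP = matching spans
            precision = tp / len(pred) if pred else 0.0  # FP = spurious spans
            recall = tp / len(gold) if gold else 0.0     # FN = missing spans
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

For example, spans_f1({(0, 5)}, {(3, 8)}) is 0.0 under exact matching but 1.0 with overlap=True.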


 
Evaluation Write-up:
Participating groups in Task 2 are asked to submit a report (working notes) describing their Task 2 experiments.

Details on preparing working notes and a link to the working notes submission system are available at: http://clef2014.clef-initiative.eu/index.php?page=Pages/instructions_for_authors.html