Participants are given the option to submit to two evaluations:
1. By Feb 1, 2014 (optional): drafts for comments. Based on this submission, we will provide participants with comments that may help them prepare their final submission. We encourage all participants to submit a draft, but this submission is not mandatory.
2. By May 1, 2014: final submissions to be evaluated. This submission will be used to determine the final evaluation results.
All submissions must be entered via the official EasyChair system (https://www.easychair.org/conferences/?conf=clefehealth2014). The system opened in January 2014.
Final submissions must include the following mandatory items:
1. A concise report of the design, implementation (if applicable), and a discussion of application results, in the form of an extended abstract. The abstract must highlight the obtained findings, possibly supported by an informal user study or other means of validation. This abstract is due by 1 May 2014, and teams may use this document as a basis for their working notes (due by 15 June 2014), if they wish. For submissions that address only Task 1a or 1b, please do not exceed 10 pages of text and figures illustrating the design and interaction, using the CLEF2013 abstract template. For submissions that address the grand challenge, please follow the same guideline but do not exceed 20 pages.
2. Two demonstration videos illustrating the relevant functionality of the functional design or paper prototype, as applied to the provided task data. In the first video, the user should be a member of the development team (i.e., a person who knows the functionality). In the second video, the user should be a novice, that is, a person with no previous experience using the functionality; this video should also explain how the novice was trained to use the functionality. Each video should run from 5 to 7.5 minutes for submissions that address only Task 1a or 1b, and from 10 to 15 minutes for submissions that address the grand challenge.
In addition to their actual submission, all participating teams are asked to provide us with the following mandatory items:
1. a response to our online survey that we will use to characterise teams and their submissions,
2. an extended abstract, which summarises the main details and results of the experiments (to produce the CLEF 2014 Book of Abstracts), and
3. a report (working notes) describing the experiments (to be published on the Internet).
Further details about the survey, extended abstracts, and reports will be provided here in May 2014.
For the Feb 1 submission, we ask teams to submit a draft of the aforementioned concise report.
Solutions need to address the task problems through appropriate visual-interactive design and must demonstrate their effectiveness. The problems are deliberately defined in a creative way and involve visual-interactive design and, ideally, a combination of automatic, visual, and interactive techniques. Participants are encouraged to implement prototypical solutions, but pure designs without implementation are also allowed.
Submissions will be judged on their rationale for the design, including the selection of appropriate visual-interactive data representations and reference to state-of-the-art techniques in information visualisation, natural language processing, information retrieval, machine learning, and document visualisation. They need to
1. demonstrate that the posed problems are addressed, in the sense that the lay patient is helped with their complex information needs,
2. provide a compelling use-case driven discussion of the workflow supported and exemplary results obtained,
3. highlight the evaluation approach and obtained findings.
Each final submission will be assessed by a team of at least three evaluation panelists, supported by one member of our organising committee and one peer from the other participating teams. The panel members are renowned experts in patient-centric healthcare, information visualisation, software design, machine learning, and natural language processing. They represent academic, industrial, and governmental sectors as well as healthcare practice in countries around the world. The panelists are neither members of our organising committee nor participants. Consequently, the list of panelists will be released after the submission deadline in May 2014.
The panel members will be guided to use our evaluation criteria in their assessments. Primary evaluation criteria include the effectiveness and originality of the presented submissions. Submissions will be judged on Usability, Visualisation, Interaction, and Aesthetics. The judges will be provided with a 5-point Likert scale for each heuristic and will also be requested to discuss the reasons behind their scores.
The final score will be the average of the Likert scores and the sum of unique problems. Our categories are based on the literature (Forsell & Johansson, 2010: “An Heuristic Set for Evaluation in Information Visualization”, AVI 2010, DOI: 10.1145/1842993.1843029) and adjusted for the present tasks and prototype access (i.e., the videos).
1. Minimal Actions: whether the number of steps needed to reach a solution is acceptable,
2. Flexibility: whether there is an easy/obvious way to proceed to the next/other task, and
3. Orientation and Help: ease of undoing actions, returning to the main screen, and finding help.
1. Information Encoding: whether the necessary/required info is shown,
2. Dataset Reduction: whether the required info is easy to digest,
3. Recognition rather than recall: users should not have to remember or memorise information to carry out tasks or understand the information presented,
4. Spatial Organization: layout, efficient use of space, and
5. Remove the extraneous: the display looks uncluttered.
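For concreteness, the averaging of Likert scores described above can be sketched as follows. This is a hypothetical illustration only; the function and data names are assumptions and not part of any official evaluation tooling.

```python
def average_likert_score(ratings):
    """Average all 5-point Likert scores across judges and heuristics.

    ratings: dict mapping judge name -> dict of heuristic name -> score (1-5).
    Returns the mean over every individual score. Names are illustrative.
    """
    all_scores = [score
                  for per_judge in ratings.values()
                  for score in per_judge.values()]
    if not all_scores:
        raise ValueError("no ratings supplied")
    if any(not 1 <= s <= 5 for s in all_scores):
        raise ValueError("Likert scores must lie in 1..5")
    return sum(all_scores) / len(all_scores)

# Hypothetical panel of two judges scoring three usability heuristics:
example = {
    "judge_a": {"Minimal Actions": 4, "Flexibility": 5, "Orientation and Help": 3},
    "judge_b": {"Minimal Actions": 3, "Flexibility": 4, "Orientation and Help": 4},
}
# (4 + 5 + 3 + 3 + 4 + 4) / 6 = 23/6, i.e. about 3.83
```

In practice each submission would receive one such average per heuristic group, but the task description does not specify the exact aggregation, so this sketch simply averages everything.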
Recognitions will be given to the best submissions in a number of categories, depending on the field of submissions. Prospective categories include, but are not limited to:
1. Effective use of visualisation,
2. Effective use of interaction,
3. Effective combination of interactive visualisation with computational analysis,
4. Solution adapting to different environments (e.g., desktop, mobile/tablet or print for presentation),
5. Best use of external information resources (e.g., Wikipedia, social media, Flickr, or YouTube),
6. Best solution for Task 1a,
7. Best solution for Task 1b,
8. Best solution for Grand Challenge, and
9. Best integration of external information resources.