SIGIR'17 Workshop on Axiomatic Thinking for Information Retrieval and Related Tasks (ATIR)

Call For Papers

  • ATIR: Workshop on Axiomatic Thinking for Information Retrieval and Related Tasks
  • Co-located with ACM SIGIR 2017
  • August 11, 2017. Tokyo, Japan

Motivation

The goal of the workshop is to bring together researchers and practitioners interested in applying axiomatic analysis to all kinds of IR and IR-related problems, in particular both those developing retrieval models and those developing evaluation measures, and to enable them to share their findings (both positive and negative), present their latest research results, and discuss future directions.

Theme

As the title of the workshop suggests, its general theme is the application of axiomatic thinking, in all its aspects, to IR and IR-related problems. The basis for this theme is the recent growth of work applying axiomatic thinking to analyze and improve both retrieval models and evaluation metrics, which we expect to continue. Existing work has clearly demonstrated many advantages of axiomatic thinking, in particular concrete theoretical results in the form of novel constraints that retrieval functions or evaluation metrics should satisfy, together with the improved models and metrics derived from them. However, much more research is still needed in multiple directions.
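
To make the notion of a constraint concrete, one well-known example from prior axiomatic analyses of retrieval models is the term-frequency constraint TFC1 of Fang, Tao, and Zhai (SIGIR 2004). The sketch below states it in our own LaTeX notation, with c(w, d) for the count of term w in document d, |d| for document length, and S(q, d) for the retrieval score; this notation is illustrative and not prescribed by the workshop.

  % TFC1: for a single-term query q = {w} and two documents of equal length,
  % the document containing the query term more often must score higher.
  \[
    |d_1| = |d_2| \;\wedge\; c(w, d_1) > c(w, d_2)
    \;\Longrightarrow\;
    S(q, d_1) > S(q, d_2)
  \]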

Opportunities for applying axiomatic thinking also go beyond analyzing basic retrieval functions; in fact, understanding constraints is also beneficial to many IR tasks that use machine learning techniques. Instead of having a designer carefully choose a set of assumptions when designing a formal model, these approaches use machine learning to weight items in a pool of features derived from many retrieval heuristics. However, this potentially results in a bloated backend that computes many features irrelevant to the task or collection. Knowing which features are relevant would help slim down backends and speed up both learning and ranking. An important strength of the axiomatic methodology is that evaluation data sets become resources for checking motivated hypotheses rather than targets of optimization, which carries a risk of overfitting. There are even more opportunities for new research on applying axiomatic thinking to evaluation, as has already begun to happen with axiomatic analyses of metrics for tasks such as text categorization, clustering, and ranking.

In general, an understanding of how to apply axiomatic thinking to IR problems may become increasingly important as information retrieval continues to broaden into new areas. New tasks often require new constraints, and an understanding of these constraints can provide guidance on how to adapt existing methods or develop new ones for the new tasks. For example, domain-specific IR tasks such as medical record search might require new retrieval constraints that capture domain knowledge.

The workshop aims to bring together researchers and practitioners from a broader community to exchange research ideas and results and to foster collaborations across subcommunities. Specific topics we envision the workshop covering include, but are not limited to:

  • What constraints are effective at improving retrieval performance independent of the underlying model?
  • What constraints were expected to be useful but have not been effective in practice? Why not?
  • In the case of evaluation metrics, why do some metric constraints not affect system comparisons or user satisfaction?
  • How can we potentially unify the axiomatic analysis of IR models and evaluation metrics given that both lines of work aim at formally modeling relevance?
  • Have new languages, media, or domains suggested new constraints for established domains?
  • To what extent is a valid constraint in one domain also valid in other domains? More generally, which constraints for retrieval methods or evaluation metrics are core ones, and which constraints are highly scenario dependent?
  • How can axiomatic thinking be combined with machine learning techniques to learn more effective retrieval functions?

Planned Activities

  • Keynote talk
  • Panel
  • Presentations of papers

Paper Submission

We solicit papers describing research related to the above theme. In addition to innovative methods with promising results, we also welcome papers reporting negative results.

Papers need to be:

  • 4 pages (short papers) or 10 pages (long papers)
  • In ACM format
  • Submission Site: EasyChair

Formal proofs may be included as supplementary material. Submissions are not anonymous.

Important Dates

  • Submission deadline: June 23 (extended; applies to both long and short papers)
  • Reviews due: July 7
  • Notification: July 10

Workshop Organizers
