DOI: 10.1145/2637002.2637026
Research article

Time well spent

Published: 26 August 2014

Abstract

Time-biased gain provides a general framework for predicting user performance on information retrieval systems, capturing the impact of the user's interaction with the system's interface. Our prior work investigated an instantiation of time-biased gain aimed at traditional search interfaces utilizing clickable result summaries, with gain realized from the recognition of relevant documents. In this paper, we examine additional properties of time-biased gain, demonstrating how it generalizes effectiveness measures from across the field of information retrieval. We explore a new instantiation of time-biased gain, applicable to systems where the user judges the quality of their experience by the amount of time well spent. Rather than the single number produced by traditional effectiveness measures, time-biased gain models user variability and produces a distribution of gain on a per-query basis. With this distribution, we can observe performance differences at the user level. We apply bootstrap sampling to estimate confidence intervals across multiple queries.
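
To make the evaluation pipeline described above concrete, the sketch below simulates a per-query distribution of time-biased gain under user variability and then applies a percentile bootstrap to obtain a confidence interval across queries. This is a minimal illustrative sketch, not the authors' implementation: the exponential decay with a 224-second half-life follows the calibration reported in their earlier time-based calibration work, while the lognormal reading-speed noise, the toy document times and gains, and the helper names (simulate_tbg, bootstrap_ci) are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(42)

HALF_LIFE = 224.0  # seconds; half-life value reported in prior TBG calibration work


def decay(t):
    # Probability that a user is still working at time t (exponential decay).
    return np.exp(-t * np.log(2.0) / HALF_LIFE)


def simulate_tbg(doc_times, doc_gains, n_users=1000):
    # Draw a distribution of TBG values for one query by perturbing per-document
    # reading times to model user variability (illustrative lognormal speed noise,
    # not the paper's calibrated user model).
    samples = np.empty(n_users)
    for i in range(n_users):
        speed = rng.lognormal(mean=0.0, sigma=0.3)        # hypothetical per-user speed factor
        times = np.cumsum(np.asarray(doc_times) * speed)  # time at which each document is reached
        samples[i] = np.sum(np.asarray(doc_gains) * decay(times))
    return samples


def bootstrap_ci(per_query_means, n_boot=10000, alpha=0.05):
    # Percentile bootstrap confidence interval for the mean TBG across queries.
    means = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(per_query_means, size=len(per_query_means), replace=True)
        means[b] = resample.mean()
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])


# Toy example: three queries with made-up per-document reading times (seconds) and gains.
queries = [
    ([30, 25, 40], [1, 0, 1]),
    ([20, 35],     [1, 1]),
    ([45, 30, 30], [0, 1, 0]),
]
per_query = np.array([simulate_tbg(t, g).mean() for t, g in queries])
low, high = bootstrap_ci(per_query)
print(f"mean TBG = {per_query.mean():.3f}, 95% CI = [{low:.3f}, {high:.3f}]")

Running the toy example prints a mean TBG and a (necessarily wide, given only three fabricated queries) 95% interval; in the paper's setting the per-query distributions would come from the stochastic user simulation rather than the simple noise model used here.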




Published In

IIiX '14: Proceedings of the 5th Information Interaction in Context Symposium
August 2014, 368 pages
ISBN: 9781450329767
DOI: 10.1145/2637002
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Sponsors

  • University of Regensburg

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 26 August 2014


Author Tags

  1. evaluation
  2. search

Qualifiers

  • Research-article

Conference

IIiX '14
Sponsor:
  • University of Regensburg

Acceptance Rates

IIiX '14 paper acceptance rate: 21 of 45 submissions, 47%
Overall acceptance rate: 21 of 45 submissions, 47%


Cited By

  • (2021) Task Intelligence for Search and Recommendation. Synthesis Lectures on Information Concepts, Retrieval, and Services, 13(3), 1-160. DOI: 10.2200/S01103ED1V01Y202105ICR074. Online publication date: 9-Jun-2021
  • (2017) The Pareto Frontier of Utility Models as a Framework for Evaluating Push Notification Systems. Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval, 253-256. DOI: 10.1145/3121050.3121089. Online publication date: 1-Oct-2017
  • (2017) Investigating Users' Time Perception during Web Search. Proceedings of the 2017 Conference on Human Information Interaction and Retrieval, 127-136. DOI: 10.1145/3020165.3020184. Online publication date: 7-Mar-2017
  • (2017) Does Document Relevance Affect the Searcher's Perception of Time? Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, 141-150. DOI: 10.1145/3018661.3018694. Online publication date: 2-Feb-2017
  • (2016) The Solitude of Relevant Documents in the Pool. Proceedings of the 25th ACM International Conference on Information and Knowledge Management, 1989-1992. DOI: 10.1145/2983323.2983891. Online publication date: 24-Oct-2016
  • (2016) Optimizing Nugget Annotations with Active Learning. Proceedings of the 25th ACM International Conference on Information and Knowledge Management, 2359-2364. DOI: 10.1145/2983323.2983694. Online publication date: 24-Oct-2016
  • (2016) User Behavior in Asynchronous Slow Search. Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, 345-354. DOI: 10.1145/2911451.2911541. Online publication date: 7-Jul-2016
  • (2016) Manipulating Time Perception of Web Search Users. Proceedings of the 2016 ACM Conference on Human Information Interaction and Retrieval, 293-296. DOI: 10.1145/2854946.2854994. Online publication date: 13-Mar-2016
  • (2016) The Curious Incidence of Bias Corrections in the Pool. Advances in Information Retrieval, 267-279. DOI: 10.1007/978-3-319-30671-1_20. Online publication date: 2016
  • (2015) Pooling for User-Oriented Evaluation Measures. Proceedings of the 2015 International Conference on The Theory of Information Retrieval, 341-344. DOI: 10.1145/2808194.2809493. Online publication date: 27-Sep-2015
