Extracting audio summaries to support effective spoken document search

D. Spina, J. R. Trippas, L. Cavedon, et al. - Journal of the Association for Information Science and Technology, 2017 - Wiley Online Library
We address the challenge of extracting query biased audio summaries from podcasts to support users in making relevance decisions in spoken document search via an audio‐only communication channel. We performed a crowdsourced experiment that demonstrates that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries or “snippets” for supporting users in making relevance judgments against a query. In particular, the results show that summaries generated from ASR transcripts are comparable, in utility and user‐judged preference, to spoken summaries generated from error‐free manual transcripts of the same collection. We also observed that content‐based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. We describe a methodology for constructing a new test collection, which we have made publicly available.
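The abstract does not spell out how the query-biased snippets are extracted from the ASR transcripts. As a rough sketch only, and not the authors' method, one simple way to bias a snippet toward a query is to slide a fixed-length word window over the transcript and keep the window with the most query-term matches; the function name, window size, and scoring below are illustrative assumptions.

```python
import re


def query_biased_snippet(transcript: str, query: str, window: int = 30) -> str:
    """Hypothetical query-biased snippet extraction over a (possibly noisy)
    ASR transcript: return the fixed-length word window with the highest
    count of query-term matches. Not the scoring used in the paper."""
    q_terms = set(re.findall(r"\w+", query.lower()))
    words = transcript.split()
    if not words:
        return ""
    best_start, best_score = 0, -1
    for start in range(max(1, len(words) - window + 1)):
        # Count how many words in this window match a query term,
        # ignoring case and surrounding punctuation.
        hits = sum(
            1
            for w in words[start:start + window]
            if re.sub(r"\W", "", w.lower()) in q_terms
        )
        if hits > best_score:
            best_start, best_score = start, hits
    return " ".join(words[best_start:best_start + window])


if __name__ == "__main__":
    asr_text = (
        "welcome to the show today we talk about marathon training "
        "plans for beginners and how to choose running shoes"
    )
    print(query_biased_snippet(asr_text, "marathon training", window=12))
```

In the audio-only setting described by the paper, a text window like this would then be rendered to speech (or located in the original audio) before being played back to the searcher.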