
Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7457)

Included in the following conference series: Pacific Rim Knowledge Acquisition Workshop (PKAW)

Abstract

Crowdsourcing is a low-cost way of obtaining human judgements on a large number of items, but the knowledge in these judgements is not reusable: each further item to be processed requires further human judgement. Ideally one could also capture the reasons people have for these judgements, so that the ability to make the same judgements could be incorporated into a crowd-sourced knowledge base. This paper reports on experiments in which 27 students built knowledge bases to classify the same set of 1000 documents. We assessed the performance of the students' knowledge bases by having the same students evaluate each other's knowledge bases on a set of test documents, and we explored simple techniques for combining the knowledge from the students. The results suggest that although people vary in how they classify documents, simple merging may produce reasonable consensus knowledge bases.
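The abstract does not specify which merging techniques were used. As a minimal illustrative sketch only, the Python below shows one simple approach consistent with the description: treating each student's knowledge base as a classifier and forming a consensus by majority vote over their per-document labels. All names here (`merge_by_majority`, `student_labels`, the example data) are hypothetical, not from the paper.

```python
from collections import Counter

def merge_by_majority(student_labels: dict[str, dict[str, str]],
                      min_agreement: float = 0.5) -> dict[str, str]:
    """Merge per-student document classifications by majority vote.

    student_labels maps student id -> {document id -> class label}.
    Returns a consensus label for each document on which more than
    `min_agreement` of the voting students agree; documents without
    sufficient agreement are omitted (no consensus reached).
    """
    # Tally, per document, how many students assigned each label.
    votes: dict[str, Counter] = {}
    for labels in student_labels.values():
        for doc_id, label in labels.items():
            votes.setdefault(doc_id, Counter())[label] += 1

    # Keep only labels that clear the agreement threshold.
    consensus = {}
    for doc_id, counts in votes.items():
        label, top = counts.most_common(1)[0]
        if top / sum(counts.values()) > min_agreement:
            consensus[doc_id] = label
    return consensus

# Hypothetical usage: three students classify two documents.
students = {
    "s1": {"d1": "sports", "d2": "politics"},
    "s2": {"d1": "sports", "d2": "finance"},
    "s3": {"d1": "sports", "d2": "politics"},
}
print(merge_by_majority(students))  # {'d1': 'sports', 'd2': 'politics'}
```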





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kim, Y.S., Kang, B.H., Ryu, S.H., Compton, P., Han, S.C., Menzies, T. (2012). Crowd-Sourced Knowledge Bases. In: Richards, D., Kang, B.H. (eds) Knowledge Management and Acquisition for Intelligent Systems. PKAW 2012. Lecture Notes in Computer Science (LNAI), vol. 7457. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32541-0_23


  • DOI: https://doi.org/10.1007/978-3-642-32541-0_23

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32540-3

  • Online ISBN: 978-3-642-32541-0

  • eBook Packages: Computer Science, Computer Science (R0)
