SmartWiki: A reliable and conflict-refrained Wiki model based on reader differentiation and social context analysis
Abstract
Wiki systems, such as Wikipedia, provide a multitude of opportunities for large-scale online knowledge collaboration. Despite Wikipedia’s successes with the open editing model, dissenting voices give rise to unreliable content due to conflicts amongst contributors. Controversial articles that are frequently modified by dissenting editors hardly present reliable knowledge. Some overheated controversial articles may be locked by Wikipedia administrators, who might impose their own bias on the topic. This can undermine both the neutrality and the freedom policies of Wikipedia. As Richard Rorty suggested, “Take Care of Freedom and Truth Will Take Care of Itself” [1], we present a new open Wiki model in this paper, called SmartWiki, which brings readers closer to reliable information while allowing editors to contribute freely. From our perspective, the conflict issue results from presenting the same knowledge to all readers, without differentiating readers or revealing the underlying social context, which both biases contributors and affects how readers perceive knowledge. SmartWiki differentiates two types of readers, “value adherents” who prefer compatible viewpoints and “truth diggers” who crave the truth. It provides two different knowledge representation models to cater to both types of readers. Social context, including social background and relationship information, is embedded in both knowledge representations to present readers with personalized and credible knowledge. To our knowledge, this is the first paper on knowledge representation that combines psychological acceptance and truth revelation to meet the needs of different readers. Although this new Wiki model focuses on reducing conflicts and reinforcing the neutrality policy of Wikipedia, it also sheds light on other content-reliability problems in Wiki systems, such as vandalism and minority-opinion suppression.
Keywords: Knowledge representation, Online social network, Wikipedia, Natural language generation, Trust, Community discovery, Confirmation bias
Article history: Received 28 July 2012, Revised 21 December 2012, Accepted 27 March 2013, Available online 6 April 2013.
DOI: https://doi.org/10.1016/j.knosys.2013.03.014