Error detection in mechanized classification systems
Authors:
Abstract:
When documentary material is indexed by a mechanized classification system and the results are judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case, disagreement was 22%, and of this 22% the computer correctly identified two-thirds of the decisions as doubtful. Professional examination of this doubtful group could further improve performance. The characteristics of the classification system, and of the material being classified, are mainly responsible for disagreement, and the size of the computer-identified doubtful group is a basic measure of the suitability of the system for the test material being processed. It is further suggested that if two professionals were compared on the same material, their disagreements would fall mainly on the same documents.
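The abstract defines two quantities: an error rate (the adjusted fraction of documents on which system and professional disagree) and the share of those disagreements the system itself flags as doubtful. A minimal sketch of that arithmetic follows; the data structure and field names (system_class, professional_class, flagged_doubtful) are hypothetical illustrations, not from the paper.

```python
# Sketch of the two quantities described in the abstract. Assumes each
# document carries the system's class, the professional's class, and a
# system-assigned "doubtful" flag; these fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Document:
    system_class: str        # class assigned by the mechanized system
    professional_class: str  # class assigned by the trained professional
    flagged_doubtful: bool   # did the system mark its own decision as doubtful?

def evaluate(docs: list[Document]) -> tuple[float, float]:
    """Return (error_rate, doubtful_coverage).

    error_rate: fraction of documents where system and professional disagree.
    doubtful_coverage: fraction of those disagreements the system flagged.
    """
    disagreements = [d for d in docs if d.system_class != d.professional_class]
    error_rate = len(disagreements) / len(docs)
    flagged = sum(d.flagged_doubtful for d in disagreements)
    doubtful_coverage = flagged / len(disagreements) if disagreements else 0.0
    return error_rate, doubtful_coverage

# With the figures reported in the abstract, 100 documents would yield
# 22 disagreements (error_rate = 0.22), roughly 15 of them flagged as
# doubtful (doubtful_coverage ≈ 2/3).
```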
Keywords:
Article history: Received 8 January 1976, Available online 17 July 2002.
DOI: https://doi.org/10.1016/0306-4573(76)90052-2