Extending page segmentation algorithms for mixed-layout document processing

Amy Winder, Tim Andersen, Elisa H. Barney Smith

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The goal of this work is to add the capability to segment documents containing text, graphics, and pictures to the open source OCR engine OCRopus. To achieve this goal, OCRopus' RAST algorithm was improved to recognize non-text regions so that mixed-content documents could be analyzed in addition to text-only documents. In addition, a method for classifying text and non-text regions was developed and implemented for the Voronoi algorithm, enabling users to perform OCR on documents processed by this method. Finally, both algorithms were modified to perform at a range of resolutions. Our testing showed an improvement of 15–40% for the RAST algorithm, giving it an average segmentation accuracy of about 80%. The Voronoi algorithm averaged around 70% accuracy on our test data. Depending on the particular layout and idiosyncrasies of the documents to be digitized, however, either algorithm could be sufficiently accurate to be used.
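The abstract gives no implementation detail, but as a rough illustration of the kind of text/non-text region classification it describes, here is a minimal sketch in Python. The features (ink density, connected-component count, and component-height uniformity) and all thresholds are hypothetical assumptions chosen for illustration; they are not the classifier actually implemented in OCRopus or in the authors' Voronoi extension.

```python
# Minimal sketch of a text/non-text region classifier, in the spirit of the
# approach the abstract describes. Features and thresholds are hypothetical
# illustrations, not the actual OCRopus/Voronoi implementation.
import numpy as np
from scipy import ndimage


def classify_region(region, dpi=300):
    """Label a binarized region (1 = ink, 0 = background) as 'text' or 'non-text'.

    Heuristic: text regions consist of many small connected components of
    roughly uniform height; pictures and graphics yield few, large, or
    highly variable components. Thresholds are assumptions scaled by dpi.
    """
    labels, n = ndimage.label(region)
    if n == 0:
        return "non-text"

    # Bounding-box height of each connected component, in pixels.
    slices = ndimage.find_objects(labels)
    heights = np.array([s[0].stop - s[0].start for s in slices])

    x_height = 0.1 * dpi           # assumed nominal character height (~0.1 in)
    density = region.mean()        # fraction of ink pixels in the region

    text_like = (
        n >= 5                                 # many components (characters)
        and heights.mean() < 3 * x_height      # components are character-sized
        and heights.std() < heights.mean()     # fairly uniform heights
        and 0.02 < density < 0.5               # neither blank nor a halftone
    )
    return "text" if text_like else "non-text"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "picture": a dense, halftone-like random block.
    picture = (rng.random((300, 300)) < 0.6).astype(np.uint8)
    print(classify_region(picture))   # non-text

    # Synthetic "text line": a row of small, character-sized blobs.
    text = np.zeros((60, 600), dtype=np.uint8)
    for x in range(10, 580, 40):
        text[20:45, x:x + 25] = 1
    print(classify_region(text))      # text
```

In a full pipeline, such a classifier would run on the regions produced by the segmentation stage (RAST or Voronoi), and only the regions labeled "text" would be passed on to OCR.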

Original language: English
Title of host publication: Proceedings - 11th International Conference on Document Analysis and Recognition, ICDAR 2011
Pages: 1245-1249
Number of pages: 5
DOIs
State: Published - 2011
Event: 11th International Conference on Document Analysis and Recognition, ICDAR 2011 - Beijing, China
Duration: 18 Sep 2011 – 21 Sep 2011

Publication series

Name: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR
ISSN (Print): 1520-5363

Conference

Conference: 11th International Conference on Document Analysis and Recognition, ICDAR 2011
Country/Territory: China
City: Beijing
Period: 18/09/11 – 21/09/11

Keywords

  • open source OCR
  • page segmentation
  • RAST
  • Voronoi
