Lance Pollard

Reputation: 79450

General approach to parsing text with special characters from PDF using Tesseract?

I would like to extract the definitions from the book The Navajo Language: A Grammar and Colloquial Dictionary by Young and Morgan. They look like this (very blurry):

I tried running it through the Google Cloud Vision API and got decent results, but it doesn't know what to do with these "special" letters with accent marks on them, or the curls and lines on/through them. And because of the blurriness (there are no alternative sources of the PDF), it gets a lot of them wrong. So I'm thinking of doing it from scratch in Tesseract. Note that the term is bold and the definition is not bold.

How can I use Node.js and Tesseract to get basically an array of JSON objects sort of like this:

[
  ...
  { "term": "'odahizhdoojih, yo/o/", "definition": "they will carry ..." },
  ...
]

The details of what the term looks like are not super important here (I'll encode it in Unicode); I'm mainly wondering what the architecture should be for Node.js to do this, and how I should properly train the system. Do I need to take screenshots of the words and somehow train it with manual annotations? How does this generally work in this situation?

Upvotes: 0

Views: 1600

Answers (1)

Gabe Weiss

Reputation: 3342

Tesseract takes a lang variable that you can expand to include different languages, if they're installed. I've used the UB Mannheim installation (https://github.com/UB-Mannheim/tesseract/wiki), which ships with support for a ton of languages.
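For example, from Node you can pass the language through one of the CLI wrappers. A minimal sketch using the node-tesseract-ocr package (my choice here, not something from the question; it shells out to the tesseract binary, so it picks up whatever languages you have installed):

// Sketch: recognize a page with an installed language pack.
// node-tesseract-ocr shells out to the tesseract CLI, so any language
// in your tessdata folder is available via the lang option.
const tesseract = require("node-tesseract-ocr");

tesseract
  .recognize("page.png", { lang: "eng", oem: 1, psm: 3 })
  .then((text) => console.log(text))
  .catch((err) => console.error(err));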

To get better and more accurate results, the best thing to do is to process the image before handing it to Tesseract: set a white/black threshold so that you have black text on a white background with no shading. I'm not sure how to do this in Node, but I've done it with Python's OpenCV library.
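If you want to stay in Node for that step too, here's a rough sketch with the sharp package (an assumption on my part; the preprocessing I actually did was in Python/OpenCV):

// Sketch: binarize a scanned page before OCR.
// sharp is an assumption; any image library with greyscale + threshold works.
const sharp = require("sharp");

sharp("page.png")
  .greyscale()     // drop color information
  .threshold(128)  // pixels at or above 128 become white, the rest black
  .toFile("page-bw.png")
  .then(() => console.log("wrote page-bw.png"));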

If that font doesn't get you decent results out of the box, then you'll want to train your own model, yes. This blog post walks through the process in great detail: https://towardsdatascience.com/simple-ocr-with-tesseract-a4341e4564b6. It revolves around using jTessBoxEditor to hand-label the characters detected in the images you're using.

Edit: In brief, the process to train your own:

  1. Install jTessBoxEditor (https://sourceforge.net/projects/vietocr/files/jTessBoxEditor/). It requires the Java Runtime as well.
  2. Collect your training images. They need to be .tiffs. I found I got fairly accurate results without a whole lot of images, as long as they had a good sample of all the characters I wanted to detect; maybe 30-40 images. It's tedious, so you don't want to do TOO many, but you need enough to get a good sampling.
  3. Use jTessBoxEditor to merge all the images into a single .tiff.
  4. Create a training label file (.box). This is done with Tesseract itself: tesseract your_language.font.exp0.tif your_language.font.exp0 makebox
  5. Now you can open the box file in jTessBoxEditor and see how/where it detected the characters: bounding boxes and what character it saw. The tedious part: hand-fix all the bounding boxes and characters so they accurately represent what's in the images. Not joking, it's tedious. Slap some TV episodes up and just churn through it.
  6. Train the Tesseract model itself:
  • Save a file named font_properties whose contents are: font 0 0 0 0 0
  • Run the following commands:

tesseract your_language.font.exp0.tif your_language.font.exp0 nobatch box.train

unicharset_extractor your_language.font.exp0.box

shapeclustering -F font_properties -U unicharset -O your_language.unicharset your_language.font.exp0.tr

mftraining -F font_properties -U unicharset -O your_language.unicharset your_language.font.exp0.tr

cntraining your_language.font.exp0.tr

Close to the end of that output, you should see something that looks like this:

Master shape_table:Number of shapes = 10 max unichars = 1 number with multiple unichars = 0

That number of shapes should roughly match the number of distinct characters present across all the image files you've provided.

If it went well, you should have 4 new files: inttemp, normproto, pffmtable, and shapetable. Rename them all with the your_language prefix from before, so e.g. your_language.inttemp, etc.

Then run:

combine_tessdata your_language.

The file your_language.traineddata is the model. Copy that into your Tesseract tessdata folder. On Windows it'll be something like C:\Program Files (x86)\tesseract\4.0\tessdata, and on Linux it's probably something like /usr/share/tesseract/4.0/tessdata.
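If you'd rather drive that whole pipeline from Node instead of a shell, here's a rough sketch (assuming the training binaries above are on your PATH; the file names just mirror the steps listed earlier):

// Sketch: run the Tesseract 3.x-style training pipeline from Node.
// Assumes tesseract, unicharset_extractor, shapeclustering, mftraining,
// cntraining, and combine_tessdata are on the PATH.
const { execSync } = require("child_process");
const fs = require("fs");

const lang = "your_language";
const base = `${lang}.font.exp0`;

// font_properties fields: <fontname> <italic> <bold> <fixed> <serif> <fraktur>
fs.writeFileSync("font_properties", "font 0 0 0 0 0\n");

const run = (cmd) => {
  console.log("$ " + cmd);
  execSync(cmd, { stdio: "inherit" });
};

run(`tesseract ${base}.tif ${base} nobatch box.train`);
run(`unicharset_extractor ${base}.box`);
run(`shapeclustering -F font_properties -U unicharset -O ${lang}.unicharset ${base}.tr`);
run(`mftraining -F font_properties -U unicharset -O ${lang}.unicharset ${base}.tr`);
run(`cntraining ${base}.tr`);

// Rename the four outputs with the language prefix, then combine them.
for (const f of ["inttemp", "normproto", "pffmtable", "shapetable"]) {
  fs.renameSync(f, `${lang}.${f}`);
}
run(`combine_tessdata ${lang}.`); // trailing dot: collect the your_language.* files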

Then when you run Tesseract, you'll pass lang=your_language. I found the best results when I still passed an existing language as well: in my case it was still English I was grabbing, just in funny fonts, so I'd pass lang=your_language+eng.
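And to get from there to the JSON array in the question, a naive sketch (node-tesseract-ocr again, plus a made-up line-splitting heuristic; reliably separating the bold terms from the definitions would need something smarter, e.g. Tesseract's hOCR/TSV output):

// Sketch: OCR a page with the custom model plus English, then split each
// line into { term, definition }. The split rule below is a placeholder:
// it breaks at the first gap of two or more spaces, which plain-text OCR
// output may or may not preserve. Bold detection isn't available in plain
// text output.
const tesseract = require("node-tesseract-ocr");

async function extractEntries(image) {
  const text = await tesseract.recognize(image, {
    lang: "your_language+eng",
    oem: 1,
    psm: 6, // assume a single uniform block of text
  });
  return text
    .split("\n")
    .map((line) => line.trimEnd())
    .filter((line) => line.length > 0)
    .map((line) => {
      const gap = line.match(/\s{2,}/);
      if (!gap) return { term: line.trim(), definition: "" };
      return {
        term: line.slice(0, gap.index).trim(),
        definition: line.slice(gap.index + gap[0].length).trim(),
      };
    });
}

extractEntries("page-bw.png").then((entries) =>
  console.log(JSON.stringify(entries, null, 2))
);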

Upvotes: 1
