user198729

Reputation: 63656

How to segment text images using MATLAB?

It's part of the OCR process:

How do I segment the sentences into words, and then into characters?

What are candidate algorithms for this task?

Upvotes: 5

Views: 5998

Answers (4)

TheCodeArtist

Reputation: 22497

I am assuming you are using the Image Processing Toolbox in MATLAB.

To distinguish text in an image, you might want to follow these steps:

  1. Convert to grayscale (speeds things up greatly).
  2. Enhance contrast.
  3. Erode the image lightly to remove noise (scratches/blips).
  4. Dilate heavily.
  5. Detect edges (or calculate ROIs).

With trial and error, you'll find coefficients such that the image you obtain after step 5 contains convex regions surrounding each letter/word/line/paragraph.
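The question is about MATLAB, but as a language-neutral illustration of steps 3 and 4, here is a Python/NumPy sketch (names and the toy image are my own, not from the answer): a light erosion removes a one-pixel noise speck, and a heavy dilation merges nearby letter blobs into a single word region.

```python
import numpy as np

def dilate(img, k=1):
    # Max over a (2k+1) x (2k+1) window: a square structuring element.
    h, w = img.shape
    p = np.pad(img, k)
    out = np.zeros_like(img)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.maximum(out, p[dy:dy + h, dx:dx + w])
    return out

def erode(img, k=1):
    # Min over the same window: shrinks regions, deletes small specks.
    h, w = img.shape
    p = np.pad(img, k, constant_values=1)
    out = np.ones_like(img)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def count_regions(img):
    # Count 4-connected components via flood fill.
    img = img.copy()
    n = 0
    for start in zip(*np.nonzero(img)):
        if img[start]:
            n += 1
            stack = [start]
            while stack:
                y, x = stack.pop()
                if 0 <= y < img.shape[0] and 0 <= x < img.shape[1] and img[y, x]:
                    img[y, x] = 0
                    stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return n

# Toy "text" image: two 3x3 letter blobs plus an isolated 1-pixel noise speck.
page = np.zeros((10, 14), dtype=int)
page[3:6, 2:5] = 1     # letter 1
page[3:6, 7:10] = 1    # letter 2
page[1, 12] = 1        # noise speck

clean = dilate(erode(page, 1), 1)   # opening: speck gone, letters survive
words = dilate(clean, 2)            # heavy dilation merges letters into a word
print(count_regions(page), count_regions(clean), count_regions(words))  # → 3 2 1
```

In MATLAB the corresponding toolbox functions would be `imerode`, `imdilate`, and `bwconncomp`; the point of the sketch is just that the erosion amount controls noise removal while the dilation amount controls whether you end up with letter-, word-, or line-sized regions.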

NOTE:

  1. Essentially, the more you dilate, the larger the elements you get; i.e. minimal dilation is useful for identifying letters, whereas comparatively heavy dilation is needed to identify lines and paragraphs.
  2. Online ImgProc MATLAB docs

Check out the "Examples in Documentation" section in the online docs, or refer to the Image Processing Toolbox documentation in the MATLAB Help menu.

The examples given there will guide you to the proper functions to call and their various formats.

Sample CODE (not mine)

Upvotes: 0

kkbhavsar

Reputation: 1

For finding gaps in a binary sequence like 101000000000000000010000001, detect the sub-sequences 0000, 0001, 001, 01, 1.
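This answer is terse; one possible reading (my interpretation, not stated by the author) is to run-length encode a binary column projection of a text line, where 1 means a column contains ink, and treat long zero-runs as inter-word gaps. A Python sketch, with an assumed gap threshold:

```python
from itertools import groupby

# Hypothetical reading: 1 = column with ink, 0 = blank column in a text line;
# long runs of zeros mark the gaps between words.
seq = "101000000000000000010000001"
runs = [(ch, len(list(g))) for ch, g in groupby(seq)]  # run-length encode
GAP = 4                                                # assumed gap threshold
gaps = [n for ch, n in runs if ch == "0" and n >= GAP]
print(runs)
print(gaps)  # the two long blank runs separating the ink clusters
```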

Upvotes: 0

doug

Reputation: 70048

First, NIST (the National Institute of Standards and Technology) published a protocol known as the NIST Form-Based Handwriting Recognition System about 15 years ago for this exact question--i.e., extracting and preparing text-as-image data for input to machine learning algorithms for OCR. Members of this group at NIST also published a number of papers on this System.

The performance of their classifier was demonstrated by data also published with the algorithm (the "NIST Handwriting Sample Forms").

Each of the half-dozen or so OCR data sets I have downloaded and used references the data extraction/preparation protocol used by NIST to prepare the data for input to their algorithm. In particular, I am pretty sure this is the methodology relied on to prepare the Boston University Handwritten Digit Database, which is regarded as benchmark reference data for OCR.

So if the NIST protocol is not a genuine standard at least it's a proven methodology to prepare text-as-image for input to an OCR algorithm. I would suggest starting there, and using that protocol to prepare your data unless you have a good reason not to.

In sum, the NIST data was prepared by extracting 32 x 32 pixel normalized bitmaps directly from a pre-printed form.

Here's an example:

  00000000000001100111100000000000
  00000000000111111111111111000000
  00000000011111111111111111110000
  00000000011111111111111111110000
  00000000011111111101000001100000
  00000000011111110000000000000000
  00000000111100000000000000000000
  00000001111100000000000000000000
  00000001111100011110000000000000
  00000001111100011111000000000000
  00000001111111111111111000000000
  00000001111111111111111000000000
  00000001111111111111111110000000
  00000001111111111111111100000000
  00000001111111100011111110000000
  00000001111110000001111110000000
  00000001111100000000111110000000
  00000001111000000000111110000000
  00000000000000000000001111000000
  00000000000000000000001111000000
  00000000000000000000011110000000
  00000000000000000000011110000000
  00000000000000000000111110000000
  00000000000000000001111100000000
  00000000001110000001111100000000
  00000000001110000011111100000000
  00000000001111101111111000000000
  00000000011111111111100000000000
  00000000011111111111000000000000
  00000000011111111110000000000000
  00000000001111111000000000000000
  00000000000010000000000000000000

I believe that the BU data-prep technique subsumes the NIST technique but adds a few steps at the end, not with higher fidelity in mind but to reduce file size. In particular, the BU group:

  • began with the 32 x 32 bitmaps; then
  • divided each 32 x 32 bitmap into non-overlapping 4 x 4 blocks;
  • counted the number of activated pixels in each block ("1" is activated; "0" is not);
  • the result is an 8 x 8 input matrix in which each element is an integer from 0 to 16.
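The block-counting reduction above is easy to express with an array reshape. A Python/NumPy sketch (the synthetic bitmap is my own, not BU's data):

```python
import numpy as np

# A 32 x 32 binary bitmap is split into non-overlapping 4 x 4 blocks and each
# block is replaced by its count of activated ("1") pixels, giving an 8 x 8
# matrix of integers in 0..16.
bitmap = np.zeros((32, 32), dtype=int)
bitmap[0:4, 0:4] = 1     # one fully activated block (count 16)
bitmap[10, 10] = 1       # a single pixel elsewhere (count 1)

blocks = bitmap.reshape(8, 4, 8, 4)   # (block_row, y, block_col, x)
counts = blocks.sum(axis=(1, 3))      # 8 x 8 matrix, entries 0..16
print(counts.shape, counts[0, 0], counts[2, 2])  # → (8, 8) 16 1
```

The reshape works because each axis of length 32 factors exactly into 8 blocks of 4 pixels; summing over the two within-block axes yields the 8 x 8 count matrix in one step.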

Upvotes: 1

BCS

Reputation: 78605

As a first pass:

  • process the text into lines
  • process a line into segments (connected parts)
  • find the largest white band that can be placed between each pair of segments.
  • look at the sequence of widths and select "large" widths as white space.
  • everything between white space is a word.

Now all you need is a good enough definition of "large".
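The pass above can be sketched in a few lines. This Python/NumPy illustration (the toy line and the "half the widest gap" threshold are assumptions of mine, not part of the answer) projects ink onto columns, measures the white band between consecutive segments, and splits where the band is "large":

```python
import numpy as np

# One text line: three ink segments; a narrow gap inside a word, a wide gap
# between words.
line = np.zeros((5, 30), dtype=int)
line[:, 2:5] = 1     # segment 1
line[:, 6:9] = 1     # segment 2 (narrow gap: same word)
line[:, 17:21] = 1   # segment 3 (wide gap: new word)

ink = line.any(axis=0).astype(int)               # 1 where a column has ink
edges = np.flatnonzero(np.diff(np.pad(ink, 1)))  # rise/fall positions
segments = list(zip(edges[::2], edges[1::2]))    # [start, end) per segment
gaps = [b[0] - a[1] for a, b in zip(segments, segments[1:])]
large = max(gaps) / 2                            # assumed definition of "large"

words = []
current = [segments[0]]
for seg, gap in zip(segments[1:], gaps):
    if gap > large:                              # wide white band: word break
        words.append(current)
        current = []
    current.append(seg)
words.append(current)
print(len(words))  # → 2
```

Any fixed or adaptive threshold could stand in for `large`; the structure of the pass (segments, then gap widths, then a split rule) stays the same.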

Upvotes: 1
