I have the following PDF file (a MAR sheet PDF) and I'm trying to extract the data shown in the example below. I have tried PDFParse, PDFtoText, etc., but none of them work properly. Is there any solution or example?
<?php
// Output something like this, or suggest a better option if you have one
$data_array = array(
    array(
        "name"          => "Mr Andrew Smee",
        "medicine_name" => "FLUOXETINE 20MG CAPS",
        "description"   => "TAKE ONE ONCE DAILY FOR LOW MOOD. CAUTION:YOUR DRIVING REACTIONS MAY BE IMPAIRED",
        "Dose"          => '9000',
        "StartDate"     => '28/09/15',
        "period"        => '28',
        "Quantity"      => '28'
    ),
    array(
        "name"          => "Mr Andrew Smee",
        "medicine_name" => "SINEMET PLUS 125MG TAB",
        "description"   => "TAKE ONE TABLET FIVE TIMES A DAY FOR PD
(8am,11am,2pm,5pm,8pm)
THIS MEDICINE MAY COLOUR THE URINE. THIS IS
HARMLESS. CAUTION:REACTIONS MAY BE IMPAIRED
WHILST DRIVING OR USING TOOLS OR MACHINES.",
        "Dose"          => '0800,1100,1400,1700,2000',
        "StartDate"     => '28/09/15',
        "period"        => '28',
        "Quantity"      => '140'
    ),
    // etc...
);
?>
Upvotes: 13
Views: 30856
Reputation: 57388
TL;DR You are almost certainly not going to do this with a library alone.
Update: a working solution (not a perfect solution!) is coded below; see 'in practice'. It requires:
- defining the areas where the text is;
- the possibility of installing and running a command-line tool, pdf2json.
PDF files contain typesetting primitives, not extractable text; sometimes the difference is slight enough that you can get by, but usually having only extractable text, in an easily accessible format, means that the document looks "slightly wrong" aesthetically, and therefore the generators that create the "best" PDFs for text extraction are also the least used.
Some generators exist that embed both the typesetting layer and an invisible text layer, allowing you to see the beautiful text and to extract the good text. At the expense, you guessed it, of the PDF size.
In your example, you only have the beautiful text inside the file, and the existence of a grid means that the text needs to be properly typeset.
So, inside, what there actually is to be read is this. Notice the letters inside round parentheses:
/R8 12 Tf
0.99941 0 0 1 66 765.2 Tm
[(M)2.51003(r)2.805( )-2.16558(A)-3.39556(n)
-4.33056(d)-4.33056(r)2.805(e)-4.33056(w)11.5803
( )-2.16558(S)-3.39556(m)-7.49588(e)-4.33117(e)556]TJ
ET
and if you assemble the (s)(i)(n)(g)(l)(e) letters inside, you do get "Mr Andrew Smee", but then you need to know where these letters are relative to the page, and to the data grid. Also, you need to beware of spaces. Above, there is one explicit space character, parenthesized, between "Mr" and "Andrew"; but if you removed such spaces and fixed the offsets of all the following letters, you would still read "Mr Andrew Smee" and save two characters. Some PDF "optimizers" will try and do just that; then, not considering offsets, the "text" string of that entity will just be "MrAndrewSmee".
And that is why most text extraction libraries, which can't easily manage character offsets (they use "text lines", and by and large they don't care about grids) will give you something like
Mr Andrew Smee 505738 12/04/54 (61
or, in the case of "optimized" texts,
MrAndrewSmee50573812/04/54(61
(which still gives the dangerous illusion of being parsable with a regex -- sometimes it is, sometimes it isn't, and most of the time it works 95% of the time, so that the remaining 5% turns into a maintenance nightmare from Hell), but, more importantly, they will not be able to get you the content of the medication details timetable divided by cell.
Any information which is space-correlated (e.g. a name has different meanings if it's written in the left "From" or in the right "To" box) will be either lost, or variably difficult to reconstruct.
There are PDF "protection" schemes that exploit the capability of offsetting the text, and will scramble the strings. With offsets, you can write:
9 l 10 d 4 l 1 H 2 e 3 l 5 o 6 W 7 o 8 r
and the PDF viewer will show you "Hello World"; but read the text directly, and you get "ldlHeloWor", or worse. You could add malicious text and place it outside the page, or write it in transparent color, to prank whoever succeeds in removing the easily removed optional copy-paste protection of PDF files. Most libraries would blithely suck up the prank text together with the good text.
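To make the offset trick concrete, here is a minimal sketch (in PHP, using the glyph list from the scrambled example above): concatenating the glyphs in file order yields the scramble, while sorting them by offset recovers the displayed text.

<?php
// (offset, character) pairs in the order they appear in the file
$glyphs = [ [9,'l'], [10,'d'], [4,'l'], [1,'H'], [2,'e'],
            [3,'l'], [5,'o'], [6,'W'], [7,'o'], [8,'r'] ];

echo implode('', array_column($glyphs, 1)), "\n"; // "ldlHeloWor" (file order)

usort($glyphs, function ($a, $b) { return $a[0] - $b[0]; });
echo implode('', array_column($glyphs, 1)), "\n"; // "HelloWorld" (display order)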
Libraries such as XPDF (and its wrappers phpxpdf, pdf2html, etc.) will give you a simple call such as this
// open PDF
$pdfToText->open('PDF-book.pdf');
// PDF text is now in the $text variable
$text = $pdfToText->getText();
$pdfToText->close();
and your "text" will contain everything, and be something like:
...
START DATE START DAY
WEEK 1 WEEK 2 WEEK 3 WEEK 4
DATE 28 29 30 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
19/10/15
Medication Details
Commencing
D.O.B
Doctor
Hour:Dose 1 2 3 4 5 6 7 1 2 3 4 5 6 7 1 2 3 4 5 6 7 1 2 3 4 5 6 7
Patient
Number
Period
MEDICATION ADMINISTRATION RECORD SHEETS Pharmacy No.
Document No.
02392 731680
28
0900 1
TAKE ONE ONCE DAILY FOR LOW MOOD.
CAUTION:YOUR DRIVING REACTIONS MAY BE IMPAIRED.
28
FLUOXETINE 20MG CAPS
Received Quantity returned quant. by destroyed quant. by
So, reading the above, ask yourself: what is that second 28? Can you tell whether it is the received quantity, the returned quantity, or the destroyed quantity without looking at the PDF? Sure, if there's only one number, chances are that it will be the received quantity. It becomes a bet.
And is 02392 731680 the document number? It looks like it is (it is not).
Notice also that in the PDF, the medicine name is before the notes. In the extracted text, it is after. By looking at the offsets inside the PDF, you understand why, and it's even a good decision -- but looking at the extracted text, it's not so easy.
So, automatic analysis looks enticingly like it can be done, but as I said, it is a very risky business. It is brittle: someone entering the wrong (for you) text somewhere in the document, sometimes even filling the fields out of sequential order, will result in a PDF which is visually correct and, at the same time, inexplicably unparseable. What are you going to tell your users?
Sometimes, a subset of the available information is stable enough for you to get the work done. In that case, XPDF or PDF2HTML, a bunch of regex, and you're home free in half a day. Yay you! Just keep in mind that any "little" addition to the project might then be impossible. Two numbers are added that are well separated in the PDF; are they 128 and 361, or 12 and 8361, or 1283 and 61? All you get in $text is 128361.
So if you go that way, document it clearly and avoid expectations which might be difficult to maintain. Your initial project might work so well, so fast, at so little cost, that an addition is accepted unbeknownst to you -- and you're then required to do the impossible. Explaining why the first 95% was easy and the subsequent 5% very hard might be more than your job is worth.
But can you do the same thing "by hand"? After all, by looking at the PDF, you know what you are seeing. Can the same thing be done by a machine? (this still applies). Sure, in this - after all - clearly delimited problem of computer vision, you very probably can. It just won't be quick and easy. You need:
- a way of opening the PDF and getting at its innards (a suitable library, or a command-line tool such as pdftk). You need to recover the text with coordinates: "C" for "hospitalized" is worth nothing, while "C, 495.2, 882.7" plus the coordinates of your grid tells you of a hospitalization on October 13th, 2015 -- and that is the information you are after!
- a map of the cells, something like:

// Cell name       X1   Y1   X2   Y2  Text
[ 'PatientName', 60, 760, 300, 790, '' ],
[ 'PatientNumber', 310, 760, 470, 790, '' ],
...
[ 'Grid01Y01X01', 90, 1020, 110, 1040, '' ],
...
Note that very many of those values you can calculate programmatically: once you have the top left corner and know one cell's size, the others are more or less calculable with a very slight error. You needn't type in by hand six grids of four weeks, seven days per week, with six rows each.
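For instance, here is a hedged sketch of generating one grid programmatically, taking the Grid01Y01X01 cell above (90, 1020 to 110, 1040, so 20 x 20 units) as the origin; whether week boundaries add extra spacing is something you would measure on your own sheet:

<?php
// Hypothetical grid origin and cell size, measured once off the PDF/PNG
$originX = 90;   // X of the top-left cell (week 1, day 1, row 1)
$originY = 1020; // Y of the top-left cell
$cellW   = 20;
$cellH   = 20;

$cells = [];
for ($row = 1; $row <= 6; $row++) {          // six rows per grid
    for ($day = 1; $day <= 28; $day++) {     // four weeks of seven days
        $x1 = $originX + ($day - 1) * $cellW;
        $y1 = $originY + ($row - 1) * $cellH;
        $name = sprintf('Grid01Y%02dX%02d', $row, $day);
        $cells[] = [ $name, $x1, $y1, $x1 + $cellW, $y1 + $cellH, '' ];
    }
}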
You can use the same structure to create a PNG with red areas to indicate which cells you've got covered. That will be useful to visually check you did not forget anything.
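As a sketch of that check, using GD and the hypothetical $cells array generated above ('page.png' stands for a rendering of the page at the same scale as the template coordinates):

<?php
// Overlay the template rectangles in translucent red on a rendering of
// the page, to visually verify that no field has been forgotten.
$img = imagecreatefrompng('page.png');                 // hypothetical file
$red = imagecolorallocatealpha($img, 255, 0, 0, 90);   // translucent red
foreach ($cells as $cell) {
    list($name, $x1, $y1, $x2, $y2) = $cell;
    imagefilledrectangle($img, $x1, $y1, $x2, $y2, $red);
}
imagepng($img, 'page-coverage.png');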
At that point you parse the PDF, and every time you find a text at coordinates (x1,y1) you scan all of your cells and determine where the text should be (there are faster ways to do that using XY binary search trees). If you find 'Mr Andrew S' at 66, 765.2 you add it to PatientName. Then you find 'mee' at 109.2, 765.2 and you also add it to PatientName. Which now reads 'Mr Andrew Smee'.
You might need to first gather all the text snippets that go inside a cell, and then sort them by their Y coordinate and X coordinate, in this order, to handle text scrambling. You may also need to bin the Y values: given (4.9997, 8) and (5.0001, 6) as (Y, X) pairs, a naive sort puts the second element after the first because its Y value is negligibly greater, even though, sitting on the same line with a smaller X, it should come first.
If the horizontal distance of a snippet with respect to the previous one is above a certain threshold, you add a space (or more than one. And while we're at it, what about tab stops?).
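A minimal sketch of that heuristic ($prev and $snippet are assumed to be pdf2json-style text items on the same line, and $spaceWidth is a made-up tuning value):

<?php
// Hypothetical snippets on the same line, pdf2json-style
$prev    = [ 'data' => 'Mr',     'left' => 66, 'width' => 20 ];
$snippet = [ 'data' => 'Andrew', 'left' => 96, 'width' => 55 ];

$spaceWidth = 5; // assumed width of one space, in page units
$text = $prev['data'];
$gap  = $snippet['left'] - ($prev['left'] + $prev['width']);
if ($gap > $spaceWidth) {
    // wide gaps may stand for several spaces (or even a tab stop)
    $text .= str_repeat(' ', max(1, (int) round($gap / $spaceWidth)));
}
$text .= $snippet['data']; // "Mr  Andrew"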
(For very small text there's a slight risk of the letters being output out of order by the PDF driver and corrected through kerning, but usually that's not a problem).
More evil schemes (outputting "Mr An w Smee" and "dre" with overlapping offsets) are possible but never done in practice, because slight changes in the font being used for displaying will thoroughly wreck the text. This is also why the highest quality PDF drivers will try and output "(A)(n)(d)(r)(e)(w)" instead of "(Andrew)", with each letter addressed on its own.
So, usually a simplistic approach to the text reconstruction will work with no need for binning or kerning fixing or worse.
At the end of the whole cycle you will be left with
[ 'PatientName', 60, 760, 300, 790, 'Mr Andrew Smee' ],
[ 'PatientNumber', 310, 760, 470, 790, '505738' ],
and so on.
I did this kind of work for a large PDF import project some years back and it worked like a charm. Nowadays, I think most of the heavy lifting could be done with TcLibPDF.
The painful part is recording by hand, the first time, the information for the grid; possibly there are tools for that, or one could whip up an HTML5/AJAX editor using canvases.
Most of the work has already been done by the excellent pdf2json tool which, consuming the 'Andrew Smee' PDF, outputs something like this -- which is exactly what we need:
[
{
"height" : 1263,
"width" : 892
"number" : 1,
"pages" : 1,
"fonts" : [
{
"color" : "#000000",
"family" : "Times",
"fontspec" : "0",
"size" : "15"
},
...
],
"text" : [
{ "data" : "12/04/54",
"font" : 0,
"height" : 17,
"left" : 628,
"top" : 103,
"width" : 70
},
{ "data" : "28/09/15",
"font" : 0,
"height" : 17,
"left" : 105,
"top" : 206,
"width" : 70
},
{ "data" : "AQUARIUS",
"font" : 0,
"height" : 17,
"left" : 99,
"top" : 170,
"width" : 94
},
{ "data" : " ",
"font" : 0,
"height" : 17,
"left" : 193,
"top" : 170,
"width" : 5
},
{ "data" : "NURSING",
"font" : 0,
"height" : 17,
"left" : 198,
"top" : 170,
"width" : 83
},
...
In order to make things simple, I converted the Andrew Smee PDF to a PNG and resampled it to 892 x 1263 pixels (any size will do, as long as you keep track of it; above, it is saved in 'width' and 'height'). This way I can read pixel coordinates straight off my old PaintShop Pro's status bar :-).
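For the conversion itself, something along these lines should work (a sketch using Poppler's pdftoppm; ImageMagick's convert would do just as well, and the file names are made up):

<?php
// Render page 1 of the PDF as a PNG scaled to 892 x 1263 pixels,
// so pixel coordinates match the pdf2json 'width'/'height' above.
shell_exec('pdftoppm -png -f 1 -l 1 -scale-to-x 892 -scale-to-y 1263 '
         . '"Andrew-Smee.pdf" andrew-smee');
// produces andrew-smee-1.png (naming varies slightly with pdftoppm version)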
The "Address" field is from 73,161 to 837,193.
My sample "template", with only three fields, is therefore in PHP 5.7 (with short array syntax, [ ] instead of Array()
)
<?php
function template() {
    $template = [
        'Address'   => [ 'x1' => 73, 'y1' => 161, 'x2' => 837, 'y2' => 193 ],
        'Medicine1' => [ 'x1' =>  1, 'y1' => 283, 'x2' => 251, 'y2' => 299 ],
        'Details1'  => [ 'x1' =>  1, 'y1' => 302, 'x2' => 251, 'y2' => 403 ],
    ];
    foreach ($template as $fieldName => $candidate) {
        $template[$fieldName]['elements'] = [ ];
    }
    return $template;
}

// shell_exec('/usr/local/bin/pdf2json "Andrew-Smee.pdf" andrew-smee.json');
$parsed = json_decode(file_get_contents('andrew-smee.json'), true);

$paged = [ ];
foreach ($parsed as $page) {
    $template = template();
    foreach ($page['text'] as $text) {
        // Will it blend?
        foreach ($template as $fieldName => $candidate) {
            if ($text['top'] > $candidate['y2']) {
                continue; // Too low.
            }
            if (($text['top'] + $text['height']) < $candidate['y1']) {
                continue; // Too high.
            }
            if ($text['left'] > $candidate['x2']) {
                continue; // Too far right.
            }
            if (($text['left'] + $text['width']) < $candidate['x1']) {
                continue; // Too far left.
            }
            $template[$fieldName]['elements'][] = $text;
        }
    }
    // Now I must reassemble all my fields
    foreach ($template as $fieldName => $data) {
        $list = $data['elements'];
        usort($list, function($txt1, $txt2) {
            // Compare coarse-to-fine bins (8, 4, 2, 1 units wide), so that
            // snippets whose Y values differ by a hair still sort as one
            // line, left to right.
            for ($r = 8; $r >= 1; $r /= 2) {
                if ((int)($txt1['top'] / $r) < (int)($txt2['top'] / $r)) {
                    return -1;
                }
                if ((int)($txt1['top'] / $r) > (int)($txt2['top'] / $r)) {
                    return 1;
                }
                if ((int)($txt1['left'] / $r) < (int)($txt2['left'] / $r)) {
                    return -1;
                }
                if ((int)($txt1['left'] / $r) > (int)($txt2['left'] / $r)) {
                    return 1;
                }
            }
            return 0;
        });
        $text = '';
        $starty = false;
        foreach ($list as $data) {
            if ($starty !== false && $data['top'] > $starty + 5) {
                // A new line of text begins.
                $text .= "\n";
            } else {
                // Same line; pdf2json already emits explicit space snippets,
                // so there is usually no need to add one here.
                // $text .= ' ';
            }
            $starty = $data['top'];
            // Add text to current line
            $text .= $data['data'];
        }
        // Remove extra spaces
        $text = preg_replace('# +#', ' ', $text);
        $template[$fieldName] = $text;
    }
    $paged[] = $template;
}
print_r($paged);
And the result (on a multipage PDF)
Array
(
[0] => Array
(
[Address] => AQUARIUS NURSING HOME 4-6 SPENCER ROAD, SOUTHSEA PO4 9RN
[Medicine1] => ATORVASTATIN 40MG TABS
[Details1] => take ONE tablet at NIGHT
)
[1] => Array
(
[Address] => AQUARIUS NURSING HOME 4-6 SPENCER ROAD, SOUTHSEA PO4 9RN
[Medicine1] => SOTALOL 80MG TABS
[Details1] => take ONE tablet TWICE each day
DO NOT STOP TAKING UNLESS YOUR DOCTOR TELLS
YOU TO STOP.
)
[2] => Array
(
[Address] => AQUARIUS NURSING HOME 4-6 SPENCER ROAD, SOUTHSEA PO4 9RN
[Medicine1] => LAXIDO ORANGE SF 13.8G SACHETS
[Details1] => ONE to TWO when required
DISSOLVE OR MIX WITH WATER BEFORE TAKING.
NOT IN CASSETTE
)
)
Upvotes: 19
Reputation: 1920
Sometimes it's hard to extract PDFs into the required format/output directly using libraries or tools. The same problem occurred to me recently, when I had 1600+ PDFs and needed to extract their data and store it in a DB. I tried almost all the libraries and tools, and none of them helped me. So I put in some manual effort to find a pattern and processed them using PHP. For this I used the PHP library PDF TO HTML.
Install the PDF TO HTML library:
composer require gufy/pdftohtml-php:~2
This will convert your PDF into HTML code, with each <div> tag representing a page and each <p> tag representing the titles and their values. Now, using the <p> tags, you can identify the common pattern, and it is not hard to put that into the logic to process all the PDFs and convert them into CSV/XLS or anything else. Since in my case the pattern repeated after every 11 <p> tags, I used this:
$pdf = new Gufy\PdfToHtml\Pdf('<PDF_FILE_PATH>');
// get total number of pages
$total_pages = $pdf->getPages();
// Iterate through each page and extract the p tags
for ($i = 1; $i <= $total_pages; $i++) {
    // This will convert pdf to html
    $html = $pdf->html($i);
    // Create a dom document
    $domOb = new DOMDocument();
    // load html code in the dom document
    $domOb->loadHTML(mb_convert_encoding($html, 'HTML-ENTITIES', 'UTF-8'));
    // Get SimpleXMLElement from Dom Node
    $sxml = simplexml_import_dom($domOb);
    // here you have the p tags
    foreach ($sxml->body->div->p as $pTag) {
        // your logic
    }
}
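For instance, here is a hedged sketch of the "every 11 <p> tags" grouping described above (the record size and field mapping are illustrative; adapt them to your own pattern):

// Inside the page loop above, collect each <p> text...
$values = [];
foreach ($sxml->body->div->p as $pTag) {
    $values[] = trim((string) $pTag);
}
// ...then split the flat list into records of 11 fields each
foreach (array_chunk($values, 11) as $record) {
    if (count($record) === 11) {
        $rows[] = $record; // e.g. $record[0] = name, $record[1] = medicine, ...
    }
}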
Hope this helps you as much as it helped me.
Upvotes: 5