abdel-kader Magdy

Reputation: 85

Errors when creating Arabic text files in Python

I wrote Python code to read Arabic text from files under a specific folder, remove unwanted symbols (e.g. (, ), $, etc.), and then save the cleaned Arabic text into another folder.

The problem is that the code creates files containing text like the following:

\'c3\'cb\'d1\'ca \f1 \ \f0 \'dd\'ed\'c7 \f1 \ \f0 \'e1\'c3\'e4 \f1 \ \f0 \'c8\'d8\'e1\'e5\'c7

rather than readable Arabic text.

How can I read and write the text in Arabic form only?

The code

import sys, os
import codecs
import unicodedata
from nltk.tokenize import word_tokenize

if not os.path.exists(corpus_clean):
    os.makedirs(corpus_clean)

reload(sys)

sys.setdefaultencoding('utf8')
def readUnicodeDataFrom(inputDir):
    unicode_input = codecs.open(inputDir, encoding='utf8', mode='r')
    unicode_data = unicode_input.read()
    norm_data = unicodedata.normalize('NFKD', unicode_data).encode('utf8', 'ignore')
    norm_words = word_tokenize(norm_data)
    unicode_input.close()
    return norm_words

def removeCharFrom(mylist):
    mylist = [x for x in mylist if not (x.isdigit() 
                                         or x[0] == '-' and x[1:].isdigit())]   

    to_remove = ['*','#', '/', '.', '...', ':','-','_',',',';','<','>','|','\\','0','1','2','3','4','5','6','7','8','9','?',')','(','%','&','+','$','^','!','"','[',']']

    for char in mylist:
        if char in to_remove:
            mylist.remove(char) 
    return mylist


def writeUnicodeDataTo(outputDir, listOfWords):
    unicode_output = codecs.open(outputDir, encoding='utf8', mode='w+')
    for word in listOfWords:
        word = unicode(word, errors='ignore')
        unicode_output.write(word+'\n')#.encode('utf8', 'ignore'))
    unicode_output.seek(0)
    unicode_output.close()

if __name__ == '__main__':
    i = 1
    for root, dirs, files in os.walk(yourpath, topdown=False):
        for name in files:
            if name != '.DS_Store':
                f = open(os.path.join(root, name))
                norm_words = readUnicodeDataFrom(os.path.join(root, name))
                uniq_words = removeCharFrom(norm_words)
                writeUnicodeDataTo(os.path.join(corpus_clean, name), uniq_words)
                print ('The file number: '+str(i)+'\n\n')
                i+=1
                for j in range(len(uniq_words)):
                    print(u''+norm_words[j])
                f.close()

Upvotes: 0

Views: 901

Answers (3)

deepayan das

Reputation: 1657

Hi, does the following code help?

with open('arabic.txt', 'r') as in_file:
    new_text = []
    arabic_text = in_file.readlines()
    for each_line in arabic_text:
        # str.translate returns a new string, so the result must be assigned
        each_line = each_line.translate(None, '!@#$%^*()')
        new_text.append(each_line)
with open('new_arabic.txt', 'a') as out_file:
    for line in new_text:
        # in Python 2 the lines read from the file are already UTF-8 bytes
        out_file.write(line)

Upvotes: 0

YLJ

Reputation: 2996

norm_data = unicodedata.normalize('NFKD', unicode_data).encode('utf8', 'ignore')

change to

norm_data = unicodedata.normalize('NFKD', unicode_data)

and

word = unicode(word, errors='ignore')
unicode_output.write(word+'\n')#.encode('utf8', 'ignore'))

change to

unicode_output.write(word+'\n')

Reason

The problem is that you shouldn't encode again after codecs.open(inputDir, encoding='utf8', mode='r'); codecs already handles the encoding and decoding for you.

The same problem occurs when you write to the file using unicode_output.write(word + '\n').
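For instance, a minimal round trip (using a hypothetical file name, test_arabic.txt) shows that codecs handles the encoding by itself, so you only ever pass unicode text in and get unicode text back:

```python
# -*- coding: utf-8 -*-
import codecs

# Arabic sample text as a unicode string (no manual .encode() anywhere)
text = u'\u0645\u0631\u062d\u0628\u0627'

# codecs.open encodes the unicode text to UTF-8 bytes on write...
with codecs.open('test_arabic.txt', encoding='utf8', mode='w') as f:
    f.write(text + u'\n')

# ...and decodes the bytes back to unicode on read.
with codecs.open('test_arabic.txt', encoding='utf8', mode='r') as f:
    roundtrip = f.read().strip()

print(roundtrip == text)  # True: the text survives unchanged
```

If you add your own .encode('utf8') on top of this, the already-encoded bytes get mangled a second time, which is exactly the garbage you saw in the output files.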

Upvotes: 1

Dan Moica

Reputation: 274

Maybe you should read and write the files in binary mode:

...
unicode_input = codecs.open(inputDir, encoding='utf8', mode='rb')
...
unicode_output = codecs.open(outputDir, encoding='utf8', mode='wb')

Upvotes: 0
