bozdoz

Reputation: 12870

How to print non-ASCII characters to file in Python 2.7

I'm trying to obfuscate some JavaScript by altering its character codes, but I've found that I can't correctly write characters outside of a certain range to a file in Python 2.7.

For example, here's what I'm trying to do:

f = open('text.txt','w')
f.write(unichr(510).encode('utf-8'))
f.close()

I can't write unichr(510) directly because Python raises a UnicodeEncodeError (the 'ascii' codec can't encode a character whose ordinal is not in range(128)). So I encode it with UTF-8, which turns the single character u'\u01fe' into the two bytes '\xc7\xbe'.
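
To make that concrete, a minimal Python 2.7 interpreter sketch:

>>> unichr(510)
u'\u01fe'
>>> unichr(510).encode('utf-8')
'\xc7\xbe'
>>> len(unichr(510).encode('utf-8'))
2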

Now, in JavaScript, it's easy to get the symbol for the character code 510:

String.fromCharCode(510)

Gives the single character: Ǿ

What I'm getting with Python is two characters: Ǿ

If I pass those two characters to JavaScript, I can't retrieve the original single character.
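
Decoding the two bytes as UTF-8 does give back the single character; decoding them as Latin-1 leaves two (a sketch of the round trip):

>>> '\xc7\xbe'.decode('utf-8')
u'\u01fe'
>>> '\xc7\xbe'.decode('latin-1')
u'\xc7\xbe'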

I know that it is possible to print the Ǿ character in Python, but I haven't been able to figure out how. I've gotten as far as using unichr() instead of chr(), and encoding it to 'utf-8', but I'm still coming up short. I've also read that Python 3 has this functionality built into its chr() function, but that won't help me.

Does anyone know how I can accomplish this task?

Thank you.

Upvotes: 2

Views: 5293

Answers (3)

bbayles

Reputation: 4527

How about this?

import codecs

# codecs.open returns a wrapped file object that encodes
# unicode strings to UTF-8 as they are written
outfile = codecs.open(r"C:\temp\unichr.txt", mode='w', encoding="utf-8")
outfile.write(unichr(510))
outfile.close()
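
io.open() (available since Python 2.6) behaves the same way and matches Python 3's built-in open(); a minimal equivalent sketch, using the same example path:

import io

# the io module's open() also encodes unicode on write
with io.open(r"C:\temp\unichr.txt", mode='w', encoding='utf-8') as outfile:
    outfile.write(unichr(510))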

Upvotes: 4

Sheng

Reputation: 3555

You should open the file in binary mode:

f = open('text.txt','wb')

And then write the bytes (in Python 3):

f.write(chr(510).encode('utf-8'))

Or in Python 2:

f.write(unichr(510).encode('utf-8'))

Finally, close the file:

f.close()
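
Putting those Python 2 steps together with a context manager (a sketch using the question's filename):

with open('text.txt', 'wb') as f:
    # encode to UTF-8 explicitly, then write the raw bytes
    f.write(unichr(510).encode('utf-8'))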

Or you could do it more cleanly like this (Python 3):

>>> f = open('e:\\text.txt','wt',encoding="utf-8")
>>> f.write(chr(510))
>>> f.close()
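
Note that the encoding argument to the built-in open() only exists in Python 3; on Python 2, io.open() accepts it with the same semantics (a sketch with the same example path):

import io

f = io.open('e:\\text.txt', 'wt', encoding='utf-8')
f.write(unichr(510))  # takes a unicode string, encodes on write
f.close()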

After that, you could read the file as:

>>> f = open('e:\\text.txt','rb')
>>> content = f.read().decode('utf-8')
>>> content
'Ǿ'

Or

>>> f = open('e:\\text.txt','rt',encoding='utf-8')
>>> f.read()
'Ǿ'

Tested on my Win7 machine with Python 3. With the io.open() variant sketched above, it should work on Python 2.x as well.

Upvotes: 4

unutbu

Reputation: 880269

Python is writing the bytes '\xc7\xbe' to the file:

In [45]: unichr(510).encode('utf-8')
Out[45]: '\xc7\xbe'

JavaScript is apparently decoding those bytes as Latin-1, forming the two-character unicode u'\xc7\xbe' instead:

In [46]: 'Ǿ'.decode('utf-8')
Out[46]: u'\xc7\xbe'

In [47]: 'Ǿ'.decode('utf-8').encode('latin-1')
Out[47]: '\xc7\xbe'

The problem is in how JavaScript is converting the bytes to unicode (decoding them as Latin-1 rather than UTF-8), not in how Python is writing the bytes.
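
Whatever consumes the file needs to decode the bytes as UTF-8; then the two bytes collapse back into one character (continuing the session as a sketch, assuming the text.txt written in the question):

In [48]: open('text.txt', 'rb').read().decode('utf-8')
Out[48]: u'\u01fe'

In [49]: ord(_)
Out[49]: 510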

Upvotes: 1
