Question

Hello, I retrieve text-based UTF-8 data from a foreign source which contains special characters such as u"ıöüç", and I want to normalize them to their English equivalents, e.g. "ıöüç" -> "iouc". What would be the best way to achieve this?


Solution

I recommend using Unidecode module:

>>> from unidecode import unidecode
>>> unidecode(u'ıöüç')
'iouc'

Note how you feed it a Unicode string and it outputs a byte string (on Python 3 it returns a regular str). The output is guaranteed to contain only ASCII characters.
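For example, a minimal sketch of the whole pipeline, assuming the foreign source hands you raw UTF-8 bytes (the variable names are made up for illustration):

from unidecode import unidecode

raw = b'\xc4\xb1\xc3\xb6\xc3\xbc\xc3\xa7'  # UTF-8 bytes for "ıöüç" as they might arrive
text = raw.decode('utf-8')                 # decode the bytes to a Unicode string first
print(unidecode(text))                     # -> 'iouc'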

Other tips

It all depends on how far you want to go in transliterating the result. If you want to convert everything all the way to ASCII (αβγ to abg) then unidecode is the way to go.

If you just want to remove accents from accented letters, then you could try decomposing your string using normalization form NFKD (this converts the accented letter á to a plain letter a followed by U+0301 COMBINING ACUTE ACCENT) and then discarding the accents (which belong to the Unicode character class Mn — "Mark, nonspacing").

import unicodedata

def remove_nonspacing_marks(s):
    "Decompose the unicode string s and remove non-spacing marks."
    return ''.join(c for c in unicodedata.normalize('NFKD', s)
                   if unicodedata.category(c) != 'Mn')
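A quick check of how it behaves (Python 3 shown; note that a character like the dotless 'ı' has no decomposition, so this approach leaves it unchanged, unlike unidecode):

>>> remove_nonspacing_marks('áöüç')
'aouc'
>>> remove_nonspacing_marks('ıöüç')
'ıouc'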

The simplest way I found:

unicodedata.normalize('NFKD', s).encode("ascii", "ignore")
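Bear in mind that .encode() returns a bytes object in Python 3, and that any character with no decomposition to an ASCII base letter (the dotless 'ı', for instance) is silently dropped by the "ignore" handler. A minimal sketch that wraps this back into a text string (the helper name to_ascii is just for illustration):

import unicodedata

def to_ascii(s):
    "NFKD-decompose s and drop anything that cannot be encoded as ASCII."
    return unicodedata.normalize('NFKD', s).encode('ascii', 'ignore').decode('ascii')

print(to_ascii('ıöüç'))  # -> 'ouc' (the dotless 'ı' is discarded entirely)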

These approaches rely on the unicodedata module from the standard library; see the documentation for unicodedata.normalize():

http://docs.python.org/library/unicodedata.html
