Python: efficient way to replace accented characters (é to e), remove [^a-zA-Z\d\s], and lower()


Using Python 3.3. I would like to do the following:

  • Replace special alphabetical characters, such as e acute (é) and o circumflex (ô), with the base character (ô to o, for example)
  • Remove all characters except alphanumerics and spaces between alphanumerics
  • Convert to lowercase

This is what I have so far:

import re

mystring_modified = mystring.replace('\u00E9', 'e').replace('\u00F4', 'o').lower()
alphnumspace = re.compile(r"[^a-zA-Z\d\s]")
mystring_modified = alphnumspace.sub('', mystring_modified)

How can I improve on this? Efficiency is a big concern, especially since I am currently performing the operations inside a loop:

# Pseudocode
for mystring in myfile:
    mystring_modified = # operations described above
    mylist.append(mystring_modified)

The files in question are about 200,000 characters each.


2 Answers
>>> import unicodedata
>>> s='éô'
>>> ''.join((c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn'))
'eo'
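
For context, here is a minimal sketch of how this accent-stripping step could be combined with the regex removal and lowercasing from the question; the helper names strip_accents and clean are just illustrative, not part of any library:

import re
import unicodedata

_non_alnum_space = re.compile(r"[^a-zA-Z\d\s]")

def strip_accents(s):
    # Decompose characters (NFD), then drop combining marks (category 'Mn')
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if unicodedata.category(c) != 'Mn')

def clean(s):
    # Strip accents, remove everything except alphanumerics/whitespace, lowercase
    return _non_alnum_space.sub('', strip_accents(s)).lower()

print(clean('Héllo, Wôrld!'))  # hello world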

Also check out unidecode:

What Unidecode provides is a middle road: function unidecode() takes Unicode data and tries to represent it in ASCII characters (i.e., the universally displayable characters between 0x00 and 0x7F), where the compromises taken when mapping between two character sets are chosen to be near what a human with a US keyboard would choose.

The quality of resulting ASCII representation varies. For languages of western origin it should be between perfect and good. On the other hand transliteration (i.e., conveying, in Roman letters, the pronunciation expressed by the text in some other writing system) of languages like Chinese, Japanese or Korean is a very complex issue and this library does not even attempt to address it. It draws the line at context-free character-by-character mapping. So a good rule of thumb is that the further the script you are transliterating is from Latin alphabet, the worse the transliteration will be.

Note that this module generally produces better results than simply stripping accents from characters (which can be done in Python with built-in functions). It is based on hand-tuned character mappings that for example also contain ASCII approximations for symbols and non-Latin alphabets.
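As a quick illustration, a usage sketch assuming the third-party unidecode package is installed (pip install unidecode):

from unidecode import unidecode

print(unidecode('éô'))       # eo
print(unidecode('déjà vu'))  # deja vu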

You could use str.translate:

import collections
import string

# Unmapped characters default to None, which str.translate() deletes
table = collections.defaultdict(lambda: None)
table.update({
    ord('é'):'e',
    ord('ô'):'o',
    ord(' '):' ',
    ord('\N{NO-BREAK SPACE}'): ' ',
    ord('\N{EN SPACE}'): ' ',
    ord('\N{EM SPACE}'): ' ',
    ord('\N{THREE-PER-EM SPACE}'): ' ',
    ord('\N{FOUR-PER-EM SPACE}'): ' ',
    ord('\N{SIX-PER-EM SPACE}'): ' ',
    ord('\N{FIGURE SPACE}'): ' ',
    ord('\N{PUNCTUATION SPACE}'): ' ',
    ord('\N{THIN SPACE}'): ' ',
    ord('\N{HAIR SPACE}'): ' ',
    ord('\N{ZERO WIDTH SPACE}'): ' ',
    ord('\N{NARROW NO-BREAK SPACE}'): ' ',
    ord('\N{MEDIUM MATHEMATICAL SPACE}'): ' ',
    ord('\N{IDEOGRAPHIC SPACE}'): ' ',
    ord('\N{IDEOGRAPHIC HALF FILL SPACE}'): ' ',
    ord('\N{ZERO WIDTH NO-BREAK SPACE}'): ' ',
    ord('\N{TAG SPACE}'): ' ',
    })
table.update(dict(zip(map(ord,string.ascii_uppercase), string.ascii_lowercase)))
table.update(dict(zip(map(ord,string.ascii_lowercase), string.ascii_lowercase)))
table.update(dict(zip(map(ord,string.digits), string.digits)))

print('123 fôé BAR҉'.translate(table))

yields

123 foe bar

On the downside, you have to list all of the special accented characters you want to translate. @gnibbler's method requires less coding.

On the plus side, the str.translate method should be quite fast, and once table is set up, it handles all of your requirements (lowercasing, deleting and removing accents) in a single function call.


By the way, a file with 200K characters is not very big. So it would be more efficient to read the entire file into a single str, then translate it in one function call.
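
For example, a minimal sketch of that whole-file approach, assuming the input lives in a hypothetical file myfile.txt and reusing the table built above (note that unmapped characters such as newlines would otherwise be deleted by the defaultdict, so they need an explicit entry):

# Keep newlines so the file can still be split back into lines afterwards
table[ord('\n')] = '\n'

with open('myfile.txt', encoding='utf-8') as f:
    text = f.read()

# One translate() call over the whole file, then split back into lines
mylist = text.translate(table).splitlines()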
