Python's encoding issues have been giving me quite a headache lately, so I'm writing down some of the pitfalls I've hit. Feedback welcome~
Original post: python2 编码问题分析
Python 2 encoding issues
Python 2 has two string types: str and unicode.
Python 2.7.10 (default, Aug 22 2015, 20:33:39)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> a = u'百度'
>>> print a.__class__
<type 'unicode'>
>>> print a
百度
>>> a
u'\u767e\u5ea6'
>>> print u'\u767e\u5ea6'
百度
>>> len(a)
2
>>> aa = a.encode('unicode_escape')
>>> aa
'\\u767e\\u5ea6'
>>> print aa
\u767e\u5ea6
>>> print len(aa)
12
>>> b = a.encode('utf8')
>>> print b.__class__
<type 'str'>
In Python 2, a unicode string is stored in an encoding-independent way: what it holds are code points. For example, the Chinese string u'百度' is the two code points u'\u767e\u5ea6'.
A str, on the other hand, always implies some concrete encoding, such as utf8, and what it holds are bytes. You can also apply the unicode_escape codec to a unicode string: aa above is 12 characters long, and in that form it can be safely written to a file. For a str, len() returns the number of bytes. With UTF-16 (where every BMP character occupies 2 bytes), encoding a unicode string to str also prepends a 2-byte byte-order mark, an invisible character such as '\xff\xfe'.
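In Python 3 terms, where unicode became str and the old byte-oriented str became bytes, the same round trips look like this. A minimal sketch of the points above (character counts, unicode_escape, and the UTF-16 BOM):

```python
# -*- coding: utf-8 -*-
# str holds code points; bytes holds the result of a concrete encoding.
s = '百度'
assert len(s) == 2                       # 2 characters (code points)

b = s.encode('utf8')
assert len(b) == 6                       # each CJK character is 3 bytes in UTF-8

esc = s.encode('unicode_escape')
assert esc == b'\\u767e\\u5ea6'          # 12 ASCII bytes, safe to write to a file
assert len(esc) == 12

# UTF-16 prepends a byte-order mark (BOM), so the result is
# 2 (BOM) + 2 characters * 2 bytes = 6 bytes.
b16 = s.encode('utf-16')
assert len(b16) == 6
assert b16[:2] in (b'\xff\xfe', b'\xfe\xff')   # little- or big-endian BOM
```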
Question: how many bytes does a unicode string occupy in memory?
Answer: it depends — not on your code, but on how the interpreter was built: 2 bytes per character for a UCS2 build, 4 for a UCS4 build. You can tell which one you have by checking sys.maxunicode. A Stack Overflow answer explains it as follows:
Python 2 and Python 3.0-3.2 use either UCS2* or UCS4 for unicode characters, meaning it'll either use 2 bytes or 4 bytes for each character. Which one is picked is a compile-time option.
\u2049 is then represented as either \x49\x20 or \x20\x49 or \x49\x20\x00\x00 or \x00\x00\x20\x49, depending on the native byte order of your system and if UCS2 or UCS4 was picked. ASCII characters in a unicode string still use 2 or 4 bytes per character too.
Python 3.3 switched to a new internal representation, using the most compact form needed to represent all characters in a string. Either 1 byte, 2 bytes or 4 bytes are picked. ASCII and Latin-1 text uses just 1 byte per character, the rest of the BMP characters require 2 bytes and after that 4 bytes is used.
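On Python 3.3+ you can observe that flexible representation (PEP 393) directly with sys.getsizeof. A small sketch — exact base sizes vary by interpreter version, so only the per-character increments are meaningful:

```python
import sys

# Python 3.3+ always exposes the full Unicode range, regardless of build.
assert sys.maxunicode == 0x10FFFF

# PEP 393: per-character storage depends on the widest character present.
ascii_step = sys.getsizeof('ab') - sys.getsizeof('a')      # ASCII: 1 byte/char
bmp_step   = sys.getsizeof('百百') - sys.getsizeof('百')    # BMP: 2 bytes/char
astral_step = sys.getsizeof('😀😀') - sys.getsizeof('😀')   # astral: 4 bytes/char
print(ascii_step, bmp_step, astral_step)
```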
json.dumps and json.loads
json.dumps serializes Python data to JSON; json.loads does the reverse. Correspondingly, json.dump and json.load work with file objects.
The output of json.dumps is mostly identical to the original Python data:
>>> import json
>>> baidu = '百度'
>>> baidu
'\xe7\x99\xbe\xe5\xba\xa6'
>>> len(baidu)
6
>>> baidu.decode('utf8')
u'\u767e\u5ea6'
>>> len(baidu.decode('utf8'))
2
>>> a = json.dumps(baidu)
>>> a
'"\\u767e\\u5ea6"'
>>> print a
"\u767e\u5ea6"
>>> len(a)
14
>>> [e for e in a]
['"', '\\', 'u', '7', '6', '7', 'e', '\\', 'u', '5', 'e', 'a', '6', '"']
>>> at = json.dumps(baidu, ensure_ascii=False)
>>> at
'"\xe7\x99\xbe\xe5\xba\xa6"'
>>> print at
"百度"
>>> at.decode('utf8')
u'"\u767e\u5ea6"'
>>> len(at)
8
>>> [e for e in at]
['"', '\xe7', '\x99', '\xbe', '\xe5', '\xba', '\xa6', '"']
By default, json.dumps escapes non-ASCII characters when serializing Python data. That is why a above is 14 bytes long: it stores the individual ASCII characters of the escaped code-point representation, quotes included.
With ensure_ascii=False, the characters are emitted as-is in whatever encoding the input str carried — here utf8.
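The same behaviour carries over to Python 3, where json.dumps returns a str rather than bytes. A quick sketch:

```python
import json

s = '百度'
a = json.dumps(s)                        # default: escape non-ASCII
assert a == '"\\u767e\\u5ea6"'           # 14 ASCII characters, quotes included
assert len(a) == 14

at = json.dumps(s, ensure_ascii=False)   # keep the characters as-is
assert at == '"百度"'
assert len(at) == 4                      # Python 3 len() counts characters,
                                         # which is why this is 4 here but 8
                                         # (bytes) in the Python 2 session above

assert json.loads(a) == json.loads(at) == s   # both round-trip identically
```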
Python 3
Python 2 is slated to stop being maintained in 2020, though given the current trend, who knows. Still, Python 3 does have many genuinely nice features and is worth spending some time on.