I recently ran into a requirement at work: detect whether a field contains rare characters or illegal characters such as ~!@#$%^&*. I solved it with the help of information found online, and I'm sharing the solution process and sample code here for anyone who needs it. Let's take a look.
Solution idea
The first idea that came to mind was to use Python's regular expressions to match the illegal characters and thereby find the invalid records. But while the ideal was simple, reality was harsher: during implementation I discovered how little I knew about character encodings and Python's internal string representation. I stumbled into quite a few pitfalls along the way, and although some ambiguities remain, I ended up with a reasonably clear overall picture. I'm recording the experience here so I don't trip over the same things again.
The test environment below is the Python 2.7.8 interpreter that ships with ArcGIS 10.3; I can't guarantee that other Python environments behave the same way.
Python regular expressions
Regular expression support in Python is provided by the built-in re module, of which three functions matter here. re.compile() produces a reusable compiled pattern, while match() and search() return match results. The difference between the two: match() matches only at the specified position, whereas search() scans forward from that position until it finds a match. In the code below, match_result tries to match at the first character 'f' and is None because the match fails; search_result scans forward from 'f' until it finds the first matching character, 'a', and group() then returns the matched text 'a'.
```python
import re

pattern = re.compile('[abc]')

match_result = pattern.match('fabc')
if match_result:
    print match_result.group()

search_result = pattern.search('fabc')
if search_result:
    print search_result.group()
```
The example above compiles a pattern first and then matches against it. We could instead call re.match(pattern, string) directly and get the same result. However, direct matching is less flexible than compile-then-match. First, the regular expression cannot be reused: if a large amount of data is matched against the same pattern, the expression must be processed internally every time, costing performance. Second, re.match() is less capable than pattern.match(): the latter can specify the position at which matching starts.
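As a quick illustration of that last point (my own snippet, not from the original article; written with single-argument print() so it runs on both Python 2 and 3), the start-position argument is the practical difference:

```python
import re

pattern = re.compile('[abc]')

# re.match() always anchors at position 0, so the leading 'f' makes it fail:
print(re.match('[abc]', 'fabc'))         # None

# A compiled pattern's match() accepts a start position:
print(pattern.match('fabc', 1).group())  # matches 'a' at index 1
```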
Encoding issues
With the basics of Python regular expressions covered, all that remained was to find a suitable expression to match rare characters and illegal characters. Illegal characters are easy; the following pattern matches them:
```python
pattern = re.compile(r'[~!@#$%^&* ]')
```
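A quick sanity check of this pattern (my own example, not from the original article):

```python
import re

pattern = re.compile(r'[~!@#$%^&* ]')

print(bool(pattern.search(u'hello')))   # False: no illegal characters
print(bool(pattern.search(u'he#llo')))  # True: contains '#'
```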
Matching rare characters, however, genuinely stumped me. First there is the question of definition: which characters count as rare? After discussing it with the project manager, we settled on this: any character outside GB2312 is a rare character. The next question was how to match GB2312 characters.
Some research showed that the GB2312 two-byte range is [\xA1-\xF7][\xA1-\xFE], of which the hanzi area is [\xB0-\xF7][\xA1-\xFE]. The expression with rare-character matching added therefore became:
```python
pattern = re.compile(r'[~!@#$%^&* ]|[^\xA1-\xF7][^\xA1-\xFE]')
```
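To see what those byte ranges mean in practice, here is a small check of my own (not from the article): encode a common hanzi to GB2312 and verify that its two bytes fall inside the documented ranges.

```python
# -*- coding: utf-8 -*-
# '中' (U+4E2D) encodes to the two GB2312 bytes 0xD6 0xD0.
raw = u'\u4e2d'.encode('gb2312')
hi, lo = bytearray(raw)       # bytearray indexing works on both Python 2 and 3

print(0xA1 <= hi <= 0xF7)     # True: first byte lies in [\xA1-\xF7]
print(0xA1 <= lo <= 0xFE)     # True: second byte lies in [\xA1-\xFE]
```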
The problem looked solved, but I was still too simple, too naive. The strings to be checked are all read from layer files, and arcpy thoughtfully decodes everything it reads into unicode. So I needed to find the range that the GB2312 character set occupies in Unicode. The reality, though, is that GB2312 characters are not distributed contiguously in Unicode, and a regular expression covering that scattered range would be hopelessly complicated. The regex approach to matching rare characters had hit a dead end.
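A concrete illustration of that scattering (my own example, not from the article): '啊' is the very first hanzi in GB2312 order, yet its Unicode code point is higher than that of '中', which comes much later in GB2312 — so the mapping preserves neither contiguity nor order.

```python
# -*- coding: utf-8 -*-
a = u'\u554a'   # '啊': first hanzi in GB2312 (bytes 0xB0 0xA1)
z = u'\u4e2d'   # '中': later in GB2312 (bytes 0xD6 0xD0)

print(a.encode('gb2312') < z.encode('gb2312'))  # True: GB2312 byte order
print(ord(a) < ord(z))                          # False: Unicode order differs
```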
Solution
Since the strings arrive in unicode format, could I convert them to GB2312 and then match? In general, no: the Unicode character set is much larger than the GB2312 character set, so the conversion GB2312 => unicode always succeeds, but the reverse conversion unicode => GB2312 may fail.
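That asymmetry is easy to demonstrate (my own snippet; the Euro sign is just one example of a character outside GB2312):

```python
# -*- coding: utf-8 -*-
ok = u'\u4e2d\u6587'.encode('gb2312')   # '中文' encodes without trouble
print(repr(ok))

try:
    u'\u20ac'.encode('gb2312')          # Euro sign: not in GB2312
    print('encoded')
except UnicodeEncodeError:
    print('unicode => GB2312 failed')
```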
This suggested another idea: if the unicode => GB2312 conversion of a string fails, doesn't that mean the string contains characters outside the GB2312 character set? So I used unicode_string.encode('GB2312') to attempt the conversion, catching the UnicodeEncodeError exception to identify rare characters.
The final code is as follows:
```python
import re

def is_rare_name(string):
    pattern = re.compile(u"[~!@#$%^&* ]")
    match = pattern.search(string)
    if match:
        return True
    try:
        string.encode("gb2312")
    except UnicodeEncodeError:
        return True
    return False
```
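A few spot checks of the function (the inputs are my own examples; the function body is repeated here unchanged so the snippet runs standalone):

```python
import re

# Standalone copy of the article's final function.
def is_rare_name(string):
    pattern = re.compile(u"[~!@#$%^&* ]")
    if pattern.search(string):
        return True
    try:
        string.encode("gb2312")
    except UnicodeEncodeError:
        return True
    return False

print(is_rare_name(u"hello"))   # False: plain ASCII is within GB2312
print(is_rare_name(u"he#llo"))  # True: contains an illegal character
print(is_rare_name(u"\u20ac"))  # True: Euro sign is not in GB2312
print(is_rare_name(u"\u4e2d"))  # False: '中' is in GB2312
```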
Summary
That is the full process of detecting rare characters with Python. (This article was originally published on the PHP Chinese website.)