How to build a system?

How do we build a system that extracts structured information and data from unstructured text? Which methods are suited to this task? Which corpora are appropriate for the work? Can we train and evaluate models for it?

Information extraction, especially structured information extraction, can be compared to database records: a relation binds the corresponding pieces of data together. For unstructured data such as natural language, obtaining those relations means finding the relations associated with each entity and recording them using data structures such as strings and tuples.
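To make the target concrete, here is a minimal sketch of what "structured" output might look like; the sentence, the relation names, and the records below are invented for illustration and are not part of the original article.

text = "Carlyle Group is a merchant banking concern based in New York."

# Record-like tuples that an information-extraction system might produce
# from the sentence above (illustrative only).
records = [
    ("Carlyle Group", "organization_type", "merchant banking concern"),
    ("Carlyle Group", "based_in", "New York"),
]

for subject, relation, obj in records:
    print(subject, relation, obj)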

Entity recognition: chunking technology

For example, take the sentence "We saw the yellow dog". Following the idea of chunking, the last three words are grouped into an NP chunk, and within it the words are tagged DT/JJ/NN respectively; "saw" is tagged VBD, and "We" forms an NP of its own. For the last three words, NP is the chunk (the larger unit). To achieve this, you can use NLTK's built-in chunk grammar, which works much like regular expressions, to chunk a sentence.
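As a quick sketch of that example (the grammar below is my own illustration, not taken from the article; it also chunks the pronoun so that "We" comes out as an NP):

import nltk

# Illustrative grammar: a pronoun on its own, or a determiner/adjective/noun
# sequence, each forms an NP chunk.
grammar = r"""
NP: {<PRP>}             # a pronoun by itself is an NP
    {<DT>?<JJ>*<NN>}    # optional determiner, any adjectives, then a noun
"""
sentence = [('We','PRP'), ('saw','VBD'), ('the','DT'), ('yellow','JJ'), ('dog','NN')]
cp = nltk.RegexpParser(grammar)
print(cp.parse(sentence))
# (S (NP We/PRP) saw/VBD (NP the/DT yellow/JJ dog/NN))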

Constructing a chunk grammar

Just pay attention to three points:

  • Basic chunking:

    Written as ChunkName: {pattern for the sub-chunks}, for example a string like "NP: {<DT>?<JJ>*<NN>}". The characters ?, * and + keep their usual regular-expression meanings.

import nltk

sentence = [('the','DT'), ('little','JJ'), ('yellow','JJ'), ('dog','NN'), ('bark','VBD')]
grammar = "NP: {<DT>?<JJ>*<NN>}"
cp = nltk.RegexpParser(grammar)   # build the rule
result = cp.parse(sentence)       # chunk the sentence
print(result)
result.draw()                     # draw the tree in a pop-up window


  • You can define a chink, written }<TAG>+{ , for sequences of tokens that should be excluded from a chunk:

    import nltk

    sentence = [('the','DT'), ('little','JJ'), ('yellow','JJ'), ('dog','NN'),
                ('bark','VBD'), ('at','IN'), ('the','DT'), ('cat','NN')]
    grammar = r"""NP:
        {<DT>?<JJ>*<NN>}    # chunk determiner, adjectives and noun
        }<VBD|IN>+{         # chink: exclude sequences of VBD and IN
    """                     # when adding a chink, the newline between rules must be kept
    cp = nltk.RegexpParser(grammar)   # build the rules
    result = cp.parse(sentence)       # chunk the sentence
    print(result)


  • Rules can refer to each other recursively, which matches the recursive nesting found in language structure, for example a grammar in which PP and VP rules refer to NP chunks (see the sketch after this list). In that case the loop parameter of the RegexpParser function can be set to 2, so the cascade of rules is applied in several passes and nothing is missed.
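The cascaded grammar below is only a sketch in the spirit of the example from the NLTK book; the exact tag patterns are my own choice, since the original rules did not survive formatting.

import nltk

# Illustrative cascaded grammar: PP and VP refer to NP, and CLAUSE refers to both,
# so the rule set is recursive; loop=2 applies the whole cascade twice.
grammar = r"""
NP: {<DT|JJ|NN.*>+}              # noun phrase
PP: {<IN><NP>}                   # preposition followed by an NP
VP: {<VB.*><NP|PP|CLAUSE>+$}     # verb followed by its arguments
CLAUSE: {<NP><VP>}               # NP followed by a VP
"""
cp = nltk.RegexpParser(grammar, loop=2)
sentence = [('Mary','NN'), ('saw','VBD'), ('the','DT'), ('cat','NN'),
            ('sit','VB'), ('on','IN'), ('the','DT'), ('mat','NN')]
print(cp.parse(sentence))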

Tree diagram

If you call print(type(result)) to check the type, you will find that it is nltk.tree.Tree. As the name suggests, this is a tree-like structure. nltk.Tree implements the tree structure; it supports joining trees together, and provides node querying and tree drawing.

import nltk

tree1 = nltk.Tree('NP', ['Alick'])
print(tree1)
tree2 = nltk.Tree('N', ['Alick', 'Rabbit'])
print(tree2)
tree3 = nltk.Tree('S', [tree1, tree2])
print(tree3.label())   # look at the label of the tree's root node
tree3.draw()

IOB tags

I, O and B stand for Inside, Outside and Beginning respectively. For chunk types such as the NP used above, you simply prefix the tag with B- (first token of a chunk) or I- (tokens inside a chunk), while tokens outside any chunk are tagged O. This makes the tokens that fall outside the rules explicit, much like adding chinks above.
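As a small sketch of what the IOB view looks like (reusing the toy sentence from above), NLTK can convert between chunk trees and IOB triples:

import nltk

# Convert a list of (word, POS tag, IOB tag) triples into a chunk tree and back.
conll = [('the','DT','B-NP'), ('yellow','JJ','I-NP'),
         ('dog','NN','I-NP'), ('bark','VBD','O')]
tree = nltk.chunk.conlltags2tree(conll)
print(tree)                              # (S (NP the/DT yellow/JJ dog/NN) bark/VBD)
print(nltk.chunk.tree2conlltags(tree))   # back to [(word, tag, IOB tag), ...] triples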

Developing and evaluating chunkers


NLTK already provides chunkers for us, which reduces the need to build rules by hand. It also provides text that has already been divided into chunks, which we can consult when building our own rules.

# Run this code under Python 2
import nltk
from nltk.corpus import conll2000

print conll2000.chunked_sents('train.txt')[99]   # look at one sentence that is already chunked

text = """
he PRP B-NP
accepted VBD B-VP
the DT B-NP
position NN I-NP
of IN B-PP
vice NN B-NP
chairman NN I-NP
of IN B-PP
Carlyle NNP B-NP
Group NNP I-NP
, , O
a DT B-NP
merchant NN I-NP
banking NN I-NP
concern NN I-NP
. . O
"""
result = nltk.chunk.conllstr2tree(text, chunk_types=['NP'])

For the rules cp defined earlier, you can call cp.evaluate([conll2000.chunked_sents('train.txt')[99]]) to test their accuracy (note that evaluate expects a list of gold-standard chunked sentences). Using the unigram tagger we learned about earlier, we can also chunk noun phrases and measure the accuracy:

class UnigramChunker(nltk.ChunkParserI):
    """
    Unigram chunker: learns, from a training set of chunked sentences,
    the most likely chunk tag for each part-of-speech tag, then uses
    that information to chunk new sentences.
    """
    def __init__(self, train_sents):
        """
        :param train_sents: a list of Tree objects
        """
        train_data = []
        for sent in train_sents:
            # convert each Tree into a list of IOB triples [(word, tag, IOB-tag), ...]
            conlltags = nltk.chunk.tree2conlltags(sent)
            # keep only the (POS tag, IOB tag) pairs
            ti_list = [(t, i) for w, t, i in conlltags]
            train_data.append(ti_list)
        # train a unigram tagger that maps POS tags to IOB tags
        self.__tagger = nltk.UnigramTagger(train_data)

    def parse(self, tokens):
        """
        Chunk a POS-tagged sentence.
        :param tokens: a list of (word, POS tag) pairs
        :return: a Tree object
        """
        # extract the POS tags
        tags = [tag for (word, tag) in tokens]
        # predict an IOB tag for each POS tag
        ti_list = self.__tagger.tag(tags)
        # extract the predicted IOB tags
        iob_tags = [iob_tag for (tag, iob_tag) in ti_list]
        # recombine into CoNLL-style (word, POS, IOB) triples
        conlltags = [(word, pos, iob_tag)
                     for ((word, pos), iob_tag) in zip(tokens, iob_tags)]
        return nltk.chunk.conlltags2tree(conlltags)
test_sents = conll2000.chunked_sents("test.txt", chunk_types=["NP"])
train_sents = conll2000.chunked_sents("train.txt", chunk_types=["NP"])

unigram_chunker = UnigramChunker(train_sents)
print(unigram_chunker.evaluate(test_sents))
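As a small usage sketch (the tagged sentence below is my own toy input, not from the article), the trained chunker can also be applied directly to a new POS-tagged sentence:

# Chunk a new POS-tagged sentence with the unigram chunker trained above.
tagged = [('the','DT'), ('little','JJ'), ('yellow','JJ'), ('dog','NN'), ('barked','VBD')]
print(unigram_chunker.parse(tagged))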

Named entity recognition and information extraction

A named entity is a definite noun phrase that refers to a specific type of individual, such as a date, person, or organization. Training a classifier for this yourself would give you a real headache (ˉ▽ ̄~)~~. Fortunately, NLTK provides an already-trained classifier, nltk.ne_chunk(tagged_sent[, binary=False]). If binary is set to True, named entities are simply tagged NE; otherwise the tags are more fine-grained (PERSON, ORGANIZATION, and so on).

import nltk

sent = nltk.corpus.treebank.tagged_sents()[22]
print(nltk.ne_chunk(sent, binary=True))
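A short follow-up sketch (my own addition, not from the article): since the result is a Tree, the named entities themselves can be pulled out by walking its NE subtrees.

import nltk

# Collect the text of every subtree labelled 'NE' in the ne_chunk output.
sent = nltk.corpus.treebank.tagged_sents()[22]
tree = nltk.ne_chunk(sent, binary=True)
entities = [' '.join(word for word, tag in subtree.leaves())
            for subtree in tree.subtrees()
            if subtree.label() == 'NE']
print(entities)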

Once the named entities have been determined, relation extraction can be used to extract information. One approach is to find all triples (X, a, Y), where X and Y are named entities and a is a string that expresses the relationship between the two. An example follows:

# Run this under Python 2
import re
import nltk

IN = re.compile(r'.*\bin\b(?!\b.+ing)')
for doc in nltk.corpus.ieer.parsed_docs('NYT_19980315'):
    for rel in nltk.sem.extract_rels('ORG', 'LOC', doc, corpus='ieer', pattern=IN):
        print nltk.sem.show_raw_rtuple(rel)
