
Detailed example of word vector embedding

PHP中文网 | Released 2017-06-21
Word vector embedding requires efficient processing of large-scale text corpora; the best-known method is word2vec. The simplest approach feeds words into the learning system one-hot encoded: each word becomes a vector whose length equals the vocabulary size, with a 1 at the word's position and 0 everywhere else. These vectors are extremely high-dimensional and cannot describe the semantic associations between different words.

Co-occurrence representations address the semantic association: traverse a large corpus, count the words that appear within a certain distance of each word, and represent each word by the normalized counts of its neighbors. Words used in similar contexts then have similar semantics. PCA or a similar method can reduce the dimensionality of the co-occurrence vectors to obtain a denser representation. This performs well, but it requires tracking the full co-occurrence matrix, whose width and height both equal the vocabulary length.

In 2013, Tomas Mikolov et al. proposed computing word representations from context: "Efficient estimation of word representations in vector space" (arXiv preprint arXiv:1301.3781, 2013). Their skip-gram model starts from random representations and uses a simple classifier to predict the context words of the current word; the error is propagated through both the classifier weights and the word representations, and both are adjusted to reduce the prediction error. Trained on a large corpus, the representation vectors approximate the compressed co-occurrence vectors.
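The contrast between one-hot and co-occurrence representations can be seen on a toy corpus (a minimal sketch; the sentence and window size are illustrative, not from the article's code):

```python
import numpy as np

corpus = 'the cat sat on the mat the dog sat on the rug'.split()
vocabulary = sorted(set(corpus))
index = {word: i for i, word in enumerate(vocabulary)}

# One-hot: dimension equals vocabulary size, and every pair of distinct
# words is orthogonal, so the encoding carries no notion of similarity.
one_hot = np.eye(len(vocabulary))
print(one_hot[index['cat']] @ one_hot[index['dog']])  # 0.0

# Co-occurrence: count neighbours within a window of 1 on each side.
counts = np.zeros((len(vocabulary), len(vocabulary)))
for i, word in enumerate(corpus):
    for j in range(max(0, i - 1), min(len(corpus), i + 2)):
        if i != j:
            counts[index[word], index[corpus[j]]] += 1

# 'cat' and 'dog' get similar rows because they appear in similar contexts.
row = lambda w: counts[index[w]] / counts[index[w]].sum()
print(np.round(row('cat') @ row('dog'), 2))  # 0.5
```

Both 'cat' and 'dog' neighbor 'the' and 'sat', so their normalized count rows overlap, while the one-hot vectors remain orthogonal no matter how the words are used.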

The dataset is an English Wikipedia dump file. The full dump contains the complete revision history of all pages; the current-pages-only version is around 100 GB.

Download the dump file and extract the words from each page. Count the occurrences of words to build a vocabulary of common words, then encode the extracted pages using that vocabulary. The file is read line by line and the results are written immediately to disk. Checkpoints are saved between the different steps so that a program crash does not force a full restart.

__iter__ traverses pages as lists of word indices. encode returns the vocabulary index of a string word; decode maps a vocabulary index back to the string word. _read_pages extracts words from the Wikipedia dump file (compressed XML) and saves them to a pages file, with one line of space-delimited words per page; the bz2 module's open function reads and writes the files, so intermediate results are stored compressed. A regular expression captures any sequence of consecutive letters, or individual special characters. _build_vocabulary counts the words in the pages file and writes the high-frequency ones to the vocabulary file. One-hot encoding requires such a vocabulary, with each word encoded by its index. Spelling errors and extremely uncommon words are removed: the vocabulary contains only the vocabulary_size - 1 most common words. All words not in the vocabulary are marked with the <unk> token and do not receive meaningful word vectors.
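The tokenizer's behavior follows directly from the regular expression used in the class below; a quick check (the example sentence is illustrative):

```python
import re

# Same pattern as Wikipedia.TOKEN_REGEX: runs of ASCII letters, or single
# punctuation characters; everything else (digits, markup, non-Latin
# scripts) is silently dropped.
TOKEN_REGEX = re.compile(r'[A-Za-z]+|[!?.:,()]')

page = "Anarchism (from Greek: αναρχία) is a political philosophy."
words = [w.lower() for w in TOKEN_REGEX.findall(page)]
print(words)
# ['anarchism', '(', 'from', 'greek', ':', ')', 'is', 'a',
#  'political', 'philosophy', '.']
```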

Training samples are formed dynamically, so large amounts of data can be organized without the classifier occupying a large amount of memory. The skip-gram model predicts the context words of the current word: the text is traversed with the current word as data and its surrounding words as targets, and training samples are created from each pair. With context size R, each word generates 2R samples, from the R words to its left and right. Semantically, close context matters most, so as few training samples as possible are created from far-context words: the context size for each word is chosen at random from the range [1, D], with D = 10. Training pairs are formed according to the skip-gram model, and NumPy arrays turn the numerical stream into batch data.
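The training script further below imports skipgrams and batched from modules the article does not list. Consistent with the description above (random per-word context size in [1, max_context], NumPy batch arrays), they might look like this sketch:

```python
import random
import numpy as np

def skipgrams(pages, max_context):
    """Yield (current word, context word) index pairs from encoded pages."""
    for words in pages:
        for index, current in enumerate(words):
            # Randomly shrink the window so far contexts are sampled less.
            context = random.randint(1, max_context)
            for target in words[max(0, index - context): index]:
                yield current, target
            for target in words[index + 1: index + context + 1]:
                yield current, target

def batched(iterator, batch_size):
    """Group the numerical stream into pairs of numpy batch arrays."""
    while True:
        data = np.zeros(batch_size, dtype=np.int32)
        target = np.zeros(batch_size, dtype=np.int32)
        for index in range(batch_size):
            try:
                data[index], target[index] = next(iterator)
            except StopIteration:
                return  # Drop the final partial batch.
        yield data, target

pairs = list(skipgrams([[0, 1, 2, 3]], max_context=1))
print(pairs[:3])  # [(0, 1), (1, 0), (1, 2)]
```

With max_context=1 every word pairs with exactly its left and right neighbor, which makes the generator easy to verify by hand.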

Initially, words are represented by random vectors. The classifier predicts the context words from this intermediate representation. Errors are propagated back to fine-tune both the classifier weights and the input word representations. The model is optimized with MomentumOptimizer, which is not sophisticated but is efficient.

The classifier is the core of the model, and noise-contrastive estimation (NCE) loss performs very well here. Instead of modeling a full softmax classifier over the vocabulary, tf.nn.nce_loss draws new random vectors as negative (contrast) samples, approximating the softmax classifier.
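To make the sampling trick concrete, here is a NumPy sketch of the idea (not the article's code, and a simplification of tf.nn.nce_loss, which handles sampling and rescaling differently): for each true (word, context) pair, a handful of random words serve as negatives, and a logistic loss pushes the true pair toward 1 and the noise toward 0.

```python
import numpy as np

rng = np.random.default_rng(0)
vocabulary_size, embedding_size, num_sampled = 50, 8, 5

embeddings = rng.normal(size=(vocabulary_size, embedding_size))
weight = rng.normal(size=(vocabulary_size, embedding_size))
bias = np.zeros(vocabulary_size)

def nce_loss(word, context):
    """Logistic loss on one true pair plus `num_sampled` random negatives."""
    negatives = rng.integers(0, vocabulary_size, size=num_sampled)
    classes = np.concatenate(([context], negatives))
    labels = np.concatenate(([1.0], np.zeros(num_sampled)))
    # Only num_sampled + 1 logits, instead of vocabulary_size for softmax.
    logits = weight[classes] @ embeddings[word] + bias[classes]
    # Sigmoid cross-entropy: softplus(-logit) for the true pair,
    # softplus(logit) for the noise samples.
    return np.mean(np.log1p(np.exp(-logits * (2 * labels - 1))))

loss = nce_loss(word=3, context=7)
print(float(loss))
```

The saving is that each step touches num_sampled + 1 output rows rather than all vocabulary_size of them.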

When training ends, the final word vectors are written to a file. Training on a subset of the Wikipedia corpus takes about 5 hours on an ordinary CPU and yields the embeddings as a NumPy array; the complete corpus takes correspondingly longer. The AttrDict class is equivalent to a Python dict, with keys additionally accessible as attributes.
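The AttrDict helper can be written in a few lines; this is one possible implementation, since the article's helpers module is not shown:

```python
class AttrDict(dict):
    """A dict whose keys are also readable and writable as attributes."""

    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

    def __setattr__(self, key, value):
        self[key] = value

params = AttrDict(vocabulary_size=10000, embedding_size=200)
print(params.embedding_size)  # 200
```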

import bz2
import collections
import os
import re

from lxml import etree
from helpers import download


class Wikipedia:

    TOKEN_REGEX = re.compile(r'[A-Za-z]+|[!?.:,()]')

    def __init__(self, url, cache_dir, vocabulary_size=10000):
        self._cache_dir = os.path.expanduser(cache_dir)
        self._pages_path = os.path.join(self._cache_dir, 'pages.bz2')
        self._vocabulary_path = os.path.join(self._cache_dir, 'vocabulary.bz2')
        if not os.path.isfile(self._pages_path):
            print('Read pages')
            self._read_pages(url)
        if not os.path.isfile(self._vocabulary_path):
            print('Build vocabulary')
            self._build_vocabulary(vocabulary_size)
        with bz2.open(self._vocabulary_path, 'rt') as vocabulary:
            print('Read vocabulary')
            self._vocabulary = [x.strip() for x in vocabulary]
        self._indices = {x: i for i, x in enumerate(self._vocabulary)}

    def __iter__(self):
        """Iterate over pages as lists of word indices."""
        with bz2.open(self._pages_path, 'rt') as pages:
            for page in pages:
                words = page.strip().split()
                words = [self.encode(x) for x in words]
                yield words

    @property
    def vocabulary_size(self):
        return len(self._vocabulary)

    def encode(self, word):
        """Map a word to its vocabulary index; unknown words map to 0."""
        return self._indices.get(word, 0)

    def decode(self, index):
        """Map a vocabulary index back to its word."""
        return self._vocabulary[index]

    def _read_pages(self, url):
        """Extract words from the dump, one space-delimited page per line."""
        wikipedia_path = download(url, self._cache_dir)
        with bz2.open(wikipedia_path) as wikipedia, \
                bz2.open(self._pages_path, 'wt') as pages:
            for _, element in etree.iterparse(wikipedia, tag='{*}page'):
                if element.find('./{*}redirect') is not None:
                    continue
                page = element.findtext('./{*}revision/{*}text')
                words = self._tokenize(page)
                pages.write(' '.join(words) + '\n')
                element.clear()

    def _build_vocabulary(self, vocabulary_size):
        """Keep the most common words; index 0 is the <unk> placeholder."""
        counter = collections.Counter()
        with bz2.open(self._pages_path, 'rt') as pages:
            for page in pages:
                words = page.strip().split()
                counter.update(words)
        common = ['<unk>'] + [
            x[0] for x in counter.most_common(vocabulary_size - 1)]
        with bz2.open(self._vocabulary_path, 'wt') as vocabulary:
            for word in common:
                vocabulary.write(word + '\n')

    @classmethod
    def _tokenize(cls, page):
        words = cls.TOKEN_REGEX.findall(page)
        words = [x.lower() for x in words]
        return words

import tensorflow as tf
import numpy as np

from helpers import lazy_property


class EmbeddingModel:

    def __init__(self, data, target, params):
        self.data = data
        self.target = target
        self.params = params
        # Touch the lazy properties so the graph is built up front.
        self.embeddings
        self.cost
        self.optimize

    @lazy_property
    def embeddings(self):
        initial = tf.random_uniform(
            [self.params.vocabulary_size, self.params.embedding_size],
            -1.0, 1.0)
        return tf.Variable(initial)

    @lazy_property
    def optimize(self):
        optimizer = tf.train.MomentumOptimizer(
            self.params.learning_rate, self.params.momentum)
        return optimizer.minimize(self.cost)

    @lazy_property
    def cost(self):
        embedded = tf.nn.embedding_lookup(self.embeddings, self.data)
        weight = tf.Variable(tf.truncated_normal(
            [self.params.vocabulary_size, self.params.embedding_size],
            stddev=1.0 / self.params.embedding_size ** 0.5))
        bias = tf.Variable(tf.zeros([self.params.vocabulary_size]))
        target = tf.expand_dims(self.target, 1)
        # Note: in TensorFlow >= 1.0, tf.nn.nce_loss takes labels before
        # inputs, i.e. nce_loss(weight, bias, target, embedded, ...).
        return tf.reduce_mean(tf.nn.nce_loss(
            weight, bias, embedded, target,
            self.params.contrastive_examples,
            self.params.vocabulary_size))
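EmbeddingModel relies on a lazy_property decorator imported from helpers, which the article does not list. A typical implementation (an assumption, but the standard pattern for this style of TensorFlow model) computes each property once on first access and caches the result:

```python
import functools

def lazy_property(function):
    """Compute the wrapped property on first access, then cache it."""
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def wrapper(self):
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)
    return wrapper
```

The caching matters for graph-building code: without it, every access to model.cost or model.optimize would add a fresh set of operations to the TensorFlow graph.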

import collections

import numpy as np
import tensorflow as tf

from batched import batched
from EmbeddingModel import EmbeddingModel
from skipgrams import skipgrams
from Wikipedia import Wikipedia
from helpers import AttrDict

WIKI_DOWNLOAD_DIR = './wikipedia'

params = AttrDict(
    vocabulary_size=10000,
    max_context=10,
    embedding_size=200,
    contrastive_examples=100,
    learning_rate=0.5,
    momentum=0.5,
    batch_size=1000,
)

data = tf.placeholder(tf.int32, [None])
target = tf.placeholder(tf.int32, [None])
model = EmbeddingModel(data, target, params)

corpus = Wikipedia(
    'https://dumps.wikimedia.org/enwiki/20160501/'
    'enwiki-20160501-pages-meta-current1.xml-p000000010p000030303.bz2',
    WIKI_DOWNLOAD_DIR,
    params.vocabulary_size)
examples = skipgrams(corpus, params.max_context)
batches = batched(examples, params.batch_size)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
average = collections.deque(maxlen=100)
for index, batch in enumerate(batches):
    feed_dict = {data: batch[0], target: batch[1]}
    cost, _ = sess.run([model.cost, model.optimize], feed_dict)
    average.append(cost)
    print('{}: {:5.1f}'.format(index + 1, sum(average) / len(average)))
    if index > 100000:
        break

embeddings = sess.run(model.embeddings)
np.save(WIKI_DOWNLOAD_DIR + '/embeddings.npy', embeddings)
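Once embeddings.npy is written, the vectors can be inspected, for example by listing a word's nearest neighbors under cosine similarity. A sketch (the random array stands in for the trained one; a real run would load './wikipedia/embeddings.npy' and use the corpus vocabulary for decoding):

```python
import numpy as np

# Stand-in for np.load('./wikipedia/embeddings.npy').
rng = np.random.default_rng(1)
embeddings = rng.normal(size=(10000, 200))

def nearest(index, count=5):
    """Indices of the words closest to `index` by cosine similarity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    similarity = normed @ normed[index]
    # Skip position 0 of the ranking, which is the word itself.
    return np.argsort(-similarity)[1:count + 1]

print(nearest(42))
```

With trained vectors, the neighbors of a word index decode to semantically related words; with the random stand-in above, they are of course arbitrary.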

