Training Chinese Word Vectors with word2vec in TensorFlow

1. A Brief Introduction to word2vec

word2vec is a toolkit open-sourced by Google in 2013 for learning word vectors. It is simple and efficient, which is why it has attracted so much attention. If you are interested in the mathematics behind word2vec, see the reference 「word2vec 中的數學原理詳解」; it is not covered in detail here. word2vec offers two training architectures: the CBOW model, which predicts the centre word from its context, and the skip-gram model, which predicts the context from the centre word. CBOW is better suited to smaller datasets, while skip-gram tends to perform better on large corpora. In CBOW the context words are combined to predict the centre word; in skip-gram the centre word is fed in and each surrounding word is predicted in turn.
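To make the two prediction directions concrete, here is a rough sketch (not from the original article) of the (input -> label) pairs that CBOW and skip-gram would derive from one toy, already-segmented sentence with a window of one word on each side:

```python
# Rough sketch: the training pairs CBOW and skip-gram generate from one sentence.
sentence = ["我", "愛", "自然", "語言", "處理"]  # a hypothetical, already-segmented sentence
window = 1

for i in range(window, len(sentence) - window):
    context = sentence[i - window:i] + sentence[i + 1:i + 1 + window]
    center = sentence[i]
    print("CBOW     :", context, "->", center)   # context words predict the centre word
    for c in context:
        print("skip-gram:", center, "->", c)     # centre word predicts each context word
```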

2. Training Chinese Word Vectors with word2vec

The word2vec example code can be found on GitHub and already implements training on English text. To get it running, though, one small change is needed: the argument order of tf.nn.nce_loss changed across TensorFlow versions, so it is safest to pass the arguments by keyword. The modified call looks like this:

```python
loss = tf.reduce_mean(
    tf.nn.nce_loss(weights=nce_weights,
                   biases=nce_biases,
                   inputs=embed,
                   labels=train_labels,
                   num_sampled=num_sampled,
                   num_classes=vocabulary_size))
```

Training on English is not covered further here; this section focuses on training Chinese word vectors with the skip-gram model. The Chinese case is slightly more involved, but most of the code is the same as for English; only the first part, which builds the word list, needs to change. The steps for Chinese are:

  1. Segment the text into words, using jieba for word segmentation.
  2. Collect all the words of the corpus into a single list, which is used to build the word-frequency counts, the dictionary, and the reverse dictionary. Since the model cannot work with raw Chinese text, every word has to be replaced by a numeric id.
  3. Build the training data needed by the skip-gram model. Skip-gram predicts the context from the centre word, so the centre word acts as x and each context word acts as y. With a context of one word on each side, take the sentence 「恐怕 頂多 只 需要 三年 時間」 as an example: from 「頂多」 we predict 「恐怕」 and 「只」, from 「只」 we predict 「頂多」 and 「需要」, and so on. The resulting training pairs are (頂多, 恐怕), (頂多, 只), (只, 頂多), (只, 需要), (需要, 只), (需要, 三年); a small sketch of this pair generation follows the list.
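Putting steps 1 and 3 together, here is a minimal sketch (separate from the full script below, and only for illustration; jieba's actual segmentation of this sentence may differ slightly) that segments a raw sentence and enumerates the skip-gram pairs:

```python
import jieba

text = "恐怕頂多只需要三年時間"                     # raw, unsegmented text
words = list(jieba.cut(text, cut_all=False))        # step 1: word segmentation

skip_window = 1                                     # one context word on each side
pairs = []
for i, center in enumerate(words):
    for j in range(max(0, i - skip_window), min(len(words), i + skip_window + 1)):
        if j != i:
            pairs.append((center, words[j]))        # (centre word, context word)

print(pairs)
# With the segmentation shown above this includes pairs such as
# (頂多, 恐怕), (頂多, 只), (只, 頂多), (只, 需要), ...
```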

The full training script (adapted from TensorFlow's word2vec example):

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Created on Sat Sep 1 21:39:20 2017
@author: Deermini
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import math
import random

import jieba
import numpy as np
from six.moves import xrange
import tensorflow as tf


# Read the data into a list of strings.
def read_data():
    """Process the training text and return all of its words as a single list."""
    # Read the stop-word list.
    stop_words = []
    with open('stop_words.txt', 'r', encoding='utf-8') as f:
        line = f.readline()
        while line:
            stop_words.append(line[:-1])
            line = f.readline()
    stop_words = set(stop_words)
    print('Finished reading stop words: {n} in total.'.format(n=len(stop_words)))

    # Read the text, strip newlines and spaces, and segment it with jieba.
    raw_word_list = []
    with open('doupocangqiong.txt', 'r', encoding='utf-8') as f:
        line = f.readline()
        while line:
            while '\n' in line:
                line = line.replace('\n', '')
            while ' ' in line:
                line = line.replace(' ', '')
            if len(line) > 0:  # skip empty sentences
                raw_words = list(jieba.cut(line, cut_all=False))
                raw_word_list.extend(raw_words)
            line = f.readline()
    return raw_word_list


# Step 1: read the file contents into a list of words.
words = read_data()
print('Data size', len(words))

# Step 2: Build the dictionary and replace rare words with UNK token.
vocabulary_size = 50000


def build_dataset(words):
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    print('count', len(count))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary


data, count, dictionary, reverse_dictionary = build_dataset(words)
del words  # free memory
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]])

data_index = 0


# Step 3: Function to generate a training batch for the skip-gram model.
def generate_batch(batch_size, num_skips, skip_window):
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1  # [ skip_window target skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    for i in range(batch_size // num_skips):
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [skip_window]
        for j in range(num_skips):
            while target in targets_to_avoid:
                target = random.randint(0, span - 1)
            targets_to_avoid.append(target)
            batch[i * num_skips + j] = buffer[skip_window]
            labels[i * num_skips + j, 0] = buffer[target]
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels


batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
    print(batch[i], reverse_dictionary[batch[i]], '->',
          labels[i, 0], reverse_dictionary[labels[i, 0]])

# Step 4: Build and train a skip-gram model.
batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.
valid_size = 9        # Must equal len(valid_word), otherwise an error is raised.
valid_window = 100
num_sampled = 64      # Number of negative examples to sample.

# Validation words.
valid_word = ['蕭炎', '靈魂', '火焰', '蕭薰兒', '葯老', '天階', '雲嵐宗', '烏坦城', '驚詫']
valid_examples = [dictionary[li] for li in valid_word]

graph = tf.Graph()
with graph.as_default():
    # Input data.
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Ops and variables pinned to the CPU because of missing GPU implementation.
    with tf.device('/cpu:0'):
        # Look up embeddings for inputs.
        embeddings = tf.Variable(
            tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
        embed = tf.nn.embedding_lookup(embeddings, train_inputs)

        # Construct the variables for the NCE loss.
        nce_weights = tf.Variable(
            tf.truncated_normal([vocabulary_size, embedding_size],
                                stddev=1.0 / math.sqrt(embedding_size)))
        nce_biases = tf.Variable(tf.zeros([vocabulary_size]), dtype=tf.float32)

    # Compute the average NCE loss for the batch.
    # tf.nce_loss automatically draws a new sample of the negative labels each
    # time we evaluate the loss.
    loss = tf.reduce_mean(
        tf.nn.nce_loss(weights=nce_weights,
                       biases=nce_biases,
                       inputs=embed,
                       labels=train_labels,
                       num_sampled=num_sampled,
                       num_classes=vocabulary_size))

    # Construct the SGD optimizer using a learning rate of 1.0.
    optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

    # Compute the cosine similarity between minibatch examples and all embeddings.
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)

    # Add variable initializer.
    init = tf.global_variables_initializer()

# Step 5: Begin training.
num_steps = 2000000

with tf.Session(graph=graph) as session:
    # We must initialize all variables before we use them.
    init.run()
    print('Initialized')

    average_loss = 0
    for step in xrange(num_steps):
        batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
        feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}

        # We perform one update step by evaluating the optimizer op (including it
        # in the list of returned values for session.run()).
        _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += loss_val

        if step % 2000 == 0:
            if step > 0:
                average_loss /= 2000
            # The average loss is an estimate of the loss over the last 2000 batches.
            print('Average loss at step ', step, ': ', average_loss)
            average_loss = 0

        # Note that this is expensive (~20% slowdown if computed every 500 steps).
        if step % 10000 == 0:
            sim = similarity.eval()
            for i in xrange(valid_size):
                valid_word = reverse_dictionary[valid_examples[i]]
                top_k = 8  # number of nearest neighbors
                nearest = (-sim[i, :]).argsort()[:top_k]
                log_str = 'Nearest to %s:' % valid_word
                for k in xrange(top_k):
                    close_word = reverse_dictionary[nearest[k]]
                    log_str = '%s %s,' % (log_str, close_word)
                print(log_str)
    final_embeddings = normalized_embeddings.eval()


# Step 6: Visualize the embeddings.
def plot_with_labels(low_dim_embs, labels, filename='tsne.png', fonts=None):
    assert low_dim_embs.shape[0] >= len(labels), 'More labels than embeddings'
    plt.figure(figsize=(18, 18))  # in inches
    for i, label in enumerate(labels):
        x, y = low_dim_embs[i, :]
        plt.scatter(x, y)
        plt.annotate(label,
                     fontproperties=fonts,
                     xy=(x, y),
                     xytext=(5, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')
    plt.savefig(filename, dpi=600)


try:
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt
    from matplotlib.font_manager import FontProperties

    # Use a Chinese font so the labels render correctly in the plot.
    font = FontProperties(fname=r"c:\windows\fonts\simsun.ttc", size=14)
    tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
    plot_only = 500
    low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only, :])
    labels = [reverse_dictionary[i] for i in xrange(plot_only)]
    plot_with_labels(low_dim_embs, labels, fonts=font)
except ImportError:
    print('Please install sklearn, matplotlib, and scipy to visualize embeddings.')
```
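The script above only keeps `final_embeddings` in memory. To reuse the vectors later (for the nearest-neighbour check below or the word cloud mentioned at the end) without retraining, they can be saved to disk. A minimal sketch, assuming the variables from the script above; the file names are arbitrary:

```python
import pickle

import numpy as np

# Persist the L2-normalized embedding matrix and the word <-> id mappings.
np.save('word2vec_embeddings.npy', final_embeddings)
with open('word2vec_dictionary.pkl', 'wb') as f:
    pickle.dump({'dictionary': dictionary,
                 'reverse_dictionary': reverse_dictionary}, f)
```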

After roughly three hours of training, t-SNE was used to project the word vectors down to two dimensions for visualization; part of the vocabulary is plotted in the resulting figure.

Checking a few words at random gives the following results:

```
Nearest to 蕭炎: 蕭炎, 他, 韓楓, 林焱, 古元, 蕭厲, 她, 葉重,
Nearest to 靈魂: 靈魂, 鬥氣, 觸手可及, 烏鋼, 探頭探腦, 能量, 莊嚴, 晉階,
Nearest to 火焰: 火焰, 異火, 能量, 黑霧, 火苗, 砸場, 雷雲, 火海,
Nearest to 天階: 天階, 地階, 七品, 相媲美, 斗帝, 碧蛇, 稍有不慎, 玄階,
Nearest to 雲嵐宗: 雲嵐宗, 炎盟, 魔炎谷, 磐門, 丹塔, 蕭家, 葉家, 花宗,
Nearest to 烏坦城: 烏坦城, 加瑪, 大殿, 丹域, 獸域, 大廳, 帝國, 內院,
Nearest to 驚詫: 驚詫, 驚愕, 詫異, 震驚, 驚駭, 驚嘆, 錯愕, 好笑,
```

These are just a few words picked for a quick check, and the results look reasonably good. You can try other interesting words yourself; a small helper for that is sketched below.
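Since the rows of `final_embeddings` are already L2-normalized, a dot product gives cosine similarity, so any word can be queried without rerunning training with a different validation list. A minimal sketch, assuming `final_embeddings`, `dictionary` and `reverse_dictionary` from the script above:

```python
def nearest_words(word, top_k=8):
    """Return the top_k words most similar to `word` by cosine similarity."""
    if word not in dictionary:
        return []
    vec = final_embeddings[dictionary[word]]   # rows are already unit length
    sims = final_embeddings.dot(vec)           # cosine similarity with every word
    nearest = (-sims).argsort()[:top_k]        # includes the word itself, as in the log above
    return [reverse_dictionary[i] for i in nearest]

print(nearest_words('蕭炎'))
```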

The complete program and data are on my GitHub. Next I plan to turn this novel into a word cloud, which should give a more intuitive sense of what the book is about (still a work in progress). For building word clouds, see 如何用Python做中文詞雲.

The above is just what I have learned recently; if anything is wrong, corrections are welcome. Thanks.

The above covers implementing word2vec in TensorFlow only. There is also the excellent gensim library, which ships with a ready-made word2vec implementation that is very convenient to use. A gensim-based walkthrough already exists; see 利用gensim庫訓練word2vec中文模型.
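For comparison, a minimal gensim sketch (not taken from the referenced article; note that parameter names differ across gensim versions, e.g. `size` became `vector_size` in gensim 4.x):

```python
import jieba
from gensim.models import Word2Vec

# gensim expects a list of token lists, so segment the corpus line by line.
with open('doupocangqiong.txt', 'r', encoding='utf-8') as f:
    sentences = [list(jieba.cut(line.strip())) for line in f if line.strip()]

# sg=1 selects skip-gram (sg=0 would be CBOW); window=1 mirrors skip_window above.
model = Word2Vec(sentences, size=128, window=1, min_count=5, sg=1, negative=64)
print(model.wv.most_similar('蕭炎', topn=8))
```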

3. References

  1. 使用Tensorflow實現word2vec模型
  2. Tensorflow機器學習--圖文理解Word2Vec
  3. tensorflow實現中文詞向量訓練
  4. tensorflow實戰 (黃文堅、唐源 著)
  5. word2vec 中的數學原理詳解
  6. 如何用Python做中文詞雲
