【NLP】A Kaggle Sentiment Analysis Competition

I've been working on this Kaggle project over the past few days: Bag of Words Meets Bags of Popcorn.

The goal of this project was to learn how to use the word2vec model and to get a grip on ensemble methods. I found an existing project and built my changes on top of it; the original is here: pangolulu/sentiment-analysis. Along the way I also updated the Python 2 code to Python 3. Before this project, I did a more basic word2vec project on the same dataset: word2vec-movies. You may want to go through word2vec-movies first and then come back to this project.

My notes describe in some detail how to use the doc2vec model, and after going through a lot of material (most of whose code doesn't actually run), I got it working properly. If you want to learn how to use doc2vec, they should be of help.

https://github.com/BrambleXu/sentiment-analysis

To state the conclusion first: the best score I achieved here is 0.89, short of the original author's 0.96. The author leaves many things unexplained, such as how the feature_chi.txt file in data was generated, or how sentence_vector_org.txt was produced. Also, the author trained word2vec with C code; since I'm not familiar with that, I deleted all of it and reimplemented it myself from other references. My own approach may well be why I can't reach 0.96. If you use this project and do manage to reach 0.96, please let me know where I went wrong.

For the most concise implementation, see the .py files. If anything there is unclear, consult the notebooks. The notebooks are rather verbose and can be a bit inconvenient to read, but they contain a lot of explanatory commentary, which should help in understanding the code.

I'm still not satisfied with the results of this project, so I plan to move on to a more recent Kaggle NLP competition to continue learning. If you spot anything unreasonable in my code, or have suggestions for improvement, issues and PRs are welcome.

  • Part 1 Shallow Model (verbose version)
  • Part 1.2 Shallow Model Prob (concise version)
  • Part 2 Doc2vec (implementation of the original method, verbose version)
  • Part 2.5 Doc2vec (experiments with different model parameters)
  • Part 2.9 doc2vec_prob (concise version)
  • Part 3.2 combine
  • Part 3.5 ensemble

Usage

The three models are stored in three folders under Sentiment/src/: bow, doc2vec, and ensemble. For the details of preprocessing, model building, and prediction, see the contents of those folders.

Run from the project root:

  • python Sentiment/src/bow/runBow.py
  • python Sentiment/src/doc2vec/doc2vec_lr.py
  • python Sentiment/src/ensemble/ensemble.py

requirements

python==3.5
pandas==0.21.0
numpy==1.13.3
jupyter==1.0.0
scipy==0.19.1
scikit-learn==0.19.0
nltk==3.2.1
gensim==2.2.0

The English text below comes from the original author's project; the interspersed notes (originally in Chinese) are my additions.

sentiment-classification

The Kaggle challenge "Bag of Words Meets Bags of Popcorn". Ranked 57th out of 578, with a precision of 0.96255.

The website is kaggle.com/c/word2vec-n.

Method

My method has three parts: learning a shallow model, learning a deep model, and then combining the two to train an ensemble model.

Shallow Model

The method uses a bag-of-words model, which represents a sentence or document as a vector of words. But because the sentences contain a lot of noise, I apply a feature selection step, adopting the chi-square statistic. This yields a feature vector that is more relevant to the classification label. I then use the TF-IDF score as each dimension of the feature vector. Even after feature selection, the dimension of the feature vector is still very high (I use 19,000 features in the model), so I use logistic regression with L1 regularization to train the classification model. The process of training the shallow model is as follows. I call this method BOW.
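
A minimal sketch of this pipeline using scikit-learn (the repository's real scripts live under Sentiment/src/bow/; the variable names and data loading here are illustrative assumptions, not the author's exact code):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# train_texts / train_labels / test_texts are assumed to be loaded
# from the Kaggle TSV files (labeledTrainData.tsv, testData.tsv).
bow_model = Pipeline([
    ("counts", CountVectorizer()),            # raw bag-of-words counts
    ("chi2", SelectKBest(chi2, k=19000)),     # chi-square feature selection
    ("tfidf", TfidfTransformer()),            # TF-IDF weights on the kept features
    ("lr", LogisticRegression(penalty="l1", solver="liblinear")),  # L1-regularized LR
])
bow_model.fit(train_texts, train_labels)
bow_probs = bow_model.predict_proba(test_texts)[:, 1]  # P(positive) per review
```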

Why do I call this model shallow? Mainly because it is based on bag-of-words, which only extracts shallow word-frequency statistics from the sentence and does not involve its syntax or semantics. Next I introduce a deep model that can capture more of a sentence's meaning.

The version I implemented ultimately scores 0.88.

Deep Model

Recently, Le & Mikolov proposed an unsupervised method to learn distributed representations of words and paragraphs. The key idea is to learn a compact representation of a word or paragraph by predicting nearby words in a fixed context window. This captures co-occurrence statistics, and the learned embeddings of words and paragraphs capture rich semantics: synonymous words and similar paragraphs tend to be surrounded by similar contexts, and are therefore mapped to nearby feature vectors (and vice versa). I call this method Doc2Vec. Doc2Vec is a neural-network-like method, but it contains no hidden layers, and a softmax layer is the output. To avoid the high time complexity of the softmax output layer, the authors propose hierarchical softmax based on a Huffman tree. The architecture of the model is as follows.

Such embeddings can then be used to represent sentences or paragraphs, and can serve as input to a classifier. In my method, I first train 200-dimensional paragraph vectors, and then adopt an SVM classifier with an RBF kernel.

The process of training the deep model is as follows.

The best score for this model is 0.87, using 100-dimensional doc2vec vectors with an SVM or logistic regression classifier. Training the SVM takes a long time; switching from SVM to logistic regression makes little difference to the result. The original author wrote the word2vec training part in C; I deleted all of it and reimplemented it myself, mainly using the doc2vec model from gensim. This model outputs one vector per passage of text, which is very convenient for sentiment analysis, but the official documentation is poorly written and I had to figure most of it out from other sources; a sketch of the step follows below. Two good references: A Gentle Introduction to Doc2Vec and word2vec-sentiments.
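
A minimal gensim sketch of this step (assumptions: parameter names follow gensim 4.x, i.e. vector_size, epochs, and dv, whereas the pinned gensim==2.2.0 used size and docvecs; the tokenization and variable names are illustrative; hs=1 selects the hierarchical softmax described above):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

# reviews: list of token lists; labels: 0/1 sentiment (assumed loaded elsewhere)
docs = [TaggedDocument(words=tokens, tags=[i]) for i, tokens in enumerate(reviews)]

# 100-dimensional paragraph vectors, matching the score reported above;
# hs=1, negative=0 uses hierarchical softmax instead of negative sampling
model = Doc2Vec(docs, vector_size=100, window=10, min_count=2,
                hs=1, negative=0, epochs=20, workers=4)

# one learned vector per training document, used as classifier input
X = [model.dv[i] for i in range(len(docs))]
clf = LogisticRegression().fit(X, labels)

# unseen text must be embedded with infer_vector before prediction
vec = model.infer_vector("tokenized words of an unseen review".split())
print(clf.predict_proba([vec])[:, 1])
```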

Ensemble Model

The ensemble model combines the two methods above (BOW and Doc2Vec). In practice, an ensemble almost always achieves higher precision than a single model, and the more diverse the base models, the better the ensemble performs; so combining the shallow model and the deep model is reasonable. Rather than just averaging the outputs of the two base models, I feed their outputs as input to another classifier. The architecture of the ensemble model is as follows.

And in the L2-level learning, I use logistic regression.
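
A minimal sketch of this stacking step (assuming bow_probs and d2v_probs hold each base model's positive-class probabilities; in practice these L1 outputs should be out-of-fold predictions so the L2 model is not fit on leaked training labels):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# L1 outputs: each base model's P(positive) per review, stacked as two features
X_l2 = np.column_stack([bow_probs, d2v_probs])
meta = LogisticRegression().fit(X_l2, labels)

# at test time, combine the base models' test-set probabilities the same way
X_l2_test = np.column_stack([bow_test_probs, d2v_test_probs])
final_probs = meta.predict_proba(X_l2_test)[:, 1]
```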

The ensemble gives the highest score: 0.89.

Below is a diagram I drew based on the code, which should make the ensemble procedure easier to understand.

