Deepcut


A Thai word tokenization library using Deep Neural Network.

[Figure: deepcut model structure]

What's new

  • v0.7.0 Migrate from keras to TensorFlow 2.0
  • v0.6.0 Allow excluding stop words and custom dictionary, updated weight with semi-supervised learning
  • v0.5.2 Better pretrained weight matrix
  • v0.5.1 Faster tokenization by code refactorization
  • examples folder provides starter scripts for the Thai text classification problem
  • DeepcutJS: you can try tokenizing Thai text in a web browser here

Performance

The convolutional neural network is trained on 90% of NECTEC's BEST corpus (which consists of four sections: article, news, novel, and encyclopedia) and tested on the remaining 10%. It is a binary classification model that predicts whether or not a character is the beginning of a word. The results below are calculated from the 'true' class only.

Precision   Recall   F1
97.8%       98.5%    98.1%
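
For reference, here is a minimal sketch of how such character-level scores could be computed with scikit-learn, assuming y_true and y_pred are binary arrays where 1 marks a word-beginning character (the arrays below are made up for illustration):

from sklearn.metrics import precision_recall_fscore_support

# toy ground-truth and predicted labels, 1 = character begins a word
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 0]

# precision, recall and F1 for the 'true' (positive) class only
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average='binary', pos_label=1
)
print(precision, recall, f1)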

Installation

Install the stable release using pip:

pip install deepcut

For the latest development release (recommended):

pip install git+git://github.com/rkcosmos/deepcut.git

We do not include TensorFlow in the automatic installation process because it comes in CPU and GPU versions, and installing the CPU version for everyone might break setups that already have the GPU version installed. Please install TensorFlow yourself by following this guideline.
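
For example, a typical CPU-only setup could be (package names vary across TensorFlow versions and platforms, so check the official guide first):

pip install tensorflow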

Docker

Install Docker on your machine

For Linux:

curl -sSL https://get.docker.com | sudo sh
docker build -t deepcut .

For other OSes, see the Docker installation page.

To run this Docker image:

docker run --rm -it deepcut

This opens a shell where you can play with deepcut.
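
From that shell you can, for example, run the tokenizer directly (a quick check, assuming deepcut is installed in the image built from the repository's Dockerfile):

python -c "import deepcut; print(deepcut.tokenize('ตัดคำได้ดีมาก'))"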

Usage

import deepcut
deepcut.tokenize('ตัดคำได้ดีมาก')

The output is a list of tokens:

['ตัดคำ','ได้','ดี','มาก']

Bag-of-word transformation

We implemented a tokenizer that works similarly to CountVectorizer from scikit-learn. Here is an example usage:

from deepcut import DeepcutTokenizer
tokenizer = DeepcutTokenizer(ngram_range=(1,1),
                             max_df=1.0, min_df=0.0)
X = tokenizer.fit_tranform(['ฉันบินได้', 'ฉันกินข้าว', 'ฉันอยากบิน']) # 3 x 6 CSR sparse matrix
print(tokenizer.vocabulary_) # {'บิน': 0, 'ได้': 1, 'ฉัน': 2, 'อยาก': 3, 'ข้าว': 4, 'กิน': 5}, column index of sparse matrix

X_test = tokenizer.transform(['ฉันกิน', 'ฉันไม่อยากบิน']) # use the built tokenizer vocabulary to transform new text
print(X_test.shape) # 2 x 6 CSR sparse matrix

tokenizer.save_model('tokenizer.pickle') # save the tokenizer to use later

You can load the saved tokenizer for later use:

import deepcut

tokenizer = deepcut.load_model('tokenizer.pickle')
X_sample = tokenizer.transform(['ฉันกิน', 'ฉันไม่อยากบิน'])
print(X_sample.shape) # the same 2 x 6 CSR sparse matrix as X_test
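
Because the result is a standard CSR sparse matrix, it can be fed directly into a scikit-learn estimator. Below is a minimal sketch of the text classification workflow mentioned in the examples folder, assuming you have one label per document (the documents and labels are made up for illustration):

from deepcut import DeepcutTokenizer
from sklearn.linear_model import LogisticRegression

# toy corpus and labels, for illustration only
docs = ['ฉันบินได้', 'ฉันกินข้าว', 'ฉันอยากบิน']
labels = [1, 0, 1]

tokenizer = DeepcutTokenizer(ngram_range=(1, 1))
X = tokenizer.fit_tranform(docs)  # sparse bag-of-words features

clf = LogisticRegression()
clf.fit(X, labels)

# tokenize new text with the same vocabulary and classify it
print(clf.predict(tokenizer.transform(['ฉันอยากกินข้าว'])))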

Custom Dictionary

You can add a custom dictionary by passing the path to a .txt file containing one word per line, like the following:

ขี้เกียจ
โรงเรียน

The file path can be passed as an argument to the tokenize function, e.g.

deepcut.tokenize('ตัดคำได้ดีมาก', custom_dict='/path/to/custom_dict.txt')
deepcut.tokenize('ตัดคำได้ดีมาก', custom_dict=['ดีมาก']) # alternatively, you can provide the custom dictionary as a list of words

Notes

Some text might not be segmented as we would expect (e.g. 'โรงเรียน' -> ['โรง', 'เรียน']). This is because:

  • The BEST corpus (the training data) tokenizes words this way (it uses 'compound words' as a criterion for segmentation)
  • They are unseen/new words. Ideally, this would be cured by a better corpus, but that is not very practical, so we are considering semi-supervised learning to incorporate new examples. A custom dictionary (see the sketch below) can also work around specific cases.
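
For example, the custom dictionary option shown above can be used to keep 'โรงเรียน' as a single token (the output in the comment is what we would expect under that assumption):

import deepcut

# force 'โรงเรียน' to be kept as one word via a custom dictionary
print(deepcut.tokenize('โรงเรียน', custom_dict=['โรงเรียน']))
# expected: ['โรงเรียน'] instead of ['โรง', 'เรียน']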

Any suggestions and comments are welcome; please post them in the issues section.

Contributors

Citations

If you use deepcut in your project or publication, please cite the library as follows:

Rakpong Kittinaradorn, Titipat Achakulvisut, Korakot Chaovavanich, Kittinan Srithaworn,
Pattarawat Chormai, Chanwit Kaewkasi, Tulakan Ruangrong, Krichkorn Oparad.
(2019, September 23). DeepCut: A Thai word tokenization library using Deep Neural Network. Zenodo. http://doi.org/10.5281/zenodo.3457707

or BibTeX entry:

@misc{Kittinaradorn2019,
    author       = {Rakpong Kittinaradorn, Titipat Achakulvisut, Korakot Chaovavanich, Kittinan Srithaworn, Pattarawat Chormai, Chanwit Kaewkasi, Tulakan Ruangrong, Krichkorn Oparad},
    title        = {{DeepCut: A Thai word tokenization library using Deep Neural Network}},
    month        = Sep,
    year         = 2019,
    doi          = {10.5281/zenodo.3457707},
    version      = {1.0},
    publisher    = {Zenodo},
    url          = {http://doi.org/10.5281/zenodo.3457707}
}

Partner Organizations

  • True Corporation

We are open to contributions and collaboration.
