---
author: James
date: 2017-07-25 11:02:42+00:00
post_meta:
- date
preview: /social/0f9c30a5c78f1aa443ef6ca6603efeb50c4f22f7f162b26f5c7c46fb71a1cab4.png
tags:
- machine learning
- python
- topic model
- PhD
- open source
title: Dialect Sensitive Topic Models
type: posts
url: /2017/07/25/dialect-sensitive-topic-models/
---

As part of my PhD I’m currently interested in topic models that can take into account the dialect of the writing. That is, how can we build a model that can compare topics discussed in different dialectal styles, such as scientific papers versus newspaper articles? If you’re new to the concept of topic modelling then [this article][1] can give you a quick primer.

## Vanilla LDA

Vanilla topic models such as [Blei’s LDA][2] are great, but they start to fall down when the wording around one particular concept varies too much. In a scientific paper you might expect to find words like “gastroenteritis”, “stomach” and “virus”, whereas in a newspaper article discussing the same topic you might find “tummy”, “sick” and “bug”. A vanilla LDA implementation might struggle to understand that these concepts are linked unless the contextual information around the words is similar (e.g. both articles contain “uncooked meat” and “symptoms last 24 hours”).

We define a set of toy documents that cover 3 main topics: sickness, health and going to the gym. Half of the documents are written in “layman’s” English and the other half in “scientific” English. The documents are shown below:
```python
doc1 = ["tummy", "ache", "bad", "food", "poisoning", "sick"]
doc2 = ["pulled", "muscle", "gym", "workout", "exercise", "cardio"]
doc3 = ["diet", "exercise", "carbs", "protein", "food", "health"]
doc4 = ["stomach", "muscle", "ache", "food", "poisoning", "vomit", "nausea"]
doc5 = ["muscle", "aerobic", "exercise", "cardiovascular", "calories"]
doc6 = ["carbohydrates", "diet", "food", "ketogenic", "protein", "calories"]
doc7 = ["gym", "food", "gainz", "protein", "cardio", "muscle"]
doc8 = ["stomach", "crunches", "muscle", "ache", "protein"]
doc9 = ["gastroenteritis", "stomach", "vomit", "nausea", "dehydrated"]
doc10 = ["dehydrated", "water", "exercise", "cardiovascular"]
doc11 = ["drink", "water", "daily", "diet", "health"]
```
Using a normal implementation of LDA with 3 topics, we get the following results after 30 iterations:
It is fair to say that vanilla LDA didn’t do a terrible job, but it did end up making some strange decisions, like putting “poisoning” (as in “food poisoning”) in with “cardio” and “calories”. The other two topics seem fairly consistent and sensible.
## DiaTM
Crain et al.’s 2010 paper [_“Dialect topic modeling for improved consumer medical search”_][3] proposes a modified LDA that they call “DiaTM”.
DiaTM works in the same way as LDA but also introduces the concept of collections and dialects. A collection defines a set of documents from the same source or written with a similar dialect – you can imagine having a collection of newspaper articles and a collection of scientific papers for example. Dialects are a bit like topics – each word is effectively “generated” from a dialect and the probability of a dialect being used is defined at collection level.
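The generative story described above can be sketched in a few lines of NumPy. This is purely illustrative shorthand of my own (the variable names, symmetric priors and dimensions are assumptions, not taken from the paper): each word draws a topic from the document’s topic mixture and a dialect from the collection’s dialect mixture, then samples from the word distribution for that topic–dialect pair.

```python
# Illustrative sketch of the DiaTM-style generative process.
# Priors, names and sizes here are my own toy choices.
import numpy as np

rng = np.random.default_rng(0)

V, K, D = 20, 3, 2  # vocabulary size, number of topics, number of dialects

theta = rng.dirichlet(np.ones(K))      # per-document topic mixture
pi = rng.dirichlet(np.ones(D))         # per-collection dialect mixture
phi = rng.dirichlet(np.ones(V), size=(K, D))  # word dist. per topic+dialect

def generate_word():
    z = rng.choice(K, p=theta)   # choose a topic for this word
    d = rng.choice(D, p=pi)      # choose a dialect (set at collection level)
    return rng.choice(V, p=phi[z, d])  # draw a word id from that topic+dialect

words = [generate_word() for _ in range(6)]
```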
The handy thing is that every word has a probability of appearing in every dialect, which is learned by the model. This means that words common to all dialects (such as “diet” or “food”) can be weighted as such in the model.
Running DiaTM on the same corpus as above yields the following results:
You can see how the model has effectively identified the three key topics in the documents above but has also segmented the topics by dialect. Topic 2 is mainly concerned with food poisoning and sickness. In dialect 0 the words “sick” and “bad” appear but in dialect 1 the words “vomit” and “gastroenteritis” appear.
## Open Source Implementation
I have tried to turn my experiment into a Python library that others can make use of. It is currently at an early stage and a little slow, but it works. The code is [available here][4] and pull requests are very welcome.
The library offers a ‘Scikit-Learn-like’ interface where you fit the model to your data like so:
```python
from diatm import DiaTM  # assumed import path for the library linked below

doc1 = ["tummy", "ache", "bad", "food", "poisoning", "sick"]
doc2 = ["pulled", "muscle", "gym", "workout", "exercise", "cardio"]
doc3 = ["diet", "exercise", "carbs", "protein", "food", "health"]
doc4 = ["stomach", "muscle", "ache", "food", "poisoning", "vomit", "nausea"]
doc5 = ["muscle", "aerobic", "exercise", "cardiovascular", "calories"]
doc6 = ["carbohydrates", "diet", "food", "ketogenic", "protein", "calories"]
doc7 = ["gym", "food", "gainz", "protein", "cardio", "muscle"]
doc8 = ["stomach", "crunches", "muscle", "ache", "protein"]
doc9 = ["gastroenteritis", "stomach", "vomit", "nausea", "dehydrated"]
doc10 = ["dehydrated", "water", "exercise", "cardiovascular"]
doc11 = ["drink", "water", "daily", "diet", "health"]

# 'layman's' documents
collection1 = [doc1, doc2, doc3, doc7, doc11]

# 'scientific' documents
collection2 = [doc4, doc5, doc6, doc8, doc9, doc10]

collections = [collection1, collection2]

dtm = DiaTM(n_topic=3, n_dialects=2)
dtm.fit(collections)
```
Fitting the model to new documents using transform() will be available soon, as will finding the log probability of the current model parameters.
[1]: http://www.kdnuggets.com/2016/07/text-mining-101-topic-modeling.html
[2]: http://dl.acm.org/citation.cfm?id=2133826
[3]: http://www.ncbi.nlm.nih.gov/pubmed/21346955
[4]: https://github.com/ravenscroftj/diatm