<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge"><title>SSSplit Improvements - Brainsteam</title><meta name="viewport" content="width=device-width, initial-scale=1">
<meta itemprop="name" content="SSSplit Improvements">
<meta itemprop="description" content="Introduction As part of my continuing work on Partridge, Ive been working on improving the sentence splitting capability of SSSplit the component used to split academic papers from PLosOne and PubMedCentral into separate sentences.
Papers arrive in our system as big blocks of text with the occasional diagram, formula or diagram and in order to apply CoreSC annotations to the sentences we need to know where each sentence starts and ends."><meta itemprop="datePublished" content="2015-07-15T19:33:29&#43;00:00" />
<meta itemprop="dateModified" content="2015-07-15T19:33:29&#43;00:00" />
<meta itemprop="wordCount" content="1153">
<meta itemprop="keywords" content="demo,improvements,java,partridge,python,regex,sapienta,split,sssplit,test," /><meta property="og:title" content="SSSplit Improvements" />
<meta property="og:description" content="Introduction As part of my continuing work on Partridge, Ive been working on improving the sentence splitting capability of SSSplit the component used to split academic papers from PLosOne and PubMedCentral into separate sentences.
Papers arrive in our system as big blocks of text with the occasional diagram, formula or diagram and in order to apply CoreSC annotations to the sentences we need to know where each sentence starts and ends." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://brainsteam.co.uk/2015/07/15/sssplit-improvements/" /><meta property="article:section" content="posts" />
<meta property="article:published_time" content="2015-07-15T19:33:29&#43;00:00" />
<meta property="article:modified_time" content="2015-07-15T19:33:29&#43;00:00" />
<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="SSSplit Improvements"/>
<meta name="twitter:description" content="Introduction As part of my continuing work on Partridge, Ive been working on improving the sentence splitting capability of SSSplit the component used to split academic papers from PLosOne and PubMedCentral into separate sentences.
Papers arrive in our system as big blocks of text with the occasional diagram, formula or diagram and in order to apply CoreSC annotations to the sentences we need to know where each sentence starts and ends."/>
<link href='https://fonts.googleapis.com/css?family=Playfair+Display:700' rel='stylesheet' type='text/css'>
<link rel="stylesheet" type="text/css" media="screen" href="https://brainsteam.co.uk/css/normalize.css" />
<link rel="stylesheet" type="text/css" media="screen" href="https://brainsteam.co.uk/css/main.css" />
<link id="dark-scheme" rel="stylesheet" type="text/css" href="https://brainsteam.co.uk/css/dark.css" />
<script src="https://brainsteam.co.uk/js/feather.min.js"></script>
<script src="https://brainsteam.co.uk/js/main.js"></script>
</head>
<body>
<div class="container wrapper">
<div class="header">
<div class="avatar">
<a href="https://brainsteam.co.uk/">
<img src="/images/avatar.png" alt="Brainsteam" />
</a>
</div>
<h1 class="site-title"><a href="https://brainsteam.co.uk/">Brainsteam</a></h1>
<div class="site-description"><p>The irregular mental expulsions of a PhD student and CTO of Filament, my views are my own and do not represent my employers in any way.</p><nav class="nav social">
<ul class="flat"><li><a href="https://twitter.com/jamesravey/" title="Twitter" rel="me"><i data-feather="twitter"></i></a></li><li><a href="https://github.com/ravenscroftj" title="Github" rel="me"><i data-feather="github"></i></a></li><li><a href="/index.xml" title="RSS" rel="me"><i data-feather="rss"></i></a></li></ul>
</nav></div>
<nav class="nav">
<ul class="flat">
<li>
<a href="/">Home</a>
</li>
<li>
<a href="/tags">Tags</a>
</li>
<li>
<a href="https://jamesravey.me">About Me</a>
</li>
</ul>
</nav>
</div>
<div class="post">
<div class="post-header">
<div class="meta">
<div class="date">
<span class="day">15</span>
<span class="rest">Jul 2015</span>
</div>
</div>
<div class="matter">
<h1 class="title">SSSplit Improvements</h1>
</div>
</div>
<div class="markdown">
<h2 id="introduction">Introduction</h2>
<p>As part of my continuing work on <a href="http://papro.org.uk">Partridge</a>, I've been working on improving the sentence splitting capability of SSSplit, the component used to split academic papers from PLoS ONE and PubMed Central into separate sentences.</p>
<p>Papers arrive in our system as big blocks of text with the occasional diagram, formula or citation, and in order to apply CoreSC annotations to the sentences we need to know where each sentence starts and ends. Of course, that means we also have to take into account the other stuff (listed above) floating around in the documents too. We can't just ignore formulae and citations; they're pretty important! That's what SSSplit does: it carves up papers into sentence (<em>&lt;s&gt;</em>) elements whilst also leaving the XML structure of the rest of the document intact.</p>
<p>The original SSSplit utility was written a number of years ago in Java and used regular expressions to parse XML (something that readers of <a href="http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454">this StackOverflow answer</a> will already know has a propensity to summon eldritch abominations from the otherworld). The old splitter was not particularly performant, especially given the complex nature of some of the expressions (if you're interested, check out one of the <em>simpler</em> ones <a href="https://www.debuggex.com/r/vEyxqRg6xgN9ui_P">here</a>).</p>
<p>Now, I can definitely see what the original authors were going for here. Regular expressions are very good for splitting sentences, but not sentences inside complex XML documents. XML parsers are not particularly good for splitting sentences but are obviously good at parsing XML. I also understand that the original splitter was designed and then had new bits glued on to make it suitable for new and different standards of XML, leading to gargantuan expressions like the one linked above. I think they did a pretty good job given the information available to them at the time of writing.</p>
<p>I decided that the splitter needed a rewrite and went straight to my comfort zone to get it done: Python. I'm very familiar with the language, to the point that I can write a fairly complicated program in it in a day if I've had enough coffee and sugar.</p>
<h2 id="writing-sssplit-20">Writing SSSplit 2.0</h2>
<p>I decided that we needed to minimise excessive use of regular expressions, for both performance and maintenance/readability reasons, and to do as much of the parsing of the document structure as possible using a traditional XML parser. I'd heard good things about <a href="https://docs.python.org/2/library/xml.etree.elementtree.html">etree</a>, which is part of the standard Python library and provides an informal DOM-like interface. I used etree to inspect what I dubbed P-level XML elements first. These are elements that I consider to be at a “paragraph” level. Any sentences inside these elements are completely contained: they do not cross the boundaries into the next container (unless the author is a poet/fiction writer/doesn't do English very well, I think it's a safe bet that they wouldn't finish a paragraph mid-sentence). Within the p-level containers, I sweep for any sort of XML node: we're interested in text nodes but also any sort of formatting like bold (&lt;b&gt;) elements.</p>
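<p>A minimal sketch of that sweep, assuming the standard library's ElementTree and a hypothetical set of p-level tag names (the real list of containers in SSSplit will differ):</p>
<pre lang="python">import xml.etree.ElementTree as ET

# hypothetical p-level containers, for illustration only
P_LEVEL_TAGS = {'p', 'title', 'abstract'}

tree = ET.parse('paper.xml')
for elem in tree.iter():
    if elem.tag not in P_LEVEL_TAGS:
        continue
    # elem.text is the leading text node; each child (e.g. a bold
    # element) may also carry trailing text in its .tail attribute
    pieces = [elem.text] + [child.tail for child in elem]
    print(elem.tag, [p for p in pieces if p])
</pre>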
<p>When a text node is encountered, that's when regular expressions kick in. We do a very simple match for punctuation just in front of whitespace and a capital letter and run it over the text node; these are “potential” splits. We also look for full stops at the very end of the text.</p>
<pre lang="python">pattern = re.compile('(\.|\?|\!)(?=\s*[A-Z0-9$])|\.$')
m = pattern.search(txt)
</pre>
<p>Of course, this generates lots of false positives: what if we've found a decimal point inside a number? What if it's an abbreviation like e.g. or i.e., or an initial like J. Ravenscroft? There is another regular expression check for decimal points, and the string around the punctuation is checked against a list of common abbreviations. There's also a list of authors, both the writers of the paper in question and those who are cited in it. The function checks that the full stop is not part of one of these authors' names.</p>
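<p>The filtering logic is roughly as follows; this is a sketch under assumptions (the abbreviation list, function name and signature are illustrative, not the actual SSSplit code):</p>
<pre lang="python"># illustrative subset of the abbreviation list
ABBREVIATIONS = ('e.g.', 'i.e.', 'et al.', 'fig.')

def is_real_split(txt, idx, author_initials):
    """Return True if the punctuation at txt[idx] looks like a genuine sentence end."""
    # reject decimal points inside numbers, e.g. the "." in "3.14"
    if 0 &lt; idx &lt; len(txt) - 1 and txt[idx - 1].isdigit() and txt[idx + 1].isdigit():
        return False
    before = txt[:idx + 1].lower()
    # reject common abbreviations such as "e.g." and "i.e."
    if before.endswith(ABBREVIATIONS):
        return False
    # reject full stops that belong to an author's initials, e.g. the "." in "J. Ravenscroft"
    if any(before.endswith(initial.lower()) for initial in author_initials):
        return False
    return True
</pre>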
<p>There's an important factor to remember: a text node does not imply a finished sentence; text nodes are interspersed with formulae and references, as explained above. Therefore we can't just finish the current sentence when we reach the end of a text node, only when we encounter a full stop (not part of an abbreviation or number), question mark or exclamation mark. We also know that we can complete the current sentence at the end of a p-level container, as I explained above.</p>
<p>Every time we start parsing a sentence, text nodes and other stuff deemed to be inside that sentence are accumulated into a list. Once we encounter the end of the sentence, the list is glued together and turned into an XML &lt;s&gt; element.</p>
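<p>In outline, the accumulation works something like this (a simplified sketch assuming text-only pieces; as noted above, the real splitter also carries non-text nodes such as formulae and references into the sentence):</p>
<pre lang="python">import xml.etree.ElementTree as ET

current = []  # pieces of the sentence currently being built

def finish_sentence(parent):
    """Glue the accumulated pieces into a single &lt;s&gt; element under parent."""
    s = ET.SubElement(parent, 's')
    s.text = ''.join(current)
    current.clear()
</pre>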
<p>The next step was to see how effective the new splitter was against the old splitter and also manual annotation by professional scientific literature readers.</p>
<h2 id="testing-the-splitter">Testing the splitter</h2>
<p>To test the system I originally wrote a simple script that takes a set of manually annotated papers, strips them of their annotations so that the new splitter doesn't get any clues, runs the new routine over them and then compares the output. This was very rudimentary, as I was in a rush, and didn't tell me much about the success rate of my splitter. It did display the first and last words of each “detected” sentence for both manual and automatic annotation, so I could at least see how well (if at all) the two lined up. I had to run the script on a paper-by-paper basis.</p>
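<p>Conceptually, that boils down to pairing up the two lists of sentences and printing their endpoints; something like this sketch (not the original script):</p>
<pre lang="python">def compare(manual_sentences, auto_sentences):
    """Print the first and last word of each sentence pair so misalignments stand out."""
    for manual, auto in zip(manual_sentences, auto_sentences):
        m, a = manual.split(), auto.split()
        marker = 'OK  ' if (m[0], m[-1]) == (a[0], a[-1]) else 'DIFF'
        print(marker, m[0], '...', m[-1], '|', a[0], '...', a[-1])
</pre>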
<p>I managed to get the splitter working really well on a number of papers (we're talking a 100% match) using this tool. However, I realised that the majority of papers were still not being matched, and it was becoming more and more of a chore to find which ones weren't matching.</p>
<p>That's why I decided to write a web-based visualisation tool for checking the splitter. The idea is that it runs on all papers, giving an overall percentage of how well the automated splitter is doing versus the manual splitter, but also gives a per-paper figure. If you want to see which papers the system is really struggling with, you can inspect them by clicking on them. This brings up a list of all the sentences and whether or not they align.</p>
<p>The tool is pretty useful as it gives me a clue as to which papers I need to tune the splitter with next.</p>
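<p>The headline percentage is, in essence, just the proportion of sentences that line up; a minimal sketch of such a score (an assumption for illustration, not taken from the tool's code):</p>
<pre lang="python">def match_score(manual, auto):
    """Percentage of sentence pairs whose boundaries agree between the two splits."""
    matched = sum(1 for m, a in zip(manual, auto) if m == a)
    return 100.0 * matched / max(len(manual), len(auto), 1)
</pre>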
<p>Here's a quick demo video of me using the tool to find papers that don't match very well.</p>
<div class="jetpack-video-wrapper">
<span class="embed-youtube" style="text-align:center; display: block;"><iframe class='youtube-player' width='660' height='372' src='https://www.youtube.com/embed/o1EpJ_zJcno?version=3&#038;rel=1&#038;showsearch=0&#038;showinfo=1&#038;iv_load_policy=1&#038;fs=1&#038;hl=en-US&#038;autohide=2&#038;wmode=transparent' allowfullscreen='true' style='border:0;' sandbox='allow-scripts allow-same-origin allow-popups allow-presentation'></iframe></span>
</div>
<h2 id="next-steps">Next steps</h2>
<p>A lot of tuning has been done on how this system works, but there's still a long way to go yet. I'll probably post another article talking about what further changes had to be made to make the parser effective!</p>
</div>
<div class="tags">
<ul class="flat">
<li><a href="/tags/demo">demo</a></li>
<li><a href="/tags/improvements">improvements</a></li>
<li><a href="/tags/java">java</a></li>
<li><a href="/tags/partridge">partridge</a></li>
<li><a href="/tags/python">python</a></li>
<li><a href="/tags/regex">regex</a></li>
<li><a href="/tags/sapienta">sapienta</a></li>
<li><a href="/tags/split">split</a></li>
<li><a href="/tags/sssplit">sssplit</a></li>
<li><a href="/tags/test">test</a></li>
</ul>
</div><div id="disqus_thread"></div>
<script type="text/javascript">
(function () {
if (window.location.hostname == "localhost")
return;
var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
var disqus_shortname = 'brainsteam';
dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
</script>
<noscript>Please enable JavaScript to view the comments powered by Disqus.</noscript>
<a href="http://disqus.com/" class="dsq-brlink">comments powered by <span class="logo-disqus">Disqus</span></a>
</div>
</div>
<div class="footer wrapper">
<nav class="nav">
<div>2021 © James Ravenscroft | <a href="https://github.com/knadh/hugo-ink">Ink</a> theme on <a href="https://gohugo.io">Hugo</a></div>
</nav>
</div>
<script type="application/javascript">
var doNotTrack = false;
if (!doNotTrack) {
window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)};ga.l=+new Date;
ga('create', 'UA-186263385-1', 'auto');
ga('send', 'pageview');
}
</script>
<script async src='https://www.google-analytics.com/analytics.js'></script>
<script>feather.replace()</script>
</body>
</html>