<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge"><title>Serving NLP Models with MLflow - Brainsteam</title><meta name="viewport" content="width=device-width, initial-scale=1">
<meta itemprop="name" content="Serving NLP Models with MLflow">
<meta itemprop="description" content="Serving NLP models with MLflow is a little trickier than serving models expecting tabular input. In this post we explore one possible solution with code examples."><meta itemprop="datePublished" content="2020-12-29T09:50:28+00:00" />
<meta itemprop="dateModified" content="2020-12-29T09:50:28+00:00" />
<meta itemprop="wordCount" content="1827"><meta itemprop="image" content="https://brainsteam.co.uk/2020/12/29/serving-nlp-models-with-mlflow/images/feature.jpg">
<meta itemprop="keywords" content="machine-learning,python,ai,devops,mlops,nlp,spacy," /><meta property="og:title" content="Serving NLP Models with MLflow" />
<meta property="og:description" content="Serving NLP models with MLflow is a little trickier than serving models expecting tabular input. In this post we explore one possible solution with code examples." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://brainsteam.co.uk/2020/12/29/serving-nlp-models-with-mlflow/" /><meta property="og:image" content="https://brainsteam.co.uk/2020/12/29/serving-nlp-models-with-mlflow/images/feature.jpg"/><meta property="article:section" content="posts" />
<meta property="article:published_time" content="2020-12-29T09:50:28+00:00" />
<meta property="article:modified_time" content="2020-12-29T09:50:28+00:00" />
<meta name="twitter:card" content="summary_large_image"/>
<meta name="twitter:image" content="https://brainsteam.co.uk/2020/12/29/serving-nlp-models-with-mlflow/images/feature.jpg"/>
<meta name="twitter:title" content="Serving NLP Models with MLflow"/>
<meta name="twitter:description" content="Serving NLP models with MLflow is a little trickier than serving models expecting tabular input. In this post we explore one possible solution with code examples."/>
<link href='https://fonts.googleapis.com/css?family=Playfair+Display:700' rel='stylesheet' type='text/css'>
<link rel="stylesheet" type="text/css" media="screen" href="https://brainsteam.co.uk/css/normalize.css" />
<link rel="stylesheet" type="text/css" media="screen" href="https://brainsteam.co.uk/css/main.css" />
<link id="dark-scheme" rel="stylesheet" type="text/css" href="https://brainsteam.co.uk/css/dark.css" />
<script src="https://brainsteam.co.uk/js/feather.min.js"></script>
<script src="https://brainsteam.co.uk/js/main.js"></script>
</head>
<body>
<div class="container wrapper">
<div class="header">
<div class="avatar">
<a href="https://brainsteam.co.uk/">
<img src="/images/avatar.png" alt="Brainsteam" />
</a>
</div>
<h1 class="site-title"><a href="https://brainsteam.co.uk/">Brainsteam</a></h1>
<div class="site-description"><p>The irregular mental expulsions of a PhD student and CTO of Filament; my views are my own and do not represent my employers in any way.</p><nav class="nav social">
<ul class="flat"><li><a href="https://twitter.com/jamesravey/" title="Twitter" rel="me"><i data-feather="twitter"></i></a></li><li><a href="https://github.com/ravenscroftj" title="Github" rel="me"><i data-feather="github"></i></a></li><li><a href="/index.xml" title="RSS" rel="me"><i data-feather="rss"></i></a></li></ul>
</nav></div>
<nav class="nav">
<ul class="flat">
<li>
<a href="/">Home</a>
</li>
<li>
<a href="/tags">Tags</a>
</li>
<li>
<a href="https://jamesravey.me">About Me</a>
</li>
</ul>
</nav>
</div>
<div class="post">
<div class="post-header">
<div class="meta">
<div class="date">
<span class="day">29</span>
<span class="rest">Dec 2020</span>
</div>
</div>
<div class="matter">
<h1 class="title">Serving NLP Models with MLflow</h1>
</div>
</div>
<div class="markdown">
<figure>
<img src="images/feature.jpg"/>
</figure>
<p><a href="https://www.mlflow.org/">MLflow</a> is a powerful open source MLOps platform with a <a href="https://www.mlflow.org/docs/latest/models.html#deploy-mlflow-models">built-in framework for serving your trained ML models as REST APIs</a>. The REST framework loads data provided in a JSON or CSV format compatible with <a href="https://pandas.pydata.org/">pandas</a> and passes it directly into your model. This can be handy when your model is expecting a tabular list of numerical and categorical features. However, it is less clear how to serve models and pipelines that expect unstructured text data as their primary input. In this post we will explore how to train and then serve an NLP model using MLflow, <a href="https://scikit-learn.org/">scikit-learn</a> and <a href="https://spacy.io/">spaCy</a>.</p>
<h2 id="what-youll-need-and-installing-dependencies">What you’ll need and Installing dependencies</h2>
|
|
<p>In order to use MLFlow and to train our NLP model you’re going to need Python 3.6+. I’m a big fan of using <a href="https://docs.conda.io/en/latest/miniconda.html">miniconda</a> to manage Python dependencies and MLFlow uses conda to manage ML server environments. Therefore, it’s the logical choice for managing our project and for the remainder of this post I will provide instructions for this. If you’re handy with pip or pip-based dependency managers like <a href="https://python-poetry.org/">Poetry</a> or <a href="https://pypi.org/project/pipenv/">pipenv</a> then you should find its easy enough to follow along but YMMV (especially when it comes to the environments MLFlow generates).</p>
|
|
<p>First I’m going to create a new conda environment with the requirements we need installed already:</p>
|
|
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">conda create -n mlflow-nlp-model -c conda-forge python==3.7 pandas scikit-learn mlflow spacy pip notebook
|
|
</code></pre></div><p>This may take a couple of minutes to resolve but you should be able to accept (type ‘y’ when prompted) and wait for conda to download and install the requirements.</p>
|
|
<p>Now we can activate our environment by running <code>conda activate mlflow-nlp-model</code></p>
|
|
<h2 id="collecting-and-preparing-our-data">Collecting and preparing our data</h2>
|
|
<p>We are going to train a model to classify email messages from the <a href="https://scikit-learn.org/stable/datasets/real_world.html#the-20-newsgroups-text-dataset">20 newsgroups</a> dataset provided as part of Scikit learn. Of course the techniques we use here could be applied to other real world datasets too.</p>
|
|
<p>Firstly (assuming you have a jupyter notebook or Python prompt ready), we’re going to download the data and turn it into a Pandas dataframe:</p>
|
|
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python"><span style="color:#00f">from</span> sklearn.datasets <span style="color:#00f">import</span> fetch_20newsgroups
|
|
<span style="color:#00f">import</span> pandas <span style="color:#00f">as</span> pd
|
|
|
|
<span style="color:#00f">def</span> df_from_20ng(subset):
|
|
newsgroups_train = fetch_20newsgroups(subset=<span style="color:#a31515">'train'</span>)
|
|
ngdata = {<span style="color:#a31515">"text"</span>: newsgroups_train.data, <span style="color:#a31515">"target"</span>: newsgroups_train.target}
|
|
df = pd.DataFrame.from_dict(ngdata)
|
|
df[<span style="color:#a31515">'target_name'</span>] = df.target.apply(<span style="color:#00f">lambda</span> x: newsgroups_train.target_names[x])
|
|
|
|
<span style="color:#00f">return</span> df
|
|
|
|
|
|
df_train = df_from_20ng(<span style="color:#a31515">'train'</span>)
|
|
df_test = df_from_20ng(<span style="color:#a31515">'test'</span>)
|
|
|
|
X_train = df_train.drop(columns=[<span style="color:#a31515">'target'</span>,<span style="color:#a31515">'target_name'</span>])
|
|
y_train = df_train[<span style="color:#a31515">'target_name'</span>]
|
|
X_test = df_test.drop(columns=[<span style="color:#a31515">'target'</span>,<span style="color:#a31515">'target_name'</span>])
|
|
y_test = df_test[<span style="color:#a31515">'target_name'</span>]
|
|
</code></pre></div><p>The above code will automatically fetch the example dataset from scikit learn’s servers (or use a local cache after the first time you run it). We iterate over the data and load it as a Pandas dataframe.</p>
|
|
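<p>If you want to sanity-check what came back, a quick peek at the training dataframe is enough (this is just an illustrative check and not required for anything later in the post):</p>
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python"># ~11k rows and 3 columns (text, target, target_name) for the train subset
print(df_train.shape)

# rough class balance across the 20 newsgroup labels
print(df_train.target_name.value_counts().head())
</code></pre></div>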
<p>The data is already conveniently partitioned into <em>train</em> and <em>test</em> sets, but if you are using your own data you could generate a single dataframe and then use <a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html">train_test_split()</a> to partition it - this function works fine on dataframes.</p>
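<p>For example, if your own data lived in a single dataframe with <code>text</code> and <code>target_name</code> columns (hypothetical names, just for illustration), a minimal sketch might look like this:</p>
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">from sklearn.model_selection import train_test_split

# split one dataframe into train and test partitions, keeping the class balance similar
df_train, df_test = train_test_split(df, test_size=0.2, stratify=df['target_name'], random_state=42)

# double square brackets keep X as a one-column dataframe rather than a Series
X_train, y_train = df_train[['text']], df_train['target_name']
X_test, y_test = df_test[['text']], df_test['target_name']
</code></pre></div>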
<p>We end up with <code>X_train</code> and <code>X_test</code>, which are pandas dataframes containing just the text from each email, and <code>y_train</code> and <code>y_test</code>, which contain the corresponding ground-truth labels for the emails.</p>
<p>You might have noticed that our <code>X_train</code> and <code>X_test</code> dataframes only contain one column, and you might wonder why we bother using a dataframe here when a one-dimensional array or list would suffice. The reason is that using a dataframe with a named column makes it possible to pass CSV and JSON data straight to the REST API - hopefully this will become clearer below.</p>
<h2 id="defining-our-ml-pipeline">Defining our ML pipeline</h2>
|
|
<p>The next step is to define our feature transformer and model pipeline. We’re going to use Scikit-learn’s <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html">Pipeline</a> construct which allows us to easily define the components that we want to chain together.</p>
|
|
<p>For our first experiment we are going to keep things simple by using a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html">TF-IDF Vectorizer</a> which models each word (up to a vocabulary limit) as a separate sparse feature and takes into account the ratio of each word’s term frequency (how many times it appears in a document) divided by word document frequency (how many documents each word appears in). You can read more about TF-IDF in the <a href="https://scikit-learn.org/stable/modules/feature_extraction.html#tfidf-term-weighting">scikit-learn documentation</a>. TF-IDF is older and simpler than current state of the art feature extraction methods but it can often work well as a lightweight baseline for text representation. We’ll look at more complicated techniques in our next experiment.</p>
|
|
<p>We’re also going to use a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html">RandomForestClassifier</a> for our classification model. Again, RF models serve as a relatively low-compute-intensity baseline and a starting point for our modelling.</p>
|
|
<p>The final component that you may not recognise is the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html">ColumnTransformer</a>. This provides a user friendly way for scikit-learn to interact with pandas dataframes and it offers some very powerful matching for larger data frames. In this case we are just using it to extract the <code>text</code> column from the emails which is then passed to our TFIDF Vectorizer for feature extraction and finally to the classifier for training or prediction.</p>
|
|
<p>The code looks like this:</p>
|
|
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python"><span style="color:#00f">from</span> sklearn.compose <span style="color:#00f">import</span> ColumnTransformer
|
|
<span style="color:#00f">from</span> sklearn.feature_extraction.text <span style="color:#00f">import</span> TfidfVectorizer
|
|
<span style="color:#00f">from</span> sklearn.pipeline <span style="color:#00f">import</span> Pipeline
|
|
<span style="color:#00f">from</span> sklearn.ensemble <span style="color:#00f">import</span> RandomForestClassifier
|
|
|
|
ct = ColumnTransformer([
|
|
(<span style="color:#a31515">'tfidf'</span>, TfidfVectorizer(max_features=5000), <span style="color:#a31515">'text'</span>)
|
|
])
|
|
|
|
pipe = Pipeline([
|
|
(<span style="color:#a31515">'ctransformer'</span>, ct),
|
|
(<span style="color:#a31515">'clf'</span>, RandomForestClassifier(n_estimators=10, max_depth=20))
|
|
])
|
|
|
|
</code></pre></div><p>Next we can train our model and log it and our initial evaluation metrics to MLFLow:</p>
|
|
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python"><span style="color:#00f">import</span> mlflow
|
|
<span style="color:#00f">import</span> mlflow.sklearn
|
|
<span style="color:#00f">import</span> json
|
|
<span style="color:#00f">import</span> os
|
|
<span style="color:#00f">import</span> tempfile
|
|
|
|
<span style="color:#00f">from</span> sklearn.metrics <span style="color:#00f">import</span> f1_score, classification_report, plot_confusion_matrix
|
|
<span style="color:#00f">from</span> mlflow.models.signature <span style="color:#00f">import</span> infer_signature
|
|
|
|
mlflow.set_experiment(<span style="color:#a31515">"My NLP Model"</span>)
|
|
|
|
|
|
<span style="color:#00f">with</span> mlflow.start_run(run_name=<span style="color:#a31515">"TFIDF + Random Forest"</span>):
|
|
|
|
pipe.fit(X_train,y_train)
|
|
|
|
y_pred = pipe.predict(X_test)
|
|
|
|
mlflow.set_tag(<span style="color:#a31515">'client'</span>,<span style="color:#a31515">'That Email Company'</span>)
|
|
|
|
signature = infer_signature(X_test, y_test)
|
|
|
|
mlflow.log_metric(<span style="color:#a31515">'f1'</span>, f1_score(y_test, y_pred, average=<span style="color:#a31515">'micro'</span>))
|
|
mlflow.sklearn.log_model(pipe, <span style="color:#a31515">"model"</span>, signature=signature)
|
|
|
|
<span style="color:#00f">with</span> tempfile.TemporaryDirectory() <span style="color:#00f">as</span> tmpdir:
|
|
|
|
report = classification_report(y_test, y_pred, output_dict=True)
|
|
|
|
<span style="color:#00f">with</span> open(os.path.join(tmpdir, <span style="color:#a31515">"classification_report.json"</span>),<span style="color:#a31515">'w'</span>) <span style="color:#00f">as</span> f:
|
|
json.dump(report, f, indent=2)
|
|
|
|
mlflow.log_artifacts(tmpdir, <span style="color:#a31515">"reporting"</span>)
|
|
</code></pre></div><p>We train the model with <code>pipe.fit()</code> and then get predictions on the test set with <code>pipe.predict(X_test)</code>. This allows us to generate our classification report detailing Precision and Recall per class by comparing <code>y_pred</code> and <code>y_test</code> - the predicted and actual labels for our test set respectively. We also report the overall micro-averaged F1 score for the model to give a high level indication of how it is performing.</p>
|
|
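<p>The classification report gives us the per-class numbers as a JSON artifact. Since <code>plot_confusion_matrix</code> is already imported above, we could also log a confusion matrix image against the same run. The snippet below is a rough sketch rather than part of the original script - it assumes you run it inside the same <code>mlflow.start_run()</code> block, that matplotlib is installed, and that you are on a scikit-learn version where <code>plot_confusion_matrix</code> still exists (it was deprecated in later releases):</p>
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">import os
import tempfile

import matplotlib.pyplot as plt
import mlflow
from sklearn.metrics import plot_confusion_matrix

with tempfile.TemporaryDirectory() as tmpdir:
    # plot_confusion_matrix runs the fitted pipeline over X_test and draws the matrix
    disp = plot_confusion_matrix(pipe, X_test, y_test, xticks_rotation='vertical')
    disp.figure_.set_size_inches(12, 12)
    plt.tight_layout()
    disp.figure_.savefig(os.path.join(tmpdir, "confusion_matrix.png"))

    # store the image alongside the classification report in the run's artifacts
    mlflow.log_artifacts(tmpdir, "reporting")
</code></pre></div>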
<p>The <code>infer_signature()</code> function is quite important here. This is where we tell MLflow what the inputs and outputs for this model look like. By passing in our <code>X_test</code> and <code>y_test</code> variables, MLflow will infer that it should expect a dataframe with a string column called <em>text</em> and that the model outputs a string label.</p>
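<p>If you would rather be explicit than rely on inference, you can also construct the signature by hand. A small sketch using MLflow’s schema classes, equivalent in spirit to the inferred signature above:</p>
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">from mlflow.models.signature import ModelSignature
from mlflow.types.schema import ColSpec, Schema

# one string column named 'text' in, one (unnamed) string label out
signature = ModelSignature(
    inputs=Schema([ColSpec("string", "text")]),
    outputs=Schema([ColSpec("string")]),
)
</code></pre></div>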
<p>You can verify that the signature was captured correctly by opening the run in the MLflow server GUI (run <code>mlflow server</code> and navigate to http://localhost:5000) and viewing the MLmodel file. You should see something like this:</p>
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-yaml" data-lang="yaml">...
signature:
  inputs: <span style="color:#a31515">'[{"name": "text", "type": "string"}]'</span>
  outputs: <span style="color:#a31515">'[{"type": "string"}]'</span>
</code></pre></div><h2 id="running-our-model">Running our model</h2>
<p>Now we are going to run our model as a REST API and make some API calls to it. Firstly, you are going to need to find the full URI of the model that we just trained. The easiest way is to open up the MLflow server GUI (run <code>mlflow server</code> and navigate to http://localhost:5000), open the run we just created and copy the path from there:</p>
<figure>
<img src="images/model-select.jpg"/> <figcaption>
<h4>The full path to the models directory within the run is what we need - if it is shortened with ellipses you may need to expand your browser window to make sure you copy all of it.</h4>
</figcaption>
</figure>
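<p>As a quick sanity check before serving, you can also load the logged model back into Python and run a prediction locally. This is just an illustrative sketch - the <code>runs:/...</code> URI assumes you substitute the run ID you copied from the GUI and that you are running against the same local <code>mlruns</code> store:</p>
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">import mlflow.pyfunc
import pandas as pd

# substitute your own run ID here
model = mlflow.pyfunc.load_model("runs:/872d6cd4b0874c99808c5259d9eb823b/model")

sample = pd.DataFrame([{"text": "hey, I have an old bicycle for sale in the Southampton area"}])
print(model.predict(sample))  # e.g. ['misc.forsale']
</code></pre></div>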
<p>Now we can simply run the MLflow model server in order to test it. The first time you run this it might take a few minutes to initialise, since MLflow will create a new conda environment for the model. However, you should find it’s pretty speedy for subsequent loads.</p>
<p>FYI, if you are using MLflow with cloud-backed storage (e.g. S3 or GCP rather than the local filesystem) then this should still work, but you will need to set environment variables so that the script can find the relevant security tokens etc. as <a href="https://www.mlflow.org/docs/latest/tracking.html#artifact-stores">documented here</a>. You can just substitute the <code>file:///</code> URI for the relevant string from your model run (e.g. <code>gs://</code>).</p>
<p>You should see some output like this:</p>
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">> mlflow models serve -m file:///home/james/workspace/mlflow-example-project/notebooks/mlruns/1/872d6cd4b0874c99808c5259d9eb823b/artifacts/model master [0ea16fd] modified untracked
|
|
2020/12/29 14:00:28 INFO mlflow.models.cli: Selected backend <span style="color:#00f">for</span> flavor <span style="color:#a31515">'python_function'</span>
|
|
2020/12/29 14:00:29 INFO mlflow.pyfunc.backend: === Running command <span style="color:#a31515">'source /home/james/miniconda3/bin/../etc/profile.d/conda.sh && conda activate mlflow-6fd5007aa398d705b7ced4118b6b9ddf2ad4c4e4 1>&2 && gunicorn --timeout=60 -b 127.0.0.1:5000 -w 1 ${GUNICORN_CMD_ARGS} -- mlflow.pyfunc.scoring_server.wsgi:app'</span>
|
|
[2020-12-29 14:00:29 +0000] [1063058] [INFO] Starting gunicorn 20.0.4
|
|
[2020-12-29 14:00:29 +0000] [1063058] [INFO] Listening at: http://127.0.0.1:5000 (1063058)
|
|
[2020-12-29 14:00:29 +0000] [1063058] [INFO] Using worker: sync
|
|
[2020-12-29 14:00:29 +0000] [1063064] [INFO] Booting worker with pid: 1063064
|
|
</code></pre></div><h2 id="using-the-model">Using the model</h2>
|
<p>Now we should be able to test the model. Here’s where it all comes together! Since we used the ColumnTransformer and called <code>infer_signature</code> when we logged our model, the server should:</p>
<ul>
<li>provide a basic level of input validation, returning a clear error to the user if the columns the model expects are missing</li>
<li>understand that the unstructured text input will come from a column named <code>text</code> in a dataframe provided via CSV or JSON.</li>
</ul>
<p>Without using the ColumnTransformer, the model may have behaved incorrectly or unpredictably by interpreting the first column in the input as the text input regardless of what it contained. The ColumnTransformer lets us specify an explicit contract with the REST server, and the model signature provides clear instructions to the user (via validation error messages) on how to format the model input.</p>
<p>Using curl, you can run the following in your shell session:</p>
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">curl --request POST <span style="color:#a31515">\
|
|
</span><span style="color:#a31515"></span> --url http://127.0.0.1:5000/invocations <span style="color:#a31515">\
|
|
</span><span style="color:#a31515"></span> --header <span style="color:#a31515">'Content-Type: application/json; format=pandas-records'</span> <span style="color:#a31515">\
|
|
</span><span style="color:#a31515"></span> --data <span style="color:#a31515">'[
|
|
</span><span style="color:#a31515">{"text":"hey, I have an old bicycle for sale in the Southampton area"}
|
|
</span><span style="color:#a31515">]'</span>
|
|
</code></pre></div><p>Hopefully you will see the following response</p>
|
|
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">[<span style="color:#a31515">"misc.forsale"</span>]
|
|
</code></pre></div><p>It looks like our model worked. Hooray! Now look what happens when we have a typo in our input data</p>
|
|
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-shell" data-lang="shell">curl --request POST <span style="color:#a31515">\
|
|
</span><span style="color:#a31515"></span> --url http://127.0.0.1:5000/invocations <span style="color:#a31515">\
|
|
</span><span style="color:#a31515"></span> --header <span style="color:#a31515">'Content-Type: application/json; format=pandas-records'</span> <span style="color:#a31515">\
|
|
</span><span style="color:#a31515"></span> --data <span style="color:#a31515">'[
|
|
</span><span style="color:#a31515">{"txt":"hey, I have an old bicycle for sale in the Southampton area"}
|
|
</span><span style="color:#a31515">]'</span>
|
|
</code></pre></div><p>We get a response like so:</p>
|
|
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
|
|
"error_code": <span style="color:#a31515">"BAD_REQUEST"</span>,
|
|
"message": <span style="color:#a31515">"Model input is missing columns ['text']. Note that there were extra columns: ['txt']"</span>
|
|
}
|
|
</code></pre></div><p>As you can see we get an error because the ‘text’ column is missing. We also get a hint about the fact that ‘txt’ is an unexpected column. If we were to pass in multiple columns (e.g. we get ‘text’ right but we also pass in ‘from’ containing the email address of the sender, the) the server would provide a response, silently discarding any columns that it does not recognise. It only warns about extra columns in the event that a required field is missing.</p>
|
|
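<p>The same call can of course be made from Python. Here is a small sketch using the <code>requests</code> library (it assumes the model server above is still running on port 5000) - note the extra <code>from</code> column, which the server will simply ignore as described above:</p>
<div class="highlight"><pre style="background-color:#fff;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-python" data-lang="python">import requests

payload = [
    {
        "text": "hey, I have an old bicycle for sale in the Southampton area",
        "from": "someone@example.com",  # extra column - silently discarded by the server
    }
]

resp = requests.post(
    "http://127.0.0.1:5000/invocations",
    json=payload,
    headers={"Content-Type": "application/json; format=pandas-records"},
)
print(resp.json())  # e.g. ["misc.forsale"]
</code></pre></div>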
<h1 id="conclusion">Conclusion</h1>
|
|
<p>In this post we’ve built an end-to-end script that trains and stores an NLP classification model in MLFlow and we’ve also looked at serving the model using MLFlow’s built in deployment tools. There are many ways to skin a cat as the saying goes but this is one tried and tested method for getting MLFlow’s built in REST server to play ball.</p>
|
|
<p>I’ve provided the training script as a <a href="https://gist.github.com/ravenscroftj/1167487c0262b8dd1d92bcf4c2b7efd2">Github gist</a>.</p>
|
|
<p>Tune in next time when we will be showing how to use SpaCy in our MLFlow NLP pipeline.</p>
|
|
|
|
</div>
<div class="tags">
<ul class="flat">
<li><a href="/tags/machine-learning">machine-learning</a></li>
<li><a href="/tags/python">python</a></li>
<li><a href="/tags/ai">ai</a></li>
<li><a href="/tags/devops">devops</a></li>
<li><a href="/tags/mlops">mlops</a></li>
<li><a href="/tags/nlp">nlp</a></li>
<li><a href="/tags/spacy">spacy</a></li>
</ul>
</div><div id="disqus_thread"></div>
<script type="text/javascript">
(function () {
    if (window.location.hostname == "localhost")
        return;

    var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
    var disqus_shortname = 'brainsteam';
    dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
</script>
<noscript>Please enable JavaScript to view the comments powered by Disqus.</noscript>
<a href="http://disqus.com/" class="dsq-brlink">comments powered by <span class="logo-disqus">Disqus</span></a>
</div>
</div>
<div class="footer wrapper">
<nav class="nav">
<div>2021 © James Ravenscroft | <a href="https://github.com/knadh/hugo-ink">Ink</a> theme on <a href="https://gohugo.io">Hugo</a></div>
</nav>
</div>
<script type="application/javascript">
var doNotTrack = false;
if (!doNotTrack) {
    window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)};ga.l=+new Date;
    ga('create', 'UA-186263385-1', 'auto');
    ga('send', 'pageview');
}
</script>
<script async src='https://www.google-analytics.com/analytics.js'></script>
<script>feather.replace()</script>
</body>
</html>