---
title: "AI can't solve all our problems, but that doesn't mean it isn't intelligent"
author: James
type: post
date: 2016-12-08T10:08:13+00:00
url: /2016/12/08/ai-cant-solve-all-our-problems-but-that-doesnt-mean-it-isnt-intelligent/
medium_post:
- 'O:11:"Medium_Post":11:{s:16:"author_image_url";s:69:"https://cdn-images-1.medium.com/fit/c/200/200/0*naYvMn9xdbL5qlkJ.jpeg";s:10:"author_url";s:30:"https://medium.com/@jamesravey";s:11:"byline_name";N;s:12:"byline_email";N;s:10:"cross_link";s:2:"no";s:2:"id";s:12:"e3e315592001";s:21:"follower_notification";s:3:"yes";s:7:"license";s:19:"all-rights-reserved";s:14:"publication_id";s:12:"6fc55de34f53";s:6:"status";s:6:"public";s:3:"url";s:117:"https://medium.com/@jamesravey/ai-cant-solve-all-our-problems-but-that-doesn-t-mean-it-isn-t-intelligent-e3e315592001";}'
categories:
- PhD
- Work
tags:
- AI
- machine learning
- philosophy
---
<figure id="attachment_150" aria-describedby="caption-attachment-150" style="width: 285px" class="wp-caption alignright"><img loading="lazy" class="wp-image-150 size-medium" src="https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/Thomas_Hobbes_portrait.jpg?resize=285%2C300&#038;ssl=1" width="285" height="300" srcset="https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/Thomas_Hobbes_portrait.jpg?resize=285%2C300&ssl=1 285w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/Thomas_Hobbes_portrait.jpg?resize=768%2C810&ssl=1 768w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/Thomas_Hobbes_portrait.jpg?resize=971%2C1024&ssl=1 971w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/Thomas_Hobbes_portrait.jpg?w=1109&ssl=1 1109w" sizes="(max-width: 285px) 100vw, 285px" data-recalc-dims="1" /><figcaption id="caption-attachment-150" class="wp-caption-text">Thomas Hobbes, perhaps most famous for his thinking on western politics, was also thinking about how the human mind &#8220;computes things&#8221; over 350 years ago.</figcaption></figure>
[A recent opinion piece on Wired][1] called for us to stop labelling our current, task-specific machine learning models as AI, on the grounds that they are not intelligent. I respectfully disagree.
AI is not a new concept. The idea that a computer could &#8216;think&#8217; like a human, and one day pass for one, has been around since Turing and in some form long before him. The inner workings of the human brain, and how we carry out computational processes, were discussed by great philosophers such as Thomas Hobbes, who wrote in his 1655 book _De Corpore_: _&#8220;by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract.&#8221;_ Over the years, AI has continued to capture the hearts and minds of great thinkers, scientists and, of course, creatives and artists.
<figure id="attachment_151" aria-describedby="caption-attachment-151" style="width: 300px" class="wp-caption alignleft"><img loading="lazy" class="wp-image-151 size-full" src="https://i1.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/The_Matrix_soundtrack_cover.jpg?resize=300%2C300&#038;ssl=1" width="300" height="300" srcset="https://i1.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/The_Matrix_soundtrack_cover.jpg?w=300&ssl=1 300w, https://i1.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/The_Matrix_soundtrack_cover.jpg?resize=150%2C150&ssl=1 150w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-151" class="wp-caption-text">The Matrix: a modern day telling of [Rene Descartes&#8217; &#8220;Evil Demon&#8221;][2] theorem</figcaption></figure>
Visionary science fiction authors of the 20th century &#8211; Arthur C. Clarke, Isaac Asimov and Philip K. Dick &#8211; built worlds of fantasy inhabited by self-aware artificial intelligence systems and robots, [some of which could pass for humans unless subjected to a very specific and complicated test][3]. Endless films have been released that &#8220;sex up&#8221; AI: the Terminator series, The Matrix, Ex Machina &#8211; the list goes on. However, like all good science fiction, these stories paint marvellous and thrilling visions of futures that remain in the future, even in 2016.
The science of AI is a hugely exciting place to be too (_I would say that, wouldn&#8217;t I?_). In recent decades we&#8217;ve mastered speech recognition, optical character recognition and machine translation well enough that I can visit Japan and communicate, via my mobile phone, with a local shopkeeper without either party having to learn the other&#8217;s language. We have arrived at a point where we can train machine learning models to do some specific tasks better than people can (including driving cars and [diagnostic oncology][4]). We call these current-generation AI models &#8220;weak AI&#8221;. Computers that can solve any problem we throw at them (in other words, ones that have generalised intelligence, known as &#8220;strong AI&#8221; systems) are a long way off. However, that shouldn&#8217;t detract from what we have already solved with weak AI.
One of the problems with living in a world of 24/7 news cycles and clickbait headlines is that nothing is new or exciting any more. Every small incremental change in the world is reported straight away, across the globe. Every new discovery, every fractional increase in performance from AI gets a blog post or a news article. It makes everything seem boring. _Oh, Tesla&#8217;s cars can drive themselves? So what? Google&#8217;s cracked Go? Whatever&#8230;_
<figure id="attachment_152" aria-describedby="caption-attachment-152" style="width: 300px" class="wp-caption alignright"><img loading="lazy" class="wp-image-152 size-medium" src="https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/tom-Bathroom-scale-2400px.png?resize=300%2C300&#038;ssl=1" width="300" height="300" srcset="https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/tom-Bathroom-scale-2400px.png?resize=300%2C300&ssl=1 300w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/tom-Bathroom-scale-2400px.png?resize=150%2C150&ssl=1 150w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/tom-Bathroom-scale-2400px.png?resize=768%2C769&ssl=1 768w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/tom-Bathroom-scale-2400px.png?resize=1024%2C1024&ssl=1 1024w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/tom-Bathroom-scale-2400px.png?w=1320&ssl=1 1320w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/tom-Bathroom-scale-2400px.png?w=1980&ssl=1 1980w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-152" class="wp-caption-text">If you lose 0.2Kg overnight, your spouse probably won&#8217;t notice. Lose 50 kg and I can guarantee they would</figcaption></figure>
If you lose 50 kg over six months, your spouse is only going to notice when you buy a new shirt that&#8217;s two sizes smaller, or when they spot a change in your build as you get out of the shower. A friend you meet up with once a year, though, is going to see a huge change, because the last time they saw you, you were twice the size. These days, technology moves on in tiny increments so quickly that we no longer notice the huge changes &#8211; we&#8217;re like the spouse, constantly seeing the tiny ones.
What if we did see the huge changes? What if we could cut ourselves off from the world for months at a time? If you went back to 1982 and told people that every day you talk to your phone, using just your voice, and it tells you about your schedule and which restaurant to go to, would anyone question that what you describe is AI? If you told someone from 1995 that you can [buy a self-driving car][5] via a small glass tablet you carry around in your pocket, are they not going to wonder at the world we live in? We have come a long, long way and we take it for granted. Most of us use AI on a day-to-day basis without even questioning it.
Another common criticism of current weak AI models is precisely their lack of the general reasoning skills that would make them strong AI:
> <span class="lede" tabindex="-1">DEEPMIND HAS SURPASSED </span>the <a href="https://www.wired.com/2016/03/googles-ai-taking-one-worlds-top-go-players/" target="_blank">human mind</a> on the Go board. Watson <a href="https://www.wired.com/2014/01/watson-cloud/" target="_blank">has crushed</a> America&#8217;s trivia gods on _Jeopardy_. But ask DeepMind to play Monopoly or Watson to play _Family Feud_, and they won&#8217;t even know where to start.
That&#8217;s absolutely true. The computer science name for this constraint is the &#8220;no free lunch&#8221; theorem for optimisation: averaged across all possible problems, no learning algorithm does better than any other, so you don&#8217;t get something for nothing when you train a machine learning model. In training a weak AI model for a specific task, you are necessarily hampering its ability to perform well at other tasks. I guess a human analogy would be the education system.
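To make that intuition concrete (this is a toy sketch, not the formal theorem), imagine a learner that simply memorises the labels of the task it was trained on. Trained on one concept, it scores perfectly on that concept and fails completely on the opposite one, so averaged over both tasks it is no better than guessing:

```python
# Two "tasks" defined over the same inputs, with opposite labellings.
inputs = list(range(8))
task_a = {x: x % 2 for x in inputs}        # label = parity of x
task_b = {x: 1 - (x % 2) for x in inputs}  # the opposite concept

def train(task):
    """A caricature of a weak AI: it memorises its training task."""
    return dict(task)

def accuracy(model, task):
    """Fraction of inputs the model labels correctly for a given task."""
    return sum(model[x] == task[x] for x in inputs) / len(inputs)

model = train(task_a)
print(accuracy(model, task_a))  # 1.0 -> perfect on the task it was trained for
print(accuracy(model, task_b))  # 0.0 -> hopeless on the opposite task
```

Averaged over both tasks the memoriser scores 0.5, exactly what coin-flipping would get: specialising for one task paid for itself in performance on the other.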
<figure id="attachment_153" aria-describedby="caption-attachment-153" style="width: 300px" class="wp-caption alignright"><img loading="lazy" class="wp-image-153 size-medium" src="https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/no_idea.jpg?resize=300%2C169&#038;ssl=1" width="300" height="169" srcset="https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/no_idea.jpg?resize=300%2C169&ssl=1 300w, https://i0.wp.com/brainsteam.co.uk/wp-content/uploads/2016/12/no_idea.jpg?w=625&ssl=1 625w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-153" class="wp-caption-text">If you took away my laptop and told me to run cancer screening tests in a lab, I would look like this</figcaption></figure>
Aged 14, at a high school in the UK, I was asked which 11 GCSEs I wanted to take. At 16 I had to narrow this down to 5 A levels; at 18 I was asked to pick a single degree; and at 21 I had to decide which tiny part of AI and robotics (which I&#8217;d studied at degree level) I wanted to specialise in at PhD level. Now that I&#8217;m halfway through a PhD in Natural Language Processing in my late 20s, would you suddenly turn around and say, &#8220;actually, you&#8217;re not intelligent, because if I asked you to diagnose lung cancer in a child you wouldn&#8217;t be able to&#8221;? Does what I&#8217;ve achieved become irrelevant, paling against that which I cannot achieve? I do not believe that any reasonable person would make this argument.
The AI singularity has not happened yet, and it is certainly still some years away. However, does that detract from what we have achieved so far? No. No it does not.
[1]: https://www.wired.com/2016/12/artificial-intelligence-artificial-intelligent/
[2]: https://en.wikipedia.org/wiki/Brain_in_a_vat
[3]: https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F
[4]: https://www.top500.org/news/watson-proving-better-than-doctors-in-diagnosing-cancer/
[5]: https://www.tesla.com/en_GB/models