---
title: 'Unhashing LSH: Addressing the gap between theory and practice'
author: James
type: post
date: -001-11-30T00:00:00+00:00
draft: true
url: /?p=308
categories:
  - Uncategorized
---

I’ve recently been trying to become intimately familiar with how LSH works, in both theory and practice, in order to solve some prickly comparison problems that would otherwise require O(n²) pairwise comparisons.

For the uninitiated, LSH, or Locality-Sensitive Hashing, is a method frequently used for “sketching” large data structures in a way that allows quick comparison and grouping of similar items without having to compare every item with every other item in the dataset. It’s often mentioned in the same breath as “the curse of dimensionality”: the problem that complex items like documents and images must be represented in terms of the words or pixels they contain, which quickly add up and demand enormous amounts of memory and compute time to process.

The literature on LSH is factually accurate and mathematically complete, but it’s also really hard going. At the other end of the spectrum are some incredibly helpful blog posts that tell you how LSH works in practice. This post aims to explain the connections between the two.

## Nearest-Neighbour (NN) Problem

The nearest-neighbour problem is the task of finding the data points or items “most similar” to a particular starting point or “query”. For example, say we are building a music recommendation system and we know that the user likes song **q**. We want to find songs similar to **q** to recommend to the user. We can represent each song as a vector of its attributes – let’s say for simplicity that we’re using 3 dimensions, each on a scale of 1–10: Tempo (slow to fast), Singer Pitch (deep to high) and Heaviness (Pop Rock to Death Metal). If you plot all of the songs in your catalogue in this way, the ones with a similar sound should end up clustered together.

*(Figure: the example songs plotted as points in a 3-D attribute space of Tempo, Singer Pitch and Heaviness.)*

In order to find the nearest neighbours for a given data point – for example, **q** is “Van Halen – Jump” – we have to loop over all of the other items, find the [Euclidean distance][1] between the points and then take the point with the smallest distance as the most similar: in this case, it’s Queen’s “Bohemian Rhapsody”! With 6 songs in the catalogue that’s only 5 comparisons, but what if we have a music library of millions of songs? That’s an awful lot of comparisons!

It’s also reasonable to assume that we’d be interested in comparing more than 3 attributes of a song – we can’t render more than 3 dimensions on a diagram, but as you can imagine, if there are 1,000 or even 10,000 attributes then working out the Euclidean distance becomes much more expensive. How can we reduce the number of comparisons needed?

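To make the cost concrete, here’s a minimal brute-force sketch in Python. The catalogue, attribute values and song choices are invented for illustration; the point is simply that every query has to scan the entire catalogue.

```python
import math

# Toy catalogue: each song is a 3-dimensional attribute vector
# (Tempo, Singer Pitch, Heaviness) – all values invented for illustration.
catalogue = {
    "Queen - Bohemian Rhapsody": (5, 8, 4),
    "Aerosmith - I Don't Want to Miss a Thing": (3, 7, 3),
    "Dragonforce - Through the Fire and Flames": (10, 8, 8),
    "Slipknot - Duality": (8, 5, 9),
    "System of a Down - Toxicity": (7, 6, 8),
}

def euclidean(a, b):
    """Straight-line distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbour(query, items):
    """Brute force: the query is compared against every item,
    so one lookup is O(n) and all-pairs comparison is O(n²)."""
    return min(items, key=lambda name: euclidean(query, items[name]))

q = (9, 7, 5)  # "Van Halen - Jump", again with invented attribute values
print(nearest_neighbour(q, catalogue))
```
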
## Approximate Nearest-Neighbour (NN) Problem

In order to speed up the recommendation process, we need to artificially reduce the number of comparisons that our system has to make. What if we had some prior knowledge about which part of the feature space song **q** sits in, and chose to compare it only with other songs from that part of the space?

Following on from the example above, let’s imagine that we divide our space into two buckets: ‘rock’ and ‘metal’. If we already know that Van Halen – Jump is a rock song then we can immediately discount Dragonforce, Slipknot and System of a Down as possible nearest neighbours and compare it only with Aerosmith and Queen.

You may have already noticed that there’s a catch here: we’re at risk of missing potential nearest neighbours that sit on the border of our divisions. Metallica – Nothing Else Matters is an unusually slow, balladic number from the thrash metal heavyweights, and many people who don’t otherwise like Metallica might enjoy it – especially if they like Aerosmith’s I Don’t Want to Miss a Thing and other pop-rock ballads. The trade-off here is one of speed versus accuracy: by drawing lines of division through our collection, we reduce the number of comparisons we need to make, but risk missing near neighbours that are “on the edge” in the process. We can somewhat address this problem by dividing our collection into buckets in a few different ways and checking a handful of them. For example, Queen – Bohemian Rhapsody could belong to both “singers with high voices” and “rock ballads”.

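To give a flavour of how this bucketing can be automated rather than hand-labelled, here’s a rough sketch of one common LSH scheme, random-hyperplane hashing (strictly speaking it approximates cosine rather than Euclidean similarity, but the bucketing idea is the same). It reuses the toy `catalogue`, query `q` and `euclidean` helper from the earlier sketch, and all parameters are arbitrary choices for illustration.

```python
import random
from collections import defaultdict

def make_hyperplanes(num_planes, dims, seed):
    """One set of random hyperplanes defines one hash table."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dims)] for _ in range(num_planes)]

def lsh_key(vector, hyperplanes):
    """Hash a vector to one bit per hyperplane: which side of the plane
    does it fall on? Nearby vectors tend to get the same bits."""
    return tuple(int(sum(p * v for p, v in zip(plane, vector)) >= 0)
                 for plane in hyperplanes)

# Several independent tables reduce the risk of missing a neighbour
# that falls "on the edge" of any single set of divisions.
tables = [make_hyperplanes(num_planes=2, dims=3, seed=s) for s in range(4)]
buckets = [defaultdict(list) for _ in tables]

for name, vec in catalogue.items():
    for table, planes in zip(buckets, tables):
        table[lsh_key(vec, planes)].append(name)

# Candidate set for q: the union of q's buckets across all tables.
candidates = set()
for table, planes in zip(buckets, tables):
    candidates.update(table[lsh_key(q, planes)])

# Only the candidates are compared exactly – far fewer than the whole catalogue.
best = min(candidates, key=lambda name: euclidean(q, catalogue[name])) if candidates else None
print(candidates, best)
```

More hyperplanes per table means smaller, purer buckets and fewer exact comparisons, at the cost of more borderline misses; adding more tables claws back recall, at the cost of memory and hashing time – exactly the speed-versus-accuracy trade-off described above.
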
[1]: https://en.wikipedia.org/wiki/Euclidean_distance