---
categories:
- AI and Machine Learning
date: '2024-05-01 10:13:15'
draft: false
tags: []
title: LLMs Can't Do Probability
type: posts
---

<!-- wp:paragraph -->
<p>I've seen a couple of recent posts in which the writer mentions asking LLMs to do something with a certain probability or a certain percentage of the time. One particular example stuck in my mind, though I've since lost the link (if you're the author, please get in touch so I can link through to you):</p>
<!-- /wp:paragraph -->

<!-- wp:paragraph -->
<p>The gist is that the author built a Custom GPT with educational course material and then put in the prompt that their bot should lie about 20% of the time. They then asked the students to chat to the bot and try to pick out the lies. I think this is a really interesting, lateral-thinking use case, since the kids are probably going to use ChatGPT anyway.</p>
<!-- /wp:paragraph -->

<!-- wp:paragraph -->
<p>The thing that bothered me is that transformer-based LLMs don't know how to interpret requests for certain probabilities of outcomes. We already know that <a href="https://www.reddit.com/r/ChatGPT/comments/1cfxt3v/chatgpt_reflects_human_biases_when_choosing_a/">ChatGPT reflects human bias when generating random numbers</a>. But I decided to put it to the test by asking models to make random choices.</p>
<!-- /wp:paragraph -->

<!-- wp:heading -->
<h2 class="wp-block-heading">Testing Probability in LLMs</h2>
<!-- /wp:heading -->

<!-- wp:paragraph -->
<p>I prompted the models with the following:</p>
<!-- /wp:paragraph -->

<!-- wp:quote -->
<blockquote class="wp-block-quote"><!-- wp:paragraph -->
<p>You are a weighted random choice generator. About 80% of the time please say 'left' and about 20% of the time say 'right'. Simply reply with left or right. Do not say anything else</p>
<!-- /wp:paragraph --></blockquote>
<!-- /wp:quote -->

<!-- wp:paragraph -->
<p>I ran this 1000 times through some different models. Random chance is random (profound, huh?) so we're always going to get some deviation from perfect odds, but we're hoping for roughly 800 'lefts' and 200 'rights', or something in that ballpark.</p>
<!-- /wp:paragraph -->

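<!-- wp:paragraph -->
<p>For reference, the scoring side of the experiment is just a tally over the replies. A minimal sketch of that bookkeeping (the <code>tally</code> helper is my illustration here, not the exact code used for the run):</p>
<!-- /wp:paragraph -->

```python
import collections

def tally(replies):
    # normalise whitespace/case and count the one-word answers
    counts = collections.Counter(r.strip().lower() for r in replies)
    return counts["left"], counts["right"]

# in the real experiment, `replies` came from 1000 separate API calls;
# here a stand-in list shows the bookkeeping
lefts, rights = tally(["left", "Left ", "right", "left"])
```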
<!-- wp:paragraph -->
<p>Here are the results:</p>
<!-- /wp:paragraph -->

<!-- wp:table -->
<figure class="wp-block-table"><table><tbody><tr><td><strong>Model</strong></td><td><strong>Lefts</strong></td><td><strong>Rights</strong></td></tr><tr><td>GPT-4-Turbo</td><td>999</td><td>1</td></tr><tr><td>GPT-3.5-Turbo</td><td>975</td><td>25</td></tr><tr><td>Llama-3-8B</td><td>1000</td><td>0</td></tr><tr><td>Phi-3-3.8B</td><td>1000</td><td>0</td></tr></tbody></table></figure>
<!-- /wp:table -->

<!-- wp:paragraph -->
<p>As you can see, LLMs seem to struggle with probabilities expressed in the system prompt: the models almost always answer 'left' even though we asked for it only 80% of the time. I didn't want to burn lots of $$$ getting models to reply with single-word choices to silly questions, so I stuck with GPT-3.5 (which did best in the first round) and tried a couple of other word pairs to see how the choice of words affects things. This time I only ran each pair 100 times.</p>
<!-- /wp:paragraph -->

<!-- wp:table -->
<figure class="wp-block-table"><table><tbody><tr><td><strong>Choice (Always 80% / 20%)</strong></td><td><strong>Result</strong></td></tr><tr><td>Coffee / Tea</td><td>87/13</td></tr><tr><td>Dog / Cat</td><td>69/31</td></tr><tr><td>Elon Musk / Mark Zuckerberg</td><td>88/12</td></tr></tbody></table><figcaption class="wp-element-caption">Random choices from GPT-3.5-turbo</figcaption></figure>
<!-- /wp:table -->

<!-- wp:paragraph -->
<p>So what's going on here? Well, the models have their own internal weightings for words and phrases, based on the training data used to prepare them. These weights are likely influencing how much attention the model pays to your request.</p>
<!-- /wp:paragraph -->

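<!-- wp:paragraph -->
<p>To see why a strong internal preference can swamp the instruction, recall how the next token is chosen: the model emits logits, and softmax (scaled by temperature) turns them into probabilities. A toy illustration with made-up logits (the numbers are purely hypothetical, not taken from any real model):</p>
<!-- /wp:paragraph -->

```python
import math

def softmax(logits, temperature=1.0):
    # scale by temperature, then normalise into a probability distribution
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical logits where training has the model strongly favouring 'left'
left_p, right_p = softmax([4.0, 1.0])                       # heavily skewed to 'left'
left_hot, right_hot = softmax([4.0, 1.0], temperature=2.0)  # higher temperature narrows the gap
```

With a low temperature (or greedy decoding) that skew becomes near-certainty, which would be consistent with the 999/1 and 1000/0 results above.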
<!-- wp:paragraph -->
<p>So what can we do if we want to simulate some sort of probabilistic outcome? Well, we could use a Python script to randomly decide which of two prompts to send:</p>
<!-- /wp:paragraph -->

<!-- wp:enlighter/codeblock {"language":"python"} -->
<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">import random
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage

# build a list of 100 possible values - 80 are prompt1, 20 are prompt2
choices = (['prompt1'] * 80) + (['prompt2'] * 20)
assert len(choices) == 100

chat = ChatOpenAI(model="gpt-3.5-turbo")

# randomly pick from choices - this gives us the 80/20 odds we want
if random.choice(choices) == 'prompt1':
    r = chat.invoke(input=[SystemMessage(content="Always say left and nothing else.")])
else:
    r = chat.invoke(input=[SystemMessage(content="Always say right and nothing else.")])</pre>
<!-- /wp:enlighter/codeblock -->

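<!-- wp:paragraph -->
<p>As an aside, the 100-element list isn't essential: a single uniform draw gives the same odds and generalises to any probability. A sketch of that variant (<code>pick_prompt</code> is my name for it, not part of the original script):</p>
<!-- /wp:paragraph -->

```python
import random

def pick_prompt(p_left=0.8, rng=random):
    # one Bernoulli draw: rng.random() is uniform on [0, 1),
    # so it falls below p_left exactly that fraction of the time
    return "left" if rng.random() < p_left else "right"

draws = [pick_prompt() for _ in range(10_000)]
share_left = draws.count("left") / len(draws)  # hovers around 0.8
```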
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Conclusion</h3>
<!-- /wp:heading -->

<!-- wp:paragraph -->
<p>How does this help non-technical people who want to build these sorts of use cases, or Custom GPTs that reply with certain responses? Well, it kind of doesn't. I guess a technical-enough user could build a Custom GPT that uses <a href="https://platform.openai.com/docs/guides/function-calling">function calling</a> to decide how it should answer a question for a "spot the misinformation" pop-quiz use case.</p>
<!-- /wp:paragraph -->

<!-- wp:paragraph -->
<p>However, my broad advice here is that you should be very wary of asking LLMs to behave with a certain likelihood unless you can control that likelihood externally (via a script).</p>
<!-- /wp:paragraph -->

<!-- wp:paragraph -->
<p>What could I have done better here? I could have tried a few more word pairs, different distributions (instead of 80/20), and maybe some keywords like "sometimes" or "occasionally".</p>
<!-- /wp:paragraph -->

<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->

<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Update 2024-05-02: Probability and Chat Sessions</h3>
<!-- /wp:heading -->

<!-- wp:paragraph -->
<p>Some of the feedback I received about this work asked why I didn't test multi-turn chat sessions as part of my experiments. Some folks hypothesise that the model will always start with one token or the other unless the temperature is really high. My original experiment doesn't give the LLM access to its own historical predictions, so it can't see how it behaved previously.</p>
<!-- /wp:paragraph -->

<!-- wp:paragraph -->
<p>With true random number generation you wouldn't expect the function to require a list of historical numbers in order to adjust its next answer (although, if we're getting super hair-splitty, I should point out that pseudo-random number generation does depend on a historical 'seed' value).</p>
<!-- /wp:paragraph -->

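<!-- wp:paragraph -->
<p>That seed dependence is easy to demonstrate: two generators started from the same seed emit identical sequences, without ever being shown their own past outputs:</p>
<!-- /wp:paragraph -->

```python
import random

rng1 = random.Random(1234)  # the seed is the only 'history' a PRNG carries
rng2 = random.Random(1234)

seq1 = [rng1.random() for _ in range(5)]
seq2 = [rng2.random() for _ in range(5)]
# same seed, same sequence - no memory of previous answers required
```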
<!-- wp:paragraph -->
<p>The point of this article is that LLMs are definitely not doing true random number generation, so it is interesting to see how conversation context affects their behaviour.</p>
<!-- /wp:paragraph -->

<!-- wp:paragraph -->
<p>I ran a couple of additional experiments. Starting with the prompt above, instead of making single API calls to the LLM I started a chat session where, on each turn, I simply said "Another please". It looks a bit like this:</p>
<!-- /wp:paragraph -->

<!-- wp:quote -->
<blockquote class="wp-block-quote"><!-- wp:paragraph -->
<p>System: You are a weighted random choice generator. About 80% of the time please say 'left' and about 20% of the time say 'right'. Simply reply with left or right. Do not say anything else<br><br>Bot: left<br><br>Human: Another please<br><br>Bot: left<br><br>Human: Another please</p>
<!-- /wp:paragraph --></blockquote>
<!-- /wp:quote -->

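<!-- wp:paragraph -->
<p>Mechanically, each turn appends the model's reply and the "Another please" nudge to a running message list before the next call. A sketch of that loop (the stub model and helper names are mine; the real version wraps a chat-completion API call):</p>
<!-- /wp:paragraph -->

```python
SYSTEM_PROMPT = (
    "You are a weighted random choice generator. About 80% of the time please "
    "say 'left' and about 20% of the time say 'right'. Simply reply with left "
    "or right. Do not say anything else"
)

def run_session(turns, model_reply):
    # model_reply takes the message history and returns a string;
    # in the real experiment it wraps an API call to the model under test
    history = [("system", SYSTEM_PROMPT)]
    answers = []
    for _ in range(turns):
        reply = model_reply(history)
        history.append(("assistant", reply))
        history.append(("human", "Another please"))
        answers.append(reply)
    return answers, history

# stub model that always says 'left', just to show the message flow
answers, history = run_session(3, lambda h: "left")
```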
<!-- wp:paragraph -->
<p>I ran this once per model for 100 turns, and also 10 times per model for 10 turns.</p>
<!-- /wp:paragraph -->

<!-- wp:paragraph -->
<p><strong><em>NB: I excluded Phi from both of these experiments as, in both test cases, it ignored my prompt to reply with one word and started gibbering.</em></strong></p>
<!-- /wp:paragraph -->

<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">100 Turns Per Model</h3>
<!-- /wp:heading -->

<!-- wp:table -->
<figure class="wp-block-table"><table><tbody><tr><td><strong>Model</strong></td><td><strong># Left</strong></td><td><strong># Right</strong></td></tr><tr><td>GPT 3.5 Turbo</td><td>49</td><td>51</td></tr><tr><td>GPT 4 Turbo</td><td>95</td><td>5</td></tr><tr><td>Llama 3 8B</td><td>98</td><td>2</td></tr></tbody></table></figure>
<!-- /wp:table -->

<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">10 Turns, 10 Times Per Model</h3>
<!-- /wp:heading -->

<!-- wp:table -->
<figure class="wp-block-table"><table><tbody><tr><td><strong>Model</strong></td><td><strong># Left</strong></td><td><strong># Right</strong></td></tr><tr><td>GPT 3.5 Turbo</td><td>61</td><td>39</td></tr><tr><td>GPT 4 Turbo</td><td>86</td><td>14</td></tr><tr><td>Llama 3 8B</td><td>71</td><td>29</td></tr></tbody></table></figure>
<!-- /wp:table -->

<!-- wp:paragraph -->
<p>Interestingly, the series of ten shorter conversations gets closest to the probabilities we were looking for, but all scenarios still yield results inconsistent with what the prompt asked for.</p>
<!-- /wp:paragraph -->