---
categories:
- AI and Machine Learning
date: '2024-04-26 13:41:42'
draft: false
preview: /social/ad0f037fd86ff71b1ec6e5666a7863e97631e271915f4daddd13b98a0ba950d5.png
tags:
- AI
- llms
- nlp
title: Can Phi3 and Llama3 Do Biology?
type: posts
url: /2024/04/26/can-phi-and-llama-do-biology/
---
<!-- wp:paragraph -->
<p>Small Large Language Model might sound like a bit of an oxymoron. However, I think it perfectly describes the class of LLMs in the 1-10 billion parameter range like Llama 3 and Phi 3. In the last few days, Meta and Microsoft have both released these open(ish) models, which can happily run on normal hardware. Both perform surprisingly well for their size, competing with much larger models like GPT 3.5 and Mixtral. But how well do they generalise to new, unseen tasks? Can they do biology?</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Introducing Llama and Phi</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Meta's offering, Llama 3 8B, is an 8 billion parameter model that can be run on a modern laptop. It performs <a href="https://huggingface.co/MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF/discussions/5">almost as well as the Mixtral 8x22B mixture-of-experts model</a>, which is roughly 22x bigger and far more compute intensive.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Microsoft's model, Phi 3 mini, is around half the size of Llama 3 8B at 3.8 billion parameters. It is small enough to run on a high-end smartphone at a reasonable speed. Incredibly, <a href="https://arxiv.org/html/2404.14219v1">Phi actually beats Llama 3 8B</a>, which is twice as big, at a few popular benchmarks, including MMLU, which approximately measures "how well does this model behave as a knowledgeable chatbot?", and HumanEval, which measures "how well can this model write code?".</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>I've also read a lot of anecdotal evidence about people chatting to these models and finding them quite engaging and useful chat partners (as opposed to previous generation small models). This seems to back up the benchmark performance and provide some validation of the models' utility.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Both Microsoft and Meta have stated that the key difference between these models and previous iterations of their smaller LLMs is the training regime. Interestingly, the two companies applied very different training strategies. Meta trained Llama 3 on <a href="https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md#training-data">over 15 trillion tokens (words)</a>, an unusually large corpus for a small model. Microsoft trained Phi on <a href="https://www.microsoft.com/en-us/research/publication/textbooks-are-all-you-need/">much smaller training sets curated for high quality</a>.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Can Phi, Llama and other Small Models Do Biology?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Having a model small enough to run on your phone and generate funny poems or trivia questions is neat. However, for AI and NLP practitioners, a more interesting question is "do these models generalise well to new, unseen problems?"</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>I set out to gather some data about how well Phi and Llama 3 8B generalise to a less well-known task. As it happened, I have recently been working with my friend <a href="https://twitter.com/drdanponders/">Dan Duma</a> on a test harness for BioASQ Task B, a niche NLP task in the biomedical space. The model is fed a series of snippets from scientific papers and asked a question that it must answer correctly. There are four different formats of question, which I'll explain below.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>The 11th BioASQ Task B <a href="http://participants-area.bioasq.org/results/11b/phaseB/">leaderboard</a> is somewhat dominated by <a href="https://ceur-ws.org/Vol-3497/paper-009.pdf">GPT-4 entrants</a> with perfect scores at some of the sub-tasks. If you were somewhat cynical, you might consider this task "solved". However, we think it's an interesting arena for testing how well smaller models are catching up to big commercial offerings.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>BioASQ B is primarily a reading comprehension task with a slightly niche subject-matter. The models under evaluation are unlikely to have been explicitly trained to answer questions about this material. Smaller models are often quite effective at these sorts of <a href="https://aws.amazon.com/what-is/retrieval-augmented-generation/">RAG</a>-style problems since they do not need to have internalised lots of facts and general information. In fact, in <a href="https://arxiv.org/html/2404.14219v1">their technical report</a>, the authors of Phi-3 mini call out the fact that their model can't retain factual information but could be augmented with search to produce reasonable results. This seemed like a perfect opportunity to test it out.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">How The Task Works</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>There are four types of question in Task B: factoid, yes/no, list and summary. However, since summary answers are quite tricky to measure, they are not part of the BioASQ leaderboard, and we also chose to omit them from our tests.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Each question is provided along with a set of snippets. These are full sentences or paragraphs that have been pre-extracted from scientific papers. Incidentally, that activity is BioASQ <a href="http://participants-area.bioasq.org/Tasks/">Task A</a> and it requires a lot more moving parts since there's retrieval involved too. In Task B we are concerned with existing sets of snippets and questions only.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>In each case the model is required to respond with a short and precise "exact answer" to the question. The model may optionally also provide an "ideal answer" that gives some rationale for the exact answer. The ideal answer may provide useful context for the user but is not formally evaluated as part of BioASQ.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Yes/No questions require an exact answer of just "yes" or "no". For list questions, we are looking for a list of named entities (for example, symptoms or types of microbe). For factoid questions we are typically looking for a single named entity. Models are allowed to respond to factoid questions with multiple candidate answers, so factoid responses are scored by how close to the top of the list the "correct" answer is ranked.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>The figure from the <a href="https://ceur-ws.org/Vol-3497/paper-009.pdf">Hsueh et al. (2023) paper</a> below illustrates this quite well:</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":2510,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image size-full"><img src="/media/image-8_eb742188.png" alt="Examples of different question types. Full transcriptions of each are:
Yes/No
Question: Proteomic analyses need prior knowledge of the organism complete genome. Is the complete genome of the bacteria of the genus Arthrobacter available?
Exact Answer: yes
Ideal Answer: Yes, the complete genome sequence of Arthrobacter (two strains) is deposited in GenBank.
List
Question: List Hemolytic Uremic Syndrome Triad.
Exact Answer: [anaemia, thrombocytopenia, renal failure]
Ideal Answer: Hemolytic uremic syndrome (HUS) is a clinical syndrome characterized by the triad of anaemia, thrombocytopenia, renal failure.
Factoid
Question: What enzyme is inhibited by Opicapone?
Exact Answer: [catechol-O-methyltransferase]
Ideal Answer: Opicapone is a novel catechol-O-methyltransferase (COMT) inhibitor to be used as adjunctive therapy in levodopa-treated patients with Parkinson's disease
Summary
Question: What kind of affinity purification would you use in order to isolate soluble lysosomal proteins?
Ideal Answer: The rationale for purification of the soluble lysosomal proteins resides in their characteristic sugar, the mannose-6-phosphate (M6P), which allows an easy purification by affinity chromatography on immobilized M6P receptors." class="wp-image-2510"/><figcaption class="wp-element-caption">Figure 1 from the <a href="https://ceur-ws.org/Vol-3497/paper-009.pdf">Hsueh et al. (2023) paper</a> illustrates the different question types succinctly</figcaption></figure>
<!-- /wp:image -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Our Setup</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We wrote a Python script that passes the question, context and guidance about the type of question to the model. We used a <a href="https://github.com/ollama/ollama/issues/3616">patched version of Ollama</a> that allowed us to put restrictions on the shape of the model output, ensuring responses were valid JSON in the same shape and structure as the BioASQ examples. These forced grammars saved us loads of time trying to coax JSON out of models in the structure we wanted, which is something smaller models aren't great at. Even so, models would sometimes still fail to give valid responses, for example by getting stuck in infinite loops spitting out brackets or newlines. We gave models up to three chances to produce a valid JSON response before a question was marked unanswerable and skipped.</p>
<!-- /wp:paragraph -->
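<!-- wp:paragraph -->
<p>To give a flavour of how the harness works, here is a minimal sketch of the question-answering loop. It is illustrative only: it uses the standard Ollama REST API's JSON mode rather than the patched grammar constraints we actually used, and the prompt wording and JSON field names are assumptions rather than our exact prompts.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code># Illustrative sketch of the harness loop, not our exact code.
# Uses Ollama's standard JSON mode instead of the patched grammar support;
# the prompt template and field names below are assumptions.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

PROMPT_TEMPLATE = """You are a biomedical question answering assistant.
Context:
{context}

Answer the following {question_type} question as JSON with the keys
"exact_answer" and "ideal_answer".
Question: {question}"""


def ask(model, question, snippets, question_type, max_attempts=3):
    """Return the parsed JSON answer, or None if the model never produces valid JSON."""
    prompt = PROMPT_TEMPLATE.format(
        context="\n".join(snippets),  # snippets are concatenated with newlines
        question=question,
        question_type=question_type,
    )
    for _ in range(max_attempts):
        resp = requests.post(OLLAMA_URL, json={
            "model": model,
            "prompt": prompt,
            "format": "json",  # ask Ollama to constrain the output to valid JSON
            "stream": False,
        }, timeout=300)
        resp.raise_for_status()
        try:
            return json.loads(resp.json()["response"])
        except json.JSONDecodeError:
            continue  # e.g. the model got stuck emitting brackets or newlines
    return None  # marked unanswerable after max_attempts failures
</code></pre>
<!-- /wp:code -->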
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Prompts</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We used exactly the same prompts for all of the models, which may have left room for further performance improvements. The exact prompts and grammar constraints that we used <a href="https://memos.jamesravey.me/m/6LnBLQNihaS6FtPkcbBx4Z">can be found here</a>. Snippets are concatenated together, separated by newlines, and provided as "context" in the prompt template.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>We used <a href="https://github.com/BioASQ/Evaluation-Measures/tree/master">the official BioASQ scoring tool</a> to evaluate the responses and produce the results below. We evaluated our pipeline on the <a href="http://participants-area.bioasq.org/Tasks/11b/goldenDataset/">Task 11B Golden Enriched test set</a>. You have to create a free BioASQ account to log in and download the data.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Models</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We compared quantized versions of Phi and Llama with some other similarly sized models which perform well at benchmarks.</p>
<!-- /wp:paragraph -->
<!-- wp:list {"className":"ticss-f3e747da","hasCustomCSS":true,"customCSS":"li a {\n text-decoration: underline;\n}\n"} -->
<ul class="ticss-f3e747da"><!-- wp:list-item {"className":"ticss-4713a356","hasCustomCSS":true,"customCSS":"a {\n text-decoration: underline;\n}\n"} -->
<li class="ticss-4713a356"><a href="https://ollama.com/library/llama3:8b"><span style="text-decoration: underline;">Llama 3 8B</span></a></li>
<!-- /wp:list-item -->
<!-- wp:list-item {"className":"ticss-4713a356","hasCustomCSS":true,"customCSS":"a {\n text-decoration: underline;\n}\n"} -->
<li class="ticss-4713a356"><a href="https://ollama.com/library/phi3"><span style="text-decoration: underline;">Phi 3 Mini 3.8B </span></a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><span style="text-decoration: underline;"><a href="https://ollama.com/library/mistral:7b">Mistral 7B</a></span></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><span style="text-decoration: underline;"><a href="https://ollama.com/library/starling-lm:7b">Starling LM 7B</a></span></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><span style="text-decoration: underline;"><a href="https://ollama.com/library/zephyr:7b">Zephyr 7B</a></span></li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p>Note that although Phi is approximately half the size of the other models, the authors report <a href="https://arxiv.org/html/2404.14219v1#S3">competitive results against much larger models</a> for a number of widely used benchmarks, so it seems reasonable to compare it with these 7B and 8B models as opposed to only benchmarking against other 4B and smaller models.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Results</h3>
<!-- /wp:heading -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Yes/No Questions</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The simplest type of BioASQ question is yes/no. These results are measured with macro F1, which gives us a single metric across performance on both "yes" and "no" questions.</p>
<!-- /wp:paragraph -->
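<!-- wp:paragraph -->
<p>For the curious, here is a toy illustration of how macro F1 over yes/no answers can be computed with scikit-learn. The labels are made up; the numbers reported below come from the official BioASQ scorer, not this snippet.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code># Toy illustration of macro F1 over yes/no answers (not the official scorer).
from sklearn.metrics import f1_score

gold = ["yes", "no", "yes", "yes", "no"]  # made-up gold labels
pred = ["yes", "no", "no", "yes", "no"]   # made-up model predictions

# Macro F1 averages the per-class F1 for "yes" and "no", so both classes
# count equally regardless of how many questions fall into each.
macro_f1 = f1_score(gold, pred, average="macro", labels=["yes", "no"])
print(f"macro F1 = {macro_f1:.2f}")
</code></pre>
<!-- /wp:code -->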
<!-- wp:image {"id":2502,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image size-full"><img src="/media/image-6_3b3e8044.png" alt="Diagram of Yes/No F1
Llama3 gets 1.0
Mistral gets 0.8
Phi gets 0.7
Starling gets 0.9
Zephyr gets 0.85
The bars on the chart have little range indicators because they represent the average values over 4 sets of results." class="wp-image-2502"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>The results show that all five models perform reasonably well at this task. Phi 3 lags behind the others a little, but only by about 10% relative to its closest competitor. <a href="http://participants-area.bioasq.org/results/11b/phaseB/">The best solutions to this task</a> come in at 1.0 F1, and Llama 3 and Starling both achieve pretty close to perfect results here.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Factoid Questions</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>For factoid answers we measure responses in <a href="https://en.wikipedia.org/wiki/Mean_reciprocal_rank">MRR</a> since the model can return multiple possible answers. We are interested in how close the right answers are to the top of the list.</p>
<!-- /wp:paragraph -->
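<!-- wp:paragraph -->
<p>As a rough illustration of the metric (not the official scorer, which matches answers more gracefully), MRR can be computed like this:</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code># Toy illustration of mean reciprocal rank for factoid questions.
# Each prediction is a ranked list of candidate answers; a question scores
# 1/rank of the first correct candidate, or 0 if no candidate matches.
# Naive exact string matching only; the official scorer is more forgiving.

def reciprocal_rank(ranked_answers, gold_answers):
    for rank, answer in enumerate(ranked_answers, start=1):
        if answer.lower() in gold_answers:
            return 1.0 / rank
    return 0.0


def mean_reciprocal_rank(predictions, golds):
    return sum(reciprocal_rank(p, g) for p, g in zip(predictions, golds)) / len(golds)


# Made-up example: correct answer ranked first (1.0), second (0.5), missing (0.0)
print(mean_reciprocal_rank(
    [["comt", "mao-b"], ["renal failure", "anaemia"], ["some wrong answer"]],
    [{"comt"}, {"anaemia"}, {"thrombocytopenia"}],
))  # prints 0.5
</code></pre>
<!-- /wp:code -->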
<!-- wp:image {"id":2501,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image size-full"><img src="/media/image-5_4e796925.png" alt="Factoid results
Llama gets roughly 0.55 MRR
Mistral gets roughly 0.05 MRR
Phi 3 gets roughly 0.15 MRR
Starling gets roughly 0.17 MRR
Zephyr gets roughly 0.12 MRR
The bars on the chart have little range indicators because they represent the average values over 4 sets of results." class="wp-image-2501"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>This graph is a lot starker than the yes/no graph. Llama 3 outperforms its next closest neighbour by a significant margin (roughly +0.40 MRR). The best solution to this task, again <a href="https://arxiv.org/abs/2306.16108">a GPT-4-based entrant</a>, weighs in at 0.6316 MRR, so it's pretty impressive that Llama 3 8B provides results in the same ballpark as a model many times larger. For this one, Phi is in third place behind Starling-LM 7B. Again, given that Phi is half the size of that model, its performance is quite impressive.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">List Questions</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We measure list questions in F1. A false positive is when something irrelevant is included in the answer and a false negative is when something relevant is missed from an answer. F1 gives us a single statistic that balances the two.</p>
<!-- /wp:paragraph -->
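<!-- wp:paragraph -->
<p>Again purely as an illustration of the metric (the reported results come from the official scorer), a per-question list F1 can be sketched like this:</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code># Toy illustration of per-question F1 for list answers.
# A false positive is a predicted entity not in the gold list; a false
# negative is a gold entity the model missed. Exact matching keeps it simple.

def list_f1(predicted, gold):
    pred_set = {p.lower() for p in predicted}
    gold_set = {g.lower() for g in gold}
    true_pos = len(pred_set.intersection(gold_set))
    if true_pos == 0:
        return 0.0
    precision = true_pos / len(pred_set)
    recall = true_pos / len(gold_set)
    return 2 * precision * recall / (precision + recall)


# Made-up example: one correct entity, one spurious, one missed -> F1 = 0.5
print(list_f1(["anaemia", "fever"], ["anaemia", "thrombocytopenia"]))
</code></pre>
<!-- /wp:code -->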
<!-- wp:image {"id":2503,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image size-full"><img src="/media/image-7_9a728909.png" alt="Llama 3 gets roughly 0.45 F1
Mistral gets roughly 0.21 F1
Phi gets roughly 0.05 F1
Starling gets roughly 0.27 F1
Zephyr gets roughly 0.32 F1
The bars on the chart have little range indicators because they represent the average values over 4 sets of results." class="wp-image-2503"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>This one was a little surprising to me, as Phi does a lot worse than any of its counterparts. We noticed that Phi produced a much higher rate of unanswerable questions than any of the other models, which may be due to the somewhat complex JSON structure required by list-type questions. It may be worth re-testing with different formatting arrangements to see whether these formatting failures are masking reasonable underlying performance at the task.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Llama 3 8B wins again. The current best solution, again <a href="https://ceur-ws.org/Vol-3497/paper-011.pdf">a GPT-4-based system</a>, achieves an F1 of 0.72 so even Llama 3 8B leaves a relatively wide gap here. It would be worth testing the larger variants of Llama 3 to see how well they perform at this task and whether they are competitive with GPT-4.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Discussion and Conclusion</h2>
<!-- /wp:heading -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Llama3</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We've seen that Llama 3 8B and, to a lesser extent, Phi 3 Mini are able to generalise reasonably well to a reading comprehension task in a field that wasn't a primary concern for either set of model authors. This isn't conclusive evidence for or against the general performance of these models on unseen tasks. However, it is certainly an interesting data point showing that Llama 3 in particular really is competitive with much larger models at this task. I wonder if that's because it was trained on such a large corpus, which may have included some biomedical content.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Phi</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>I'm reluctant to critique Phi's reasoning and reading comprehension ability too harshly, since there's a good chance that it was disadvantaged by our test setup and the forced JSON structure, particularly for the list task. However, the weaker performance at the yes/no questions may be a hint that it isn't quite as good at generalised reading comprehension as the competing larger models.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>We know that Phi 3, like its predecessor, was trained on data that <em>"consists of heavily filtered web data (according to the “educational level”) from various open internet sources, as well as synthetic LLM-generated data."</em> However, we don't know specifically what was included or excluded. If Llama 3 went for a "cast the net wide" approach to data collection, it may well have been exposed to more biomedical content "by chance" and thus be better at reasoning about concepts that Phi has perhaps never seen before.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>I do want to call out again that Phi is approximately half the size of the next biggest model in our benchmark, so its performance is quite impressive in that light.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Further Experiments</h3>
<!-- /wp:heading -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Model Size</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>I won't conjecture about whether 3.8B parameters is "too small" to generalise, given the issues mentioned above, but I'd love to see more tests of this in future. Do the larger variants of Phi (trained on the same data but simply with more parameters) suffer from the same issues?</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Model Fine Tuning</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The models that I've been testing are small enough that they can be <a href="https://duarteocarmo.com/blog/fine-tune-llama-2-telegram">fine-tuned on specific problems on a consumer-grade gaming GPU for very little cost</a>. It seems entirely plausible to me that by fine-tuning these models on biomedical text and historical BioASQ training sets their performance could be improved even more significantly. The challenge would be in finding the right mix of data.</p>
<!-- /wp:paragraph -->
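<!-- wp:paragraph -->
<p>As a rough sketch of what that could look like (not something we have run), here is how a LoRA fine-tune of a small model on BioASQ-style question/answer text might be set up with Hugging Face transformers and peft. The model id, hyperparameters and toy dataset are assumptions for illustration.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code># Rough sketch of LoRA fine-tuning on BioASQ-style text; illustrative only.
# Model id, hyperparameters and the toy dataset are assumptions.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Low-rank adapters on the attention projections keep the number of trainable
# parameters small enough for a single consumer-grade GPU.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

# Historical BioASQ training questions flattened into prompt/answer text.
examples = [
    {"text": "Question: What enzyme is inhibited by Opicapone?\n"
             "Answer: catechol-O-methyltransferase"},
]
dataset = Dataset.from_list(examples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bioasq-lora", per_device_train_batch_size=1,
        gradient_accumulation_steps=8, num_train_epochs=1,
        learning_rate=2e-4, logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
</code></pre>
<!-- /wp:code -->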
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Better Prompts</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We did not spend a lot of time attempting to build effective prompts during this experiment, so it may be that some performance was left on the table. Smaller models are often quite fussy about prompts. It might be interesting to use a prompt optimisation framework like <a href="https://dspy-docs.vercel.app/">DSPy</a> to search for better prompts more systematically.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Other Tasks</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>I tried these models on BioASQ, but that is light years away from conclusive evidence for whether or not these new-generation small models can generalise well. It's simply a test of whether they can do biology. It will be very interesting to try other novel tasks and see how well they perform. Watch this space!</p>
<!-- /wp:paragraph -->