---
date: 2022-12-06T10:15:08.312352
description: I'm sure everyone's had enough of ChatGPT hot takes by now. Here's a more balanced view from an NLP specialist
title: Some Nuanced Thoughts on ChatGPT
type: posts
url: /posts/2022/12/06/some-nuanced-thoughts-on-chatgpt1670321708
---
ChatGPT is a really impressive tech demo and it shows us the power of large language models, but it's important to remember that it's a machine learning model and, like any AI, it's only as good as the data it's trained on. That means it's prone to making errors, and humans still need to validate the answers it produces. I fully expect that any executives wringing their hands with glee about "cutting resource" and making redundancies are in for a real shock when they realise they still need those people to supervise the model and verify its outputs. So maybe our relationship with coding changes, and the quality and speed with which we can build systems increases. However, would you ask GPT to "generate the control code for a pacemaker" and trust that device to help your own Grandma, or would you prefer a team of medical systems engineers with 20+ years of experience to review that code first?
Secondly, the company may be called OpenAI but GPT-3 is not open (sure, they released their scientific papers, but the trained model is locked away behind a paywall and you'd need ££££ to train your own from scratch by reproducing the paper). I expect some competition between OpenAI, Google, Meta, Amazon et al., but ultimately if your entire business model and IP is GPT + some postprocessing, then (i) you are at the beck and call of whatever pricing strategies the companies in this space set, and (ii) your business has no moat. By all means use these models, but make sure you have something defensible and unique in there, and a backup plan for changing provider too. Incidentally, given that the interface here is chat-based, I suspect vendor lock-in will be less of a thing - just send your prompts to a different endpoint!
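That "just send your prompts to a different endpoint" point can be sketched in a few lines of Python. This is a minimal, hypothetical illustration - the provider names, endpoint URLs, and `send` callables are stand-ins, not any vendor's real API - but it shows why a plain-text prompt interface makes switching cheap: the only provider-specific part is the transport.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChatProvider:
    """A vendor behind a chat-style text-in, text-out interface."""
    name: str
    endpoint: str                 # hypothetical URL; swap per vendor
    send: Callable[[str], str]    # wraps the vendor's HTTP call + auth

def ask(provider: ChatProvider, prompt: str) -> str:
    # The prompt is plain text, so nothing here is vendor-specific:
    # switching providers means constructing a different ChatProvider.
    return provider.send(prompt)

# Stub "vendors" standing in for real HTTP calls:
vendor_a = ChatProvider("vendor-a", "https://api.vendor-a.example/chat",
                        lambda p: f"[vendor-a] {p}")
vendor_b = ChatProvider("vendor-b", "https://api.vendor-b.example/chat",
                        lambda p: f"[vendor-b] {p}")

print(ask(vendor_a, "hello"))
print(ask(vendor_b, "hello"))
```

The application code (`ask` and everything above it) never changes when you swap vendors - only the `ChatProvider` you construct does, which is the "backup plan for changing provider" in practice.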
So you might be thinking, "Oh well, you're an NLP specialist, you're bound to be against this tech - after all, you're out of a job." Well, no, not at all - I'm really pleased to see this progress. It's the sort of tech I dreamed of as a kid, and the amazing thing is that the #transformer models it's built on didn't even exist 7 years ago when I started my #PhD. There are still plenty of unsolved challenges that will keep me occupied (some of which I've just described), and I'm looking forward to getting stuck in!
Also, I even used ChatGPT to generate parts of this post - can you spot them?