update thumbnails
This commit is contained in: parent 63cd494a6b, commit ae02fc644c
Binary file not shown.
@@ -23,6 +23,8 @@ Rumours that the famed Transformer architecture that powers GPT, Claude and Llam
So what happens next if these rumours are true, and where does that leave the AI bubble? How likely are we to break out of this plateau and reach more intelligent models, and how quickly?
**Update: Overnight, [Ilya Sutskever himself](https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/) added his voice to the chorus of experts claiming that the scaling laws have hit a wall. This is a significant inflection point: Sutskever led much of OpenAI's work on GPT-3 and GPT-4 before leaving to start his own research company earlier this year. He is a firm believer in AGI and superintelligence, so this admission carries a lot of weight.**
## A Brief History of LLMs and How We Got Here
To those unfamiliar with AI and NLP research, ChatGPT might appear to have been an overnight sensation out of left field. However, science isn't a "big bang" that happens in a vacuum. Sudden-seeming breakthroughs are usually the result of many years of incremental research reaching an inflection point. GPT-4o and friends would not be possible without the invention of The Transformer five years prior and a number of smaller advancements leading up to the launch of ChatGPT in 2022.