AI Next: LLMs to Be Smarter, Not Bigger + AI Artwork

 

[AI artwork images 1 and 2]

Ilya Sutskever, co-founder of OpenAI, believes LLMs have plateaued

OpenAI's co-founder reckons AI training has hit a wall, forcing AI labs to train their models smarter, not just bigger

Ilya Sutskever, co-founder of OpenAI, thinks existing approaches to scaling up large language models have plateaued. For significant future progress, AI labs will need to train smarter, not just bigger, and LLMs will need to think a little bit longer.

Speaking to Reuters, Sutskever explained that the pre-training phase of scaling up large language models, such as those behind ChatGPT, is reaching its limits. Pre-training is the initial phase in which the model processes huge quantities of unlabeled data to learn language patterns and structures.
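For readers who haven't met the term, here is a toy sketch of the next-token-prediction objective that pre-training optimizes. Everything in it (the random "corpus", the tiny embedding-plus-linear model, the sizes) is an illustrative placeholder, not OpenAI's actual setup:

```python
# Toy sketch of next-token-prediction pre-training (illustrative only).
import torch
import torch.nn as nn

# Stand-in "corpus": a stream of token ids. A real run uses trillions of tokens.
vocab_size, context, batch = 100, 8, 4
data = torch.randint(0, vocab_size, (1000,))

# Tiny stand-in for an LLM: token embedding followed by a linear output head.
model = nn.Sequential(nn.Embedding(vocab_size, 32), nn.Linear(32, vocab_size))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Sample random windows; the target is the same window shifted by one token,
    # so the model learns to predict each next token from the ones before it.
    ix = torch.randint(0, len(data) - context - 1, (batch,))
    x = torch.stack([data[i:i + context] for i in ix])
    y = torch.stack([data[i + 1:i + context + 1] for i in ix])
    logits = model(x)  # shape: (batch, context, vocab_size)
    loss = loss_fn(logits.reshape(-1, vocab_size), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Scaling this loop up has meant more data and more parameters; Sutskever's point is that those two knobs alone no longer guarantee a better model.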

Until recently, adding scale, in other words increasing the amount of data available for training, was enough to produce a more powerful and capable model. That is no longer the case; instead, exactly what you train the model on, and how, matters more.
