Vogon Today

Selected News from the Galaxy

StartMag

What does it mean that the new ChatGpt can think?

OpenAI has released a new set of "thinking" AI models. Despite the speed that characterizes the sector, they are somewhat slower, but better at reasoning. Fears about this new technology, however, have not gone away. All the details.

The software house led by Sam Altman is slowing down. It is a counterintuitive announcement in the world of artificial intelligence (AI), yet it seems the time has come to take longer in order to make models more thoughtful, capable of answering increasingly difficult questions.

Meanwhile, OpenAI is in talks to raise $6.5 billion from investors in a new funding round that would value it at $150 billion. This would be its largest fundraising since Microsoft's $10 billion investment in January 2023, and could also include financing from Apple and Nvidia.

O1-PREVIEW, THE NEW CHATGPT THAT THINKS

Last week the company released its much-discussed and much-feared Strawberry, its reasoning-focused AI model. Officially named OpenAI o1, it is a new series of AI models that the company claims are capable of "thinking". In practice, the model has been taught to think before it speaks. These models are "designed to spend more time thinking before responding" and are able to "reason" about more complex scientific, coding and mathematical tasks and problems than previous models.

This is because, the company explained, the models were trained to take longer to solve problems before responding, "just like a person would", and also "to refine their thinking process, to try different strategies and to recognize their own mistakes".

Meanwhile, the company also published evaluations of the next update of the model, which reportedly performs similarly to PhD students on physics, chemistry and biology tasks.

However, unlike the current version of ChatGpt, the new model does not yet offer some useful functions, such as web browsing and uploading files and images.

WHAT ARE THE NEW MODELS

The first OpenAI o1 models are already available in preview to ChatGpt Plus and Team users, both in ChatGpt and through the company's API (application programming interface). The company has also announced OpenAI o1-mini, a smaller and cheaper version of the new model, designed to help developers with coding tasks.
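For developers, access through the API looks like any other chat-completion call, just with a different model name. The sketch below is a minimal, hedged example using OpenAI's official Python SDK; the model identifiers "o1-preview" and "o1-mini" are those named in the article, the prompt is invented for illustration, and an OPENAI_API_KEY environment variable is required for the request to actually run.

```python
import os

# Model names as reported in the article; "o1-mini" is the smaller,
# cheaper variant aimed at coding tasks.
MODEL = "o1-preview"

def build_request(prompt: str) -> dict:
    """Assemble a chat-completion payload for the reasoning model."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("How many 'r's are in the word 'strawberry'?")

# Only attempt the network call if a key is configured; otherwise the
# script just builds the payload, so it stays runnable offline.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # third-party SDK: pip install openai
    client = OpenAI()
    response = client.chat.completions.create(**payload)
    print(response.choices[0].message.content)
```

Swapping MODEL to "o1-mini" is the only change needed to target the cheaper variant.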

From this week, ChatGpt Enterprise and Edu users will also have access to the models; finally, it will be the turn of free ChatGpt users, with o1-mini.

OpenAI also announced that the weekly message limit will be 30 for o1-preview and 50 for o1-mini.

WHAT DOES OPENAI DO TO ENSURE SAFE AI?

To address the concerns surrounding the Strawberry project, the company said it has developed new safety training measures for o1's reasoning capabilities, to ensure the models follow its safety and alignment guidelines. For example, the new model scored higher than Gpt-4o on one of OpenAI's "hardest jailbreak tests".

Furthermore, according to the press release, as part of its commitment to AI safety OpenAI has recently formalized agreements with the AI Safety Institutes of the United States and the United Kingdom, which it has begun to operationalize by granting them early access to a research version of the models for evaluation before and after their public release.

THE SCALE THAT MEASURES AI'S PROGRESS

But concerns about AI getting out of hand have not been entirely allayed. As Quartz recalls, in July OpenAI shared with employees a five-level evaluation system it developed to track progress toward artificial general intelligence.

The levels range from the conversational AI available today (i.e. chatbots) to AI that could one day do the work of an entire organization. While OpenAI executives believe the startup's technology is at the first level, a spokesperson told Bloomberg that the company is close to the second, described by OpenAI as "Reasoners": AI that can perform basic problem-solving at a level supposedly equivalent to a human with a PhD but without access to tools.

However, last week, on Oprah Winfrey's TV show, Altman, who says he speaks with someone from the government almost every day, appealed to the executive branch to start conducting safety tests on artificial intelligence, as it does with airplanes or new drugs. A position that contrasts, however, with the CEO's decision to dissolve the "Superalignment" team, which was working precisely on the problem of the existential dangers of AI.

WHAT AI LACKS TO SURPASS HUMANS

AI analysis specialist Eitan Michael Azoff has no doubt that humans can design superior artificial intelligence. According to his latest book, Towards Human-Level Artificial Intelligence: How Neuroscience can Inform the Pursuit of Artificial General Intelligence, this will become possible as soon as the "neural code" is deciphered, i.e. "the way in which the human brain encodes sensory information and how it moves information within the brain to perform cognitive tasks, such as thinking, learning, problem solving, internal visualization, and internal dialogue."

For Azoff, this is the qualitative leap that will allow consciousness to be emulated in computers: "Once we have deciphered the neural code, we will design faster and superior brains, with greater capacity, speed and supporting technology that will surpass the human brain. We will do this by first modeling visual processing, which will allow us to emulate visual thinking."

Finally, however, the expert issues a warning: "Until we have more confidence in the machines we build, we will have to ensure, first, that humans have exclusive control of the off switch and, second, that we build AI systems with behavioral safety rules".


This is a machine translation from Italian language of a post published on Start Magazine at the URL https://www.startmag.it/innovazione/cosa-significa-che-nuovo-chatgpt-openai-riesce-pensare/ on Mon, 16 Sep 2024 14:04:22 +0000.