Sam Altman Calls For Name Change For ChatGPT Following New Model Launch

After launching its latest model, GPT-4o Mini, OpenAI CEO Sam Altman acknowledged the need for a “naming scheme revamp” for ChatGPT and its various versions. On July 18, OpenAI introduced the new model, described as “our most cost-efficient small model.”

Altman shared a post on his X account (formerly Twitter), stating, “15 cents per million input tokens, 60 cents per million output tokens, 82% MMLU, and fast.” He emphasized, “Most importantly, we think people will really, really like using the new model.”
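
To put those per-token prices in concrete terms, here is a minimal sketch of the arithmetic at the quoted rates; the request and response sizes are hypothetical, chosen only for illustration:

```python
# Illustrative cost calculation at the quoted GPT-4o Mini rates:
# $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_RATE = 0.15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.60 / 1_000_000  # dollars per output token

input_tokens = 2_000    # hypothetical prompt size
output_tokens = 500     # hypothetical response size

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"Estimated cost: ${cost:.6f}")  # -> Estimated cost: $0.000600
```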

While the majority of users expressed appreciation for the product, one user humorously suggested that the names of the ChatGPT models, which have proliferated as OpenAI has released new versions, needed a change. Responding to Mr. Altman’s post, Shea commented, “You guys need a naming scheme revamp so bad.” Acknowledging the suggestion, the entrepreneur replied, “lol yes we do.”

One of OpenAI’s most efficient AI models, GPT-4o Mini is compact and offers low latency (the time it takes to generate a response). OpenAI says the API currently supports text and vision (image processing), with support for text, image, video, and audio inputs and outputs to follow. “The model features a context window of 128K tokens, supports up to 16K output tokens per request, and has knowledge up to October 2023. Enhanced by the improved tokenizer shared with GPT-4o, managing non-English text is now even more cost-effective,” the blog post highlighted.
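
For readers who want to try the model themselves, here is a minimal sketch using OpenAI’s official Python SDK. The model identifier "gpt-4o-mini", the prompt, and the token limit are assumptions for illustration, and an OPENAI_API_KEY environment variable is required:

```python
# Minimal sketch: calling GPT-4o Mini via OpenAI's Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model identifier (assumed for illustration)
    max_tokens=1000,      # well under the 16K output-token limit cited above
    messages=[
        {"role": "user", "content": "Summarize today's AI news in one sentence."},
    ],
)

print(response.choices[0].message.content)
```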

OpenAI stated that according to its Preparedness Framework, it employed both automated and human evaluations for safety. To identify potential risks, the company also subjected the AI model to evaluation by 70 external experts from various disciplines.
