OpenAI Unveils New API for ChatGPT, Dedicated Capacity for Businesses

OpenAI has introduced an API that enables businesses to incorporate ChatGPT technology into their apps, websites, products, and services.

The model is priced at $0.002 per 1,000 tokens (roughly 750 words) and is already being used by early adopters such as Snap, Quizlet, Instacart, and Shopify. The API can power a range of experiences, including “non-chat” applications.
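The pricing above is easy to turn into a quick budget estimate. The sketch below is illustrative (the helper name is not from OpenAI; only the $0.002-per-1,000-tokens rate comes from the article):

```python
def estimate_cost(tokens, rate_per_1k=0.002):
    """Dollar cost of a request at $0.002 per 1,000 tokens (gpt-3.5-turbo rate)."""
    return tokens / 1000 * rate_per_1k

# At ~750 words per 1,000 tokens, a 1,000-token exchange costs $0.002,
# and even a million tokens of usage comes to about $2.
print(estimate_cost(1000))       # cost of one 1,000-token request
print(estimate_cost(1_000_000))  # cost of a million tokens
```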

The ChatGPT API is powered by the same AI model behind OpenAI’s wildly popular ChatGPT, dubbed “gpt-3.5-turbo.” GPT-3.5 is the most powerful text-generating model OpenAI offers today through its API suite; the “turbo” refers to an optimized, more responsive version of GPT-3.5 that OpenAI’s been quietly testing for ChatGPT.
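To illustrate what calling the ChatGPT API involves: it is a standard HTTPS request to OpenAI's chat completions endpoint, passing the model name and a list of messages. The endpoint URL and payload shape below match OpenAI's documented API; the helper function names and the system prompt are illustrative, not part of the API.

```python
import json
import os
import urllib.request

def build_chat_request(user_message, model="gpt-3.5-turbo"):
    """Assemble the JSON payload for OpenAI's /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

def send_chat_request(payload, api_key):
    """POST the payload to the chat completions endpoint and return the parsed reply."""
    request = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Usage (requires a valid API key and network access):
# reply = send_chat_request(build_chat_request("Hello!"), os.environ["OPENAI_API_KEY"])
# print(reply["choices"][0]["message"]["content"])
```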

Early applications of the API include My AI, Snap’s chatbot for Snapchat+ subscribers, and Quizlet’s Q-Chat virtual tutor feature.

Shopify used the ChatGPT API to build a personalized assistant for shopping recommendations, while Instacart leveraged it to create Ask Instacart, an upcoming tool that will let Instacart customers ask about food and get “shoppable” answers informed by product data from the company’s retail partners.



Businesses with larger budgets can now gain greater control over model performance with the introduction of dedicated capacity plans.

OpenAI’s dedicated capacity also gives customers the option to extend context limits. The context limit is the amount of text the model takes into account before generating more text; a longer context limit lets the model “recall” more of the conversation. While longer context limits won’t solve every bias and toxicity issue, they could make models like gpt-3.5-turbo less prone to confidently fabricating facts.
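The practical consequence of a context limit is that applications must trim older conversation turns once the history no longer fits. A minimal sketch of that pattern follows; note that it estimates tokens from word counts, which is only a rough approximation (real applications would use OpenAI's tokenizer), and the function name is illustrative:

```python
def trim_history(messages, max_tokens=4096, tokens_per_word=1.3):
    """Drop the oldest messages until the estimated token count fits the context limit.

    Token counts are approximated as ~1.3 tokens per word; this is a rough
    heuristic, not real tokenization.
    """
    def estimated_tokens(msgs):
        return sum(int(len(m["content"].split()) * tokens_per_word) for m in msgs)

    trimmed = list(messages)
    # Always keep at least the most recent message.
    while len(trimmed) > 1 and estimated_tokens(trimmed) > max_tokens:
        trimmed.pop(0)  # discard the oldest turn first
    return trimmed
```

A 16k context window simply moves this trimming threshold out by a factor of four, so far more history survives before anything is dropped.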

Customers of dedicated capacity can expect gpt-3.5-turbo models with a 16k context window, meaning they can take into account four times as many tokens as the standard ChatGPT model. This could let someone paste in multiple pages of tax code and get sensible answers from the model, for example. The feature is still in development.




