GPT-3.5 Turbo fine-tuning and API updates

OpenAI has announced the availability of fine-tuning for its GPT-3.5 Turbo model, with fine-tuning for GPT-4 expected later this year. The update lets developers customize the model for their specific use cases and run those custom models at scale. Early tests have shown that a fine-tuned version of GPT-3.5 Turbo can match or even outperform base GPT-4 on certain narrow tasks. OpenAI states that data sent in and out of the fine-tuning API remains the customer's property and is not used by OpenAI or any other organization to train other models.

Fine-tuning can improve steerability, make output formatting more reliable, and give the model a custom tone, allowing businesses to create distinctive user experiences. It also lets them shorten their prompts and thereby reduce costs. Fine-tuned GPT-3.5 Turbo can handle up to 4k tokens, double the capacity of previous fine-tuned models.

To ensure safety, fine-tuning training data passes through OpenAI's moderation system. Pricing for fine-tuning covers both training and usage. OpenAI has also introduced updated GPT-3 models (babbage-002 and davinci-002) as replacements for the original GPT-3 base models.
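The fine-tuning API takes chat-formatted training examples supplied as a JSONL file, one JSON object per line. A minimal sketch of preparing and sanity-checking such a dataset in Python (the helper names and the `train.jsonl` filename are illustrative, not part of any official SDK):

```python
import json

# Roles accepted in chat-formatted fine-tuning examples.
VALID_ROLES = {"system", "user", "assistant"}

def make_example(system, user, assistant):
    """Build one training example in the chat fine-tuning format."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }

def validate_example(example):
    """Lightweight sanity check before uploading the file."""
    messages = example.get("messages", [])
    if not messages:
        return False
    return all(
        m.get("role") in VALID_ROLES and isinstance(m.get("content"), str)
        for m in messages
    )

def write_jsonl(examples, path):
    """Write examples to a JSONL file, one JSON object per line."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")

examples = [
    make_example(
        "You are a support bot that answers in one short sentence.",
        "How do I reset my password?",
        "Use the 'Forgot password' link on the sign-in page.",
    ),
]
assert all(validate_example(ex) for ex in examples)
write_jsonl(examples, "train.jsonl")
```

The resulting file would then be uploaded through the Files API with purpose `fine-tune` and referenced when creating a fine-tuning job for `gpt-3.5-turbo`; moderation screening of the training data, as described above, happens on OpenAI's side.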