Our humble attempt at “how much data do you need to fine-tune”

In this article, the authors share findings from their experiments with the OpenAI fine-tuning API, focused on two use cases: reliable output formatting and custom tone. They found that fine-tuning with roughly 100 examples produced significant improvements on both tasks, and that the fine-tuned GPT-3.5 models responded faster than the base model. The article also weighs the cost and latency trade-offs of fine-tuning, and the authors acknowledge that many open questions remain for further research. Overall, their results point to fine-tuning as a practical way to get better performance on narrowly scoped tasks.
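For context, a minimal sketch of what a fine-tuning run of this size looks like with the OpenAI Python SDK; the file name, example data, and prompts below are illustrative placeholders, not from the article:

```python
# Sketch of a ~100-example fine-tuning run (OpenAI Python SDK v1).
# "train.jsonl" and the prompt contents are hypothetical, not from the article.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each line of the training file is one chat example in the required format.
examples = [
    {"messages": [
        {"role": "system", "content": "Reply only with valid JSON."},
        {"role": "user", "content": "Summarize: the meeting moved to 3pm."},
        {"role": "assistant",
         "content": json.dumps({"summary": "Meeting moved to 3pm."})},
    ]},
    # ... roughly 100 such examples in total
]
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the training file, then start a fine-tuning job on gpt-3.5-turbo.
training_file = client.files.create(
    file=open("train.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-3.5-turbo"
)
print(job.id, job.status)
```

Once the job completes, the resulting model ID can be passed as `model` to the regular chat completions endpoint.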

https://barryzhang.substack.com/p/our-humble-attempt-at-fine-tuning