The author of this post discusses fine-tuning, a technique for adapting a pretrained language model to a specific task, which can rival much larger models like GPT-4 while being cheaper to run. Despite how often fine-tuning comes up, there is little written about how effective it actually is or how difficult it is to do. To investigate, the author ran experiments using Magic: The Gathering drafting as a test task and found that fine-tuned models performed well, even surpassing GPT-4 and achieving human-level performance. However, fine-tuning remains an experimental and costly process. The author highlights the difficulty of prompt engineering, the challenges posed by the sheer size of language models, and the importance of building a good evaluation before running experiments. Overall, fine-tuning can be highly effective, but it requires specialized skills. The author describes their experience with a Magic draft AI, which exhibits promising, humanlike drafting behavior.
https://generallyintelligent.substack.com/p/fine-tuning-mistral-7b-on-magic-the