On February 20, 2024, an optimization to the user experience introduced a bug in how ChatGPT's models selected the numbers (token IDs) used to generate language. LLMs produce responses by sampling tokens based in part on probabilities; because the bug caused the model to choose slightly wrong numbers, it produced word sequences that made no sense. More technically, inference kernels returned incorrect results on certain GPU configurations. Once the cause was identified, OpenAI rolled out a fix and confirmed the incident was resolved. (A toy sketch of the sampling mechanism follows the incident link below.)
https://status.openai.com/incidents/ssg8fh7sfyz3
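The following is a minimal, purely illustrative sketch of the mechanism the postmortem describes: a model samples token IDs from a probability distribution, and those IDs are mapped back to word pieces. The vocabulary, distributions, and the `corrupted` flag are all hypothetical; this is not OpenAI's tokenizer or inference code, only a demonstration of how "slightly wrong numbers" turn into nonsensical text.

```python
import random

# Hypothetical toy vocabulary: token IDs map to word pieces
# (illustrative only, not an actual production tokenizer).
vocab = {0: "The", 1: " cat", 2: " sat", 3: " on", 4: " the", 5: " mat", 6: "."}

def sample_next_token(probs, corrupted=False):
    """Sample a token ID from a probability distribution over IDs.

    If `corrupted` is True, simulate the kind of numeric bug described in
    the postmortem: the chosen ID is shifted, so it maps to the wrong piece.
    """
    ids = list(probs.keys())
    weights = list(probs.values())
    token_id = random.choices(ids, weights=weights, k=1)[0]
    if corrupted:
        token_id = (token_id + 3) % len(vocab)  # a "slightly wrong number"
    return token_id

# Toy per-step distributions that strongly favor the sensible continuation.
steps = [
    {0: 0.97, 4: 0.03},
    {1: 0.95, 5: 0.05},
    {2: 0.95, 6: 0.05},
    {3: 0.95, 6: 0.05},
    {4: 0.95, 6: 0.05},
    {5: 0.95, 6: 0.05},
    {6: 1.0},
]

normal = "".join(vocab[sample_next_token(p)] for p in steps)
garbled = "".join(vocab[sample_next_token(p, corrupted=True)] for p in steps)
print("normal: ", normal)   # e.g. "The cat sat on the mat."
print("garbled:", garbled)  # same distributions, but wrong IDs yield nonsense
```

The point of the sketch is that the probability distributions themselves can be fine; if the step that converts them into a chosen token ID returns even slightly wrong numbers, the decoded text becomes gibberish, which matches the behavior users reported.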