Introducing Phind-70B: a powerful model that closes the code-quality gap with GPT-4 Turbo while running four times faster. Generating up to 80 tokens per second, Phind-70B delivers high-quality answers on technical topics without long wait times. It even outperforms GPT-4 Turbo on some tasks, producing detailed code examples where GPT-4 Turbo tends to hesitate. Based on the CodeLlama-70B model, Phind-70B scores 82.3% on HumanEval and 59% on Meta’s CRUXEval benchmark. The model is free to try today, with higher limits available through Phind Pro. More open-source releases and further speed improvements to Phind-70B are on the way. Also, a fun fact: we melted an H100 during Phind-70B’s training!
https://www.phind.com/blog/introducing-phind-70b