Lessons after a Half-billion GPT Tokens

My startup, Gettruss.io, has been using OpenAI models, primarily GPT-4 and GPT-3.5, for text analysis and extraction in our B2B product. We’ve found that being less specific in prompts often produces better results, which suggests GPT is capable of higher-order reasoning rather than rote pattern matching. We’ve also found that tooling like Langchain and vector embeddings turned out to be unnecessary for our needs. On the downside, GPT struggles to return a blank output when there is genuinely nothing to extract, and its output size is limited. Surprisingly, hallucination is minimal when GPT is asked to extract information from text supplied in the prompt. Overall, GPT-4 is genuinely useful, but GPT-5’s incremental improvements may not revolutionize the field, given exponentially rising costs for limited gains.
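The blank-output problem suggests a common workaround, sketched below under assumptions of my own (the helper, sentinel strings, and field names are hypothetical, not from the post): instruct the model to emit an explicit sentinel such as "not found" instead of leaving a field empty, then normalize those sentinels downstream.

```python
import json

# Hypothetical sentinel values a prompt might ask the model to use
# when a field is absent, since models tend to invent a value rather
# than return an empty string.
BLANK_SENTINELS = {"", "null", "none", "n/a", "not found"}


def parse_extraction(raw: str) -> dict:
    """Parse a model's JSON extraction, mapping sentinel values to None."""
    data = json.loads(raw)
    return {
        key: (None if str(value).strip().lower() in BLANK_SENTINELS else value)
        for key, value in data.items()
    }


# Example with made-up fields: the model found a company name but no VAT ID.
result = parse_extraction('{"company": "Acme Corp", "vat_id": "not found"}')
# result == {"company": "Acme Corp", "vat_id": None}
```

Normalizing on the way out keeps the prompt simple while making "nothing found" unambiguous to downstream code.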

https://kenkantzer.com/lessons-after-a-half-billion-gpt-tokens/