Despite the increasing popularity of AI models, little is known about how they are actually used, and understanding real-world usage is crucial for both safety and privacy. Anthropic's Claude models do not train on user conversations by default, so the company built Clio, an automated analysis tool that surfaces insights into real-world language model use while preserving user privacy. Surprising Claude.ai use cases identified by Clio include dream interpretation, disaster preparedness, and soccer match analysis. Clio has also strengthened safety work by identifying coordinated misuse, monitoring for high-stakes events, and reducing both false negatives and false positives in Trust and Safety enforcement. The post additionally discusses ethical considerations, such as the risk of false positives and the protection of user privacy.
https://www.anthropic.com/research/clio
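For intuition, the core pattern behind Clio-style analysis is to summarize conversations, cluster the summaries, and report only aggregates backed by enough distinct users to avoid identifying anyone. The sketch below illustrates that aggregate-then-threshold idea; the TF-IDF embedding, cluster count, and `MIN_UNIQUE_USERS` value are illustrative assumptions, not Clio's actual implementation or parameters.

```python
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

MIN_UNIQUE_USERS = 3  # hypothetical privacy threshold, not Clio's real value

# Toy data: (user_id, one-line conversation summary). In a real pipeline
# the summaries would themselves be model-generated so that raw
# conversation text never reaches human analysts.
records = [
    ("u1", "help interpreting a recurring dream about flying"),
    ("u2", "what does a dream about losing teeth mean"),
    ("u3", "symbolism of dreams about water"),
    ("u4", "build a 72-hour emergency kit for earthquakes"),
    ("u5", "family disaster preparedness checklist"),
    ("u6", "analyze tactics from last night's soccer match"),
]

users, summaries = zip(*records)

# Embed the summaries (TF-IDF stands in for a learned embedding model).
X = TfidfVectorizer().fit_transform(summaries)

# Group similar summaries into topic clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Privacy gate: only surface clusters backed by enough distinct users.
cluster_users = defaultdict(set)
cluster_texts = defaultdict(list)
for user, text, label in zip(users, summaries, labels):
    cluster_users[label].add(user)
    cluster_texts[label].append(text)

for label in sorted(cluster_users):
    members = cluster_users[label]
    if len(members) >= MIN_UNIQUE_USERS:
        print(f"cluster {label} ({len(members)} users): e.g. {cluster_texts[label][0]!r}")
    else:
        print(f"cluster {label}: suppressed (fewer than {MIN_UNIQUE_USERS} users)")
```

The minimum-user gate mirrors k-anonymity-style protections: analysts see broad usage themes (dreams, disaster prep) while clusters too small to anonymize are withheld entirely.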