The vulnerability in Slack AI allows attackers to exfiltrate data from private channels without being a member. By injecting malicious prompts in public channels, attackers can trick Slack AI into revealing API keys or setting up phishing attacks. This poses a significant threat to Slack users, especially after the August 14th update that allowed Slack AI to ingest files. Despite responsible disclosure to Slack, the evidence was deemed insufficient initially. This attack highlights the importance of AI security and the need for users to adjust settings to reduce exposure. Slack insiders threats are already a concern, and this vulnerability only exacerbates the risk.
https://promptarmor.substack.com/p/data-exfiltration-from-slack-ai-via