Slack tweaks its principles in response to user outrage at AI slurping

Several people are typing. And something might be learning...

Salesforce division Slack has responded to criticism from users outraged that its privacy principles allowed the messaging service to slurp customer data for AI training unless specifically told not to, claiming the data never leaves the platform and isn't used to train "third party" models.

The app maker said its ML models were "platform level" for things like channel and emoji recommendations and search results, and it has now updated the principles "to better explain the relationship between customer data and generative AI in Slack."

It said it wanted to clarify that:

Slack has industry-standard platform-level machine learning models to make the product experience better for customers, like channel and emoji recommendations and search results. These models do not access original message content in DMs, private channels, or public channels to make these suggestions. And we do not build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data.

Slack uses generative AI in its Slack AI product offering, leveraging third-party LLMs. No customer data is used to train third-party LLMs. Slack AI uses off-the-shelf LLMs where the models don't retain customer data. Additionally, because Slack AI hosts these models on its own AWS infrastructure, customer data never leaves Slack's trust boundary, and the providers of the LLM never have any access to the customer data.

The privacy principles were overhauled in 2023 and contained the text: "To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content and files) submitted to Slack."

Yes, that's correct – the principles that got customers so upset allowed Slack to analyze messages in order to train its models. In fact, pretty much anything dropped into Slack channels could be used. The implications of this are far-reaching, and users for whom the penny dropped were vocal in their criticism.

For its part, Slack has insisted that data will not leak across workspaces, although it did admit that its global models used customer data. Messages within the workspace, however, were presumably fair game.

The principles have since been tweaked slightly, and now read: "To develop non-generative AI/ML models for features such as emoji and channel recommendations, our systems analyze Customer Data."

A Slack spokesperson told The Register: "Note that we have not changed our policies or practices – this is simply an update to the language to make it more clear."

The slurping is also on by default, something that could raise an eyebrow with regulators. To turn it off, Slack requires the workspace owner to email its customer experience team requesting an opt-out, and gives no indication of how long that request will take to process for customers who do not want their data used in the training of Slack's global models.

Opting out means a customer still enjoys the benefits of the globally trained models, just without their own data being fed into them.

The Register asked Slack why it did not choose an opt-in model and will update this piece should we receive an explanation.

Slack says it uses the data to better parse queries, help with autocomplete, and come up with emoji suggestions.

According to the company's privacy principles, "These types of thoughtful personalizations and improvements are only possible if we study and understand how our users interact with Slack."

Over on Threads, Aaron Maurer, who according to his LinkedIn profile works on machine learning and AI at Slack, insisted that the org's "policy, as we have published numerous places, is we do not train LLMs on customer data." However, in principle and according to Slack's Ts&Cs, it can if it wants to.

Matthew Hodgson, CEO at Element, told The Reg that he found it "utterly mind blowing" that Slack was "proposing training AI on private customer data."

"It's bad enough that cloud vendors like Slack and Teams have access to your unencrypted data in the first place, but to then feed it into an opaque and unpredictable LLM model is terrifying."

For context, Slack is not the only service to use customer data for model training. Reddit getting friendly with OpenAI and adding its forum posts to ChatGPT is another example, although customers paying a subscription to use Slack would be forgiven for being a little surprised to find their data being used as global training fodder unless they opt out.

Slack's change happened in 2023, and the furor highlights the need for users to check what their data is being used for as AI hype continues to surge through the tech industry. ®
