No swearing or off-brand comments: AWS touts auto-moderation messaging API
Automate everything – but while human moderation is hard, robot moderation tends not to work
AWS has introduced channel flows to its Chime messaging and videoconferencing API, the idea being to enable automatic moderation of profanity or content that "does not fit" the corporate brand.
Although Amazon Chime has a relatively small market share in the crowded videoconferencing market, the Chime SDK is convenient for developers building applications that include videoconferencing or messaging, competing with SDKs and services from the likes of Twilio or Microsoft's Azure Communication Services. In other words, this is aimed mainly at corporate developers building applications or websites that include real-time messaging, audio or videoconferencing.
The new feature is for real-time text chat rather than video and is called messaging channel flows. It enables developers to create code that intercepts and processes messages before they are delivered. The assumption is that this processing code will run on AWS Lambda, Amazon's serverless platform.
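In outline, a channel flow processor is just a function that receives the pending message and hands back a possibly modified version before anyone sees it. A minimal sketch of what such a Lambda handler might look like follows; note that the event field names and the `process_message` logic here are illustrative assumptions, not the documented Chime SDK event shape, and a real processor would release the message by calling the service's callback API rather than simply returning it.

```python
def process_message(content: str) -> str:
    """Placeholder moderation pass, run before the message is delivered.

    Here we just mask a single word; real logic would do profanity
    or PII filtering as described in the article.
    """
    return content.replace("secret", "*****")


def lambda_handler(event, context):
    # Assumed event shape: the pending message arrives under
    # "ChannelMessage" with its text in "Content". Check the Chime
    # SDK messaging documentation for the real structure.
    message = event["ChannelMessage"]
    cleaned = process_message(message["Content"])

    # In a real processor you would now call the Chime SDK messaging
    # callback API, quoting the callback ID from the event, to release
    # (or suppress) the message. Returning the modified copy stands in
    # for that step in this sketch.
    return {"ChannelMessage": {**message, "Content": cleaned}}
```

The key design point is that the handler sits in the delivery path, so whatever it returns (or releases via the callback) is what other channel members actually receive.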
A post by Chime software engineer Manasi Surve explains the thinking behind the feature in more detail. It is all about moderation, and Surve describes how to "configure a channel flow that removes profanity and certain personally identifiable information (PII) such as a social security number."
She further explains that corporations need to prevent accidental sharing of sensitive information, and that social applications need to "enforce community guidelines" as well as to avoid "content shared by users that does not fit their brand." A previous approach to the same problem worked only after the message had been posted – too late in many scenarios.
- AWS admits cloud ain't always the answer, intros on-prem vid-analysing box
- AWS Lambda was already serverless, now it can be x86-less too
- AWS US East region endures eight-hour wobble thanks to 'Stuck IO' in Elastic Block Store
- AWS announces new region in the Land of the Long White Cloud – New Zealand
It is telling that Surve observes that "human moderation requires significant human effort and does not scale."
"Automate everything" is a defining characteristic of today's cloud giants, even though automated moderation has not always been successful.
"Amazon Comprehend helps remove many of the challenges," said Surve, referring to AWS's natural language processing service, which, when suitably trained, can detect "key phrases, entities and sentiment" and so trigger further automated actions.
The simple example presented by Surve does not use Comprehend for profanity but "simply… a banned word list," though she adds that "you can also use Comprehend for profanity, but you will need to train your own model." Comprehend is, however, used to detect the social security number.
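The shape of that demo – mask anything on a banned word list, then redact anything that looks like PII – can be sketched locally. In this hypothetical version a regular expression stands in for Comprehend's SSN entity detection, and the word list is invented for illustration:

```python
import re

# Illustrative banned word list; the article's demo uses "simply... a
# banned word list" rather than a trained profanity model.
BANNED = {"darn", "heck"}

# Stand-in for Amazon Comprehend's PII detection: match the common
# 123-45-6789 social security number format.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def moderate(text: str) -> str:
    """Mask banned words, then redact anything resembling an SSN."""
    tokens = []
    for token in text.split():
        core = token.strip(".,!?")           # keep trailing punctuation
        if core.lower() in BANNED:
            token = token.replace(core, "*" * len(core))
        tokens.append(token)
    masked = " ".join(tokens)
    return SSN_RE.sub("[SSN removed]", masked)
```

For example, `moderate("Darn, my SSN is 123-45-6789!")` masks the banned word and strips the number. A regex is, of course, exactly the kind of brittle filter the article goes on to criticise: it catches one fixed format, where a trained entity detector is meant to catch the variations users actually type.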
Users are adept at getting around automated filters, and we suspect that training Comprehend to sanitise every kind of profanity or off-brand message a user could devise will be challenging.
There are other possible use cases for message flows – for example, looking up a support article automatically in order to show the user a link, sending an alert, or analysing sentiment – though in these cases it may not matter so much whether the processing takes place before or after a message is sent to others in the same channel. ®