No swearing or off-brand comments: AWS touts auto-moderation messaging API

Automate everything – but while human moderation is hard, robot moderation tends not to work


AWS has introduced channel flows to its Chime messaging and videoconferencing API, the idea being to enable automatic moderation of profanity or content that "does not fit" the corporate brand.

Although Amazon Chime has a relatively small share of the crowded videoconferencing market, the Chime SDK is convenient for developers building applications that include videoconferencing or messaging, competing with SDKs and services from the likes of Twilio or Microsoft's Azure Communication Services. In other words, this is aimed mainly at corporate developers building applications or websites that include real-time messaging, audio, or videoconferencing.

The new feature, called messaging channel flows, is for real-time text chat rather than video. It enables developers to create code that intercepts and processes messages before they are delivered; the assumption is that this processing code will run on AWS Lambda, the company's serverless platform.
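For flavour, here is a minimal sketch of what such a Lambda processor might look like in Python. The shape of the incoming event and the clean_content helper are illustrative assumptions based on the feature's description, not code from AWS or the article; the general pattern is that the processor receives the intercepted message, rewrites its content, and hands it back to the Chime SDK messaging service via the channel_flow_callback API so delivery can continue.

```python
import boto3

# Client for the Chime SDK messaging service (not the classic "chime" client)
chime = boto3.client("chime-sdk-messaging")


def clean_content(text):
    # Placeholder for whatever moderation logic the flow applies
    # (banned-word list, PII redaction, and so on)
    return text


def lambda_handler(event, context):
    # A channel flow invokes this processor before the message is delivered.
    # The field names below are assumptions about the event layout.
    message = event["ChannelMessage"]

    # Hand the (possibly rewritten) message back so delivery can proceed
    chime.channel_flow_callback(
        CallbackId=event["CallbackId"],
        ChannelArn=message["ChannelArn"],
        DeleteResource=False,  # set True to drop the message entirely
        ChannelMessage={
            "MessageId": message["MessageId"],
            "Content": clean_content(message.get("Content", "")),
        },
    )
    return {"statusCode": 200}
```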

A post by Chime software engineer Manasi Surve explains the thinking behind the feature in more detail. It is all about moderation, and Surve describes how to "configure a channel flow that removes profanity and certain personally identifiable information (PII) such as a social security number."

She further explains that corporations need to prevent accidental sharing of sensitive information, and that social applications need to "enforce community guidelines" as well as avoid "content shared by users that does not fit their brand." A previous approach to the same problem worked only after the message had been posted – too late in many scenarios.

It is telling that Surve observes that "human moderation requires significant human effort and does not scale."

Automating everything is a defining characteristic of today's cloud giants, even though automated moderation has not always been successful.

Surve said: "Amazon Comprehend helps remove many of the challenges," this service being for natural language processing and having the ability, when suitably trained, to detect "key phrases, entities and sentiment" to automate further actions.

The simple example presented by Surve does not use Comprehend for profanity but "simply… a banned word list," though she adds that "you can also use Comprehend for profanity, but you will need to train your own model." Comprehend is used for detecting a social security number.
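A sketch of that combination might look as follows, assuming a hand-rolled banned-word list and Comprehend's pre-trained PII detector for the social security number; the word list, redaction format, and moderate function are illustrative, not taken from the post.

```python
import boto3

comprehend = boto3.client("comprehend")

BANNED_WORDS = {"darn", "heck"}  # illustrative list, not AWS's


def moderate(content):
    # Crude word-list profanity filter, as in the post's simple example
    content = " ".join(
        "***" if word.lower().strip(".,!?") in BANNED_WORDS else word
        for word in content.split()
    )

    # Comprehend's pre-trained PII detector flags SSNs with character offsets;
    # redact from the end so earlier offsets stay valid
    pii = comprehend.detect_pii_entities(Text=content, LanguageCode="en")
    for entity in sorted(pii["Entities"], key=lambda e: e["BeginOffset"], reverse=True):
        if entity["Type"] == "SSN":
            content = (content[:entity["BeginOffset"]]
                       + "[REDACTED]"
                       + content[entity["EndOffset"]:])
    return content


print(moderate("Heck, my SSN is 123-45-6789."))
# expected: "***, my SSN is [REDACTED]."
```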

Users are adept at getting around automated filters, and we suspect that training Comprehend to sanitise every kind of profanity or off-brand message a user could devise will be challenging.

There are other possible use cases for channel flows – for example, automatically looking up a support article in order to show the user a link, sending an alert, or analysing sentiment – though in these cases it may not matter so much whether the processing takes place before or after a message is delivered to others in the same channel. ®
