Has science gone too far, part 97: Boffins craft code to find protesters on social networks, rate them on their violence

Image-recognition system posited as reporting tool


Mining social networks for every scrap of information about our online lives is now common practice for marketers, academics, government agencies, and so on.

Text in tweets, blogs and other posts is valuable because it's searchable, analyzable, and not terribly costly to crawl, fetch or store. But ongoing advances in computer vision are opening up the wealth of information encoded in images.

Earlier this week, researchers from the University of California, Los Angeles, described a way to analyze images to find protesters, characterize their activities, and assess the level of violence depicted.

In a paper titled "Protest Activity Detection and Perceived Violence Estimation from Social Media Images," graduate student Donghyeon Won, assistant professor of public policy Zachary C Steinert-Threlkeld, and assistant professor of communication studies Jungseock Joo explore how imagery can be used to understand protests, because text may not be reliable.

"The important feature of our method is objectivity," said Joo in an email to The Register. "Text can be made up easily. There are lots of fake accounts too. Some people may say a protest is violent; others may say peaceful. But when you actually see a photograph with people shot and bleeding, you know it is violent."

What's novel here, said Joo, is activity recognition from images rather than text. "At the moment, no one else is capable of analyzing visual content in social media to characterize social movements," he said. "I am not aware of any other work using images."

The researchers collected, via keyword searches and the Twitter data stream, 10,000 images likely to be related to protests. They then trained a classification convolutional neural network to separate likely protest images from negative examples, and sent the pictures to Amazon Mechanical Turk workers for annotation.
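
As a rough illustration of that filtering step, here is a minimal sketch of a binary protest/non-protest image classifier that fine-tunes a pretrained network; the backbone, folder layout, and training settings are assumptions made for illustration, not the configuration described in the paper.

```python
# Hedged sketch: fine-tune a pretrained ResNet-50 as a binary
# protest/non-protest image classifier (PyTorch). The data layout
# (data/protest, data/negative) and hyperparameters are illustrative
# assumptions, not the UCLA team's actual setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Folder-per-class layout: data/protest/*.jpg, data/negative/*.jpg
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: protest / not

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```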

[Image: protest imagery]

The Turkers were directed to identify whether an image contained protest activity or protesters, to identify visual attributes in the scene, and to estimate the perceived level of violence and emotional sentiment.
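
In practice, each image would end up with a small annotation record along those lines; the example below shows one plausible shape for such a record, with field names and attribute values invented for illustration rather than taken from the paper.

```python
# Hypothetical per-image annotation record; the field names, attribute
# vocabulary, and score ranges are illustrative assumptions.
annotation = {
    "image_id": "tw_000123",
    "contains_protest": True,                  # protest activity or protesters?
    "visual_attributes": ["sign", "police", "fire"],
    "perceived_violence": 0.72,                # 0 = peaceful, 1 = very violent
    "sentiment": {"angry": 0.8, "fearful": 0.3, "happy": 0.1},
}
```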

Using models built from this data, along with other software such as OpenFace for face recognition and dlib for machine learning, the UCLA boffins designed a protest recognition system. They claim it performs very well at identifying violence in images and less well at identifying emotion, which they suggest may be attributable to inconsistency in the training data generated by Amazon Mechanical Turk workers.
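
To give a sense of how such components might be chained, the sketch below counts faces with dlib's bundled frontal-face detector and applies a hypothetical trained violence regressor to each image; the `violence_regressor.pt` artifact and its 0-to-1 output convention are assumptions, not the authors' actual system.

```python
# Hedged sketch of a per-image scoring pass: detect faces with dlib's
# stock frontal-face detector, then regress a violence score with a
# separately trained model. The model file and its output range are
# illustrative assumptions.
import dlib
import torch
from PIL import Image
from torchvision import transforms

face_detector = dlib.get_frontal_face_detector()
violence_model = torch.load("violence_regressor.pt")  # hypothetical artifact
violence_model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_image(path):
    faces = face_detector(dlib.load_rgb_image(path))   # list of bounding boxes
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        violence = violence_model(image).item()        # assumed scalar in [0, 1]
    return {"num_faces": len(faces), "violence": violence}

print(score_image("protest_photo.jpg"))
```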

Applying their system to five protest events – the Women's March, Black Lives Matter, and protests in South Korea, Hong Kong, and Venezuela – the researchers contend that "protests in Venezuela are more violent and angrier than the other protests" and that "the Women's March is the least violent and angry" of all the protests studied.
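
Comparing events that way boils down to aggregating per-image scores by event; a minimal sketch of that step, using made-up numbers rather than the paper's data, might look like this.

```python
# Minimal sketch: average per-image violence scores per protest event.
# The scores are invented placeholders, not results from the study.
import pandas as pd

scores = pd.DataFrame([
    {"event": "Venezuela",     "violence": 0.81},
    {"event": "Venezuela",     "violence": 0.65},
    {"event": "Women's March", "violence": 0.08},
    {"event": "Women's March", "violence": 0.12},
    {"event": "Hong Kong",     "violence": 0.33},
])

print(scores.groupby("event")["violence"].mean().sort_values(ascending=False))
```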

They also note that there were more women at the Women's March, more African Americans at the Black Lives Matter protests, and that the South Korean protests were the best organized, based on the greater presence of large groups in images.

While such conclusions may be intuitive, they're less easily challenged as opinion when derived from accepted algorithms. "Our paper will enable fair and objective reporting of protest events," Joo explained.

The researchers' model isn't perfect. They note that it failed to understand the meaning of symbolic acts like "die-ins" at the Black Lives Matter protest. That is to say, it may treat people pretending to be casualties as actual casualties, which could sway the computed level of violence. But refinement and iteration should be expected.

Who watches the social media watchers

In an email to The Register, Jeff Bigham, associate professor at Carnegie Mellon's Human-Computer Interaction Institute in Pittsburgh, Pennsylvania, said scientists have employed computer vision for activity tracking for a long time.

Such work, he said, invites questions about how it will be used.

"With growing abilities of computer vision and our political climate, many people have begun to worry about how computer vision could be used at scale by, for instance, an authoritarian regime to detect political dissidents," he said. "There was a paper out last week that claimed some accuracy in identifying protesters whose faces are partially obscured."

Without addressing the details of the UCLA research, Bigham said data generated through Amazon Mechanical Turk can present problems. "There's a lot of subtlety in whether what Turkers labeled as violent really was, and whether it's what authoritarians would target," he said.

Steinert-Threlkeld, in an email to The Register, said he and his colleagues approached this project without a specific end user in mind. "You can imagine why governments may be interested in detecting these activities, violent or not," he said. "On the other side, protest organizers could monitor photos to see whether or not a protest is becoming violent."

He suggested protest organizers might use this sort of technology to watch for emerging violence in order to defuse tensions, because violent movements tend to receive less support than non-violent ones.

Asked whether he worried that the development of capable image and text monitoring technology would have a chilling effect on social media participation, he said he was not concerned.

"The data-gathering and analytic capabilities of governments and major businesses far exceeds what we have for this project," said Steinert-Threlkeld. "They already use text to detect protests and predict changes, and I expect they are using images already." ®

