Logging and monitoring can be a form of bullying, and make for lousy infosec

Surveillance is a poor starting point for your security strategy

Comment Many information security practices use surveillance of users' activities. Logging, monitoring, observability – call it what you will, we have built a digital panopticon for our colleagues at work, and it's time to rethink this approach.

The flaws of surveillance-based infosec are already appreciated. The European Court of Justice (ECJ) recently found that mass surveillance of the population was an unjustified intrusion into privacy, even when the goal is to combat serious crime. Why, then, do we consider it reasonable to implement invasive surveillance to address the flawed computer systems we choose to use?

Does watching staff 24x7 really make things more secure?

Excessive monitoring is a form of bullying

No, according to a researcher at a major UK university, who asked not to be identified.

"Surveillance isn't making things more secure. It's reactive, not pro-active, and it's very expensive," the researcher said. "Surveillance is mostly used to find a scapegoat after the fact. It's for reinforcing the existing power structures, not creating systemic change."

The researcher argues that the choice to implement surveillance is more to do with the biases of those responsible for the system design than any objective measure of outcomes. "Fear and control doesn't improve productivity, but it is historically a favoured approach of bullies and authoritarians," the researcher says.

Lilly Ryan, a penetration tester and security specialist, believes organisations default to surveillance, so had little concern about extending it when remote working became more prevalent during the COVID-19 pandemic and business-managed devices became more common in people's private homes.

"When you take a corporate laptop home, it means you have a camera and a microphone in your private space," Ryan said. "I don't think that people realise just how much people are able to look at," she said. "There's the ability to watch virtually everything."

Unjustified surveillance is risky

Organisations that thoughtlessly decide to add surveillance systems may be creating new risks for themselves, not merely addressing existing risks.

"Staff who become aware of just how much monitoring they are under would have every right – and it would be a reasonable deduction – to consider it excessive monitoring, which is a form of bullying," says Dr Rebecca Michalak, Managing Director of PsychSafe, and a consultant on HR compliance and risk management.

"In terms of psychological risk management, excessive monitoring is actually a form of bullying, and bullying is a registered psychosocial hazard that you must prevent under Australian safety legislation," she added. "If you're going in and deliberately implementing a system that conducts excessive monitoring, you are deliberately engaging in bullying."

"Basically the organisation is saying to you, in a roundabout way, 'We don't trust you. We don't trust your capability and we don't trust your motivation, so now we're going to bully you on an ongoing basis.' That's how it actually needs to be framed," Dr Michalak said.

That stance, whether stated explicitly or implicitly, sees some people actively resist being surveilled in order to regain a feeling of autonomy or control. Those who choose to resist can expend considerable effort doing so, and create new security risks of their own in the process.

"As soon as you implement a surveillance system, even at an organisational level, there's always ways that people can circumvent it," says Dr Monique Mann, senior lecturer in criminology at Australia's Deakin University. "That also perhaps creates greater risks to information security in some regards because people are resisting and not using the approved channels."

Another odd aspect of surveillance-based infosec is that it is often considered inevitable or unavoidable, a stance that runs counter to the narrative that the information technology and security industries are built on relentless innovation.

"We're living in an era of technological innovation. What's interesting is that we think about innovation from the perspective of advancing technological capabilities, but we don't think about innovation from the perspective of how we implement this safely into our society," says Dr Zena Assaad, senior research fellow at Australian National University's College of Engineering and Computer Science.

"It's making me think that we're shifting the blame and the onus onto people rather than the system. I don't think that we're tackling the issues in a strategic way."

"If you're actually only collecting the bare minimum information, then there are fewer risks from an information security perspective, rather than just surveilling everything," says Dr Mann.

Do this instead

Nicola Nye, chief operating officer at email provider Fastmail, says the company has tried another approach: engineering its systems to safeguard information by default, even from Fastmail's internal support staff.

"To support our customers, we know that our staff will need to have a look at people's support settings all the time, and we don't want to have to audit all of that activity," Nye said. "So we obfuscate everybody's personal data – their mail, their contacts, their calendar entries – it's all turned into lorem ipsum."

We're shifting the blame onto people rather than the system

Staff are still able to do their job, but without getting access to information they don't need. "We've made it easy for our staff to not get hold of this data, and then they can't accidentally or deliberately leak it," says Nye. "We don't have to surveil people because surveilling people is dumb and we have better things to do with our time."
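The approach Nye describes can be illustrated with a small sketch. This is not Fastmail's actual implementation – the function names, the word list, and the hash-based mapping are all assumptions for illustration – but it shows the general idea: sensitive fields are deterministically replaced with lorem-ipsum text, so support staff see a consistently shaped record without ever seeing the real content.

```python
import hashlib

# A small lorem-ipsum vocabulary to draw replacement words from.
LOREM = ("lorem ipsum dolor sit amet consectetur adipiscing elit "
         "sed do eiusmod tempor incididunt ut labore et dolore").split()

def obfuscate(text: str) -> str:
    """Replace each word with a lorem-ipsum word chosen by hashing it.

    The mapping is deterministic, so the same input always produces the
    same placeholder text, but the original words never appear.
    """
    out = []
    for word in text.split():
        digest = hashlib.sha256(word.encode("utf-8")).digest()
        out.append(LOREM[digest[0] % len(LOREM)])
    return " ".join(out)

def support_view(record: dict, sensitive: set) -> dict:
    """Return a copy of a customer record with sensitive fields masked."""
    return {key: obfuscate(value) if key in sensitive else value
            for key, value in record.items()}
```

A support agent asking for `support_view({"subject": "Salary review", "spam_score": "0.1"}, {"subject"})` would see operational fields like the spam score intact, while the subject line comes back as meaningless lorem ipsum – the data-minimisation point Dr Mann makes above, built into the tooling rather than policed after the fact.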

"It's good for them, and it's good for us," she says.

Fastmail's practices suggest it is possible to design and implement IT systems that are secure and respect the inherent dignity of the humans that need to use them. But doing so will require infosec pros and their managers to stop defaulting to easy options like surveillance and consider alternatives.

"The question is really about how do we use these tools in a reasonable and justifiable manner," PsychSafe's Michalak says. "Have you been transparent with people about what is being done and why it's required, and do the people you want to monitor consider it to be reasonable, or excessive?"

"We need to treat our colleagues as colleagues, not subjects or prisoners," says Lily Ryan. "Human dignity needs to factor more into our decisions." ®
