Cybersecurity has historically been presented to staff as a set of rules to abide by, with failure to comply casting staff as ‘offenders’. This framing also establishes a mindset amongst those responsible for security outcomes that the only way to achieve socio-behavioural security is to keep staff under control, often leading to coercion and punishment-based controls. In many cases staff experience these controls and admonishments as unfair. Therefore, while each instance may appear to security professionals to be an effective step towards enhanced security, from the perspective of staff it represents a small breach of the psychological contract, often worsening psychological safety and resulting in deviant behaviours and avoidance.
The most significant version of this is observable in phishing tests, wherein staff are deceptively exposed to an inert malicious email and are expected to identify the deception and report it in accordance with organisational policies. This has become the prevailing method for assessing staff cyber-security efficacy, normally justified by statistics showing that phishing remains the most common method employed by genuine malicious actors. Staff who report the phish are understood to have complied with the standard and achieved the minimum expected behaviour, whereas those who fail to report, or who fall victim to the deception, have failed and are often required to complete some form of remedial training as a result.

When understood through the lens of psychological response, such a process falls clearly within the scope of a psychological contract breach: deception without a consent framework is exceedingly likely to be interpreted as one. Therefore, whilst staff may appear to comply with organisational phishing tests, the psychological outcomes can include frustration towards the activity itself, including vocal dissent; deviant behaviour, such as consciously clicking links to adversely impact statistics; or a deliberately liberal approach to labelling emails as suspicious. Worse, it can breed avoidance of security practices at the conceptual level. Symptomatically, staff may report that “security is not their job” or that they “aren’t good with technology”, or they may delay or fail to complete annual security training. These and other indicators can be subtle cues of general avoidance. Simply put, someone who psychologically avoids security is more likely to make mistakes, and thus more likely to expose themselves and their employer to security vulnerabilities.
A significant body of research across numerous contexts strongly supports the case that degradation in psychological safety, whether through psychological contract breach or through deformed and unhelpful social constructs, significantly reduces the likelihood that staff will communicate concerns. In the context of security, this means that staff are unlikely to report incidents. Given that every person has the potential to identify anomalous system behaviour, strange phone calls, or even unusual connections through social media, accessing this level of situational awareness could be one of the greatest tactical advantages when responding to threats. It therefore stands to reason that increasing the propensity to report should be one of the most important objectives of cyber-security behavioural change, and that pursuing it without close attention to psychological safety is unlikely to attain optimal results.
The collective impact of psychological safety and the psychological contract cannot be overstated. Organisations that fail to account for even minor breaches and losses of trust can find a rapidly growing number of people who avoid security tasks, act in ways contrary to security outcomes, and demonstrate lost trust through fewer and fewer reports. However, the social impact works in both directions. Organisations that champion psychological safety and operate with careful attention to the psychological contract will initially develop a small number of staff who role-model ideal security behaviours. These individuals have a profound influence on those around them, and through this mechanism it is possible to sow the seeds of an autonomous security culture.
Given the potential for harm if left unaddressed, and the potential benefits for enhanced security if mobilised, Recyber have made these themes foundational to the creation of not only the Republic technology but also all of the Recyber service offerings.