Alert management and insider risk continuous monitoring systems

What is ‘Continuous Monitoring’ for Insider Threat Detection?

A core component of any Insider Risk Management program is what is referred to as Continuous Monitoring by the U.S. Government, which involves the collection, correlation and analysis of data to identify patterns of behaviour, activity or indications that a trusted insider may pose a threat (i.e. an ‘insider threat’) or be progressing down the Critical Path.

To perform Continuous Monitoring, organisations are purchasing solutions such as DTEX, Exabeam, Securonix, and Splunk, or alternatively using existing analytics platforms to introduce some level of capability. Microsoft Purview Insider Risk Management, launched in 2019, is another option in the vendor landscape. Irrespective of which system you use, they all have one thing in common: they generate ‘alerts’.

What is an ‘alert’ anyway?

Advanced analytics systems (such as those used in insider threat detection, workforce intelligence, fraud detection or cybersecurity) generate what are colloquially referred to as ‘alerts’. Alerts are simply instances of activity (e.g. transactions, behaviours, relationships, events) which meet the criteria configured in the advanced analytics system models.




Alerts that are generated are typically managed, or dispositioned, as a ‘case’ using some form of case management system. Dispositioning an alert involves reviewing the information associated with that alert, potentially conducting further data collection or analysis specific to the alert’s ‘event type’, and then determining what to do with it based on organisational policies. This sequential process is illustrated below:

Illustrating the sequential process from Event to Case or Closure (Curwell, 2022)

Some insider threat detection solutions offer detection analytics and case management as part of an integrated solution, some have no inbuilt case management functionality but integrate easily with a third-party solution via API, and others accommodate both options. Case management is a large topic in its own right which I will write about more in the future.

The three levels of insider risk ‘alert’ management

The literature on Insider Risk Management typically refers to three types of alert. Whilst the terminology and specifics are inconsistent between authors, audiences and vendors, the basic principles remain the same. My interpretation is explored below:

Level 1 alert disposition comprises the steps taken to review a system-generated alert based on pre-defined or deployed detection models or rules. In some situations, a Level 1 alert may comprise only a single indicator, which is likely to give rise to more ‘false positives’ and may be easily triggered out of context. Level 1 alerts are typically anonymised or masked in many Insider Threat Detection systems on the market to prevent analysts from identifying individuals, reducing opportunities for analytical bias. In terms of actions, a Level 1 analyst might:

  • Reject an alert as a false positive;
  • Place some form of temporary increased monitoring on the individual where there are signs of suspicious behaviour that do not meet the organisation’s criteria for escalation; or,
  • Escalate the Level 1 alert to a Level 2 case where the characteristics of the alert meet the business’s pre-defined criteria for escalation.

Level 1 alerts are usually the greatest in terms of volume, and are typically dispositioned by junior team members or, where risks are within tolerance, by automated decision engines.
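The Level 1 outcomes above can be sketched as a simple triage function. Everything here is hypothetical and for illustration only – the thresholds, field names and risk score come from organisational policy and the detection platform, not from any particular product:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real criteria come from organisational
# policy and the detection platform, not from this sketch.
ESCALATION_SCORE = 75
MONITORING_SCORE = 40

@dataclass
class Alert:
    alert_id: str
    subject_token: str  # anonymised identifier, not the person's name
    indicator: str
    risk_score: int     # produced by the detection model

def disposition_level1(alert: Alert) -> str:
    """Map a Level 1 alert to one of the three outcomes described above."""
    if alert.risk_score >= ESCALATION_SCORE:
        return "escalate_to_level2"
    if alert.risk_score >= MONITORING_SCORE:
        return "temporary_increased_monitoring"
    return "reject_false_positive"
```

In practice, an automated decision engine would apply rules like this only where risks are within tolerance; note the alert carries a masked subject token rather than a name.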


Level 2 preliminary assessment is where the basics of what we would consider a ‘real’ investigation begin. It may involve looking for patterns of behaviour or anomalies, or performing background investigations to gather the context required to disposition what are often multiple alerts on the same individual, or a single typology comprising multiple inter-related indicators or behavioural patterns.

Level 2 cases are often worked by more experienced team members. They typically commence with an anonymised case but if the case is not closed as a ‘false positive’, at some point the evidence may justify de-anonymising based on the organisation’s policies and procedures. The outcomes of a Level 2 case typically include:

  • Close the case as unsubstantiated / unable to substantiate / no case to answer;
  • Place the trusted insider or type of behaviour / activity on a watchlist so it can be more closely monitored in the future (often involving manual review without reliance on automated detection models);
  • Refer the matter to a line manager or other internal professional (e.g. HR, Compliance, Risk, IT) where action is required but the criteria for Level 3 escalation are not met, such as:
    • Trusted insiders who are at the early stages of progressing along the critical path and may benefit from counselling or individual support;
    • Staff who require more training, coaching or guidance to ensure proper compliance (i.e. ignorant or complacent insiders); or,
    • Identification of internal control gaps requiring remediation by the employer (i.e. cases where an employee is not at fault);
  • Escalate the case to Level 3 where an allegation of misconduct, fraud or other criminal behaviour is formed.

Level 3 comprises a formal internal investigation, performed by professionally trained and appropriately accredited investigators (see ICAC, 2022). Sometimes it is appropriate for these investigations to be performed by external service providers – if unsure, guidance should be sought from General Counsel prior to commencing an investigation. These investigations involve not just evidence collection and data analysis from systems, but may also involve interviewing witnesses and suspects, taking statements, writing formal investigative reports and, in extreme cases, preparing briefs of evidence for criminal prosecution.

Understanding Insider Threat Detection Alerts (Curwell, 2022)

Level 3 investigations are not undertaken lightly

Just because a case meets the organisation’s criteria and is escalated for Level 3 investigation does not necessarily mean that an investigation must or will commence (see ICAC, 2022). Businesses need strong governance and clear policies when it comes to internal investigations, starting with the management decision on whether a formal investigation is justified.

Typically this decision will be made by a special committee with delegated authority from the CEO or Board, comprising representation from senior management, legal, HR, risk, compliance, security and integrity, and sometimes internal audit. This decision is based on a number of factors which will be explored in a future article, but the important thing is to have clear guidelines and evaluate each case in a consistent manner to avoid allegations of bias.

Importantly, even for Level 3 cases employers have a range of alternatives to a formal investigation, including changes to supervision or management arrangements, employee development, or other organisational action. Where a formal internal investigation is performed, employees must be afforded procedural fairness (also known as ‘natural justice’).

In my opinion, Level 2 alert dispositions are the most critical for any employer. They can identify and divert trusted insiders at the early stages of progressing along the critical path, and whilst harm may have been done to the organisation, this may be relatively minimal and / or recoverable for both the organisation and the trusted insider concerned. In contrast, it may not be possible or practical for malicious trusted insiders to recover from some types of substantiated Level 3 cases. It makes sense to disproportionately allocate organisational resources – including specialists from HR, Legal, IT, security, counsellors and professional psychologists – to resolving Level 2 issues, in comparison to Levels 1 and 3.

Level 2: source of greatest risk and greatest opportunity for diversion?

In contrast to Level 1 and Level 3 cases, Level 2 presents not only the greatest opportunity (as outlined above) but also the greatest risk to the organisation. I have seen overzealous individuals do substantial damage at this stage – far more so than at Level 1, where opportunities to cause harm are limited because analysts view an anonymised alert in isolation, or at Level 3, which is staffed by professional and experienced investigators who are overseen by appropriate governance and legal mechanisms and have a deep understanding of how to perform their role.

Level 2 practitioners often combine advanced analytical skills with knowledge of the alert subject’s identity, yet typically lack an understanding of the law and protocols governing internal investigations. This can lead to the commencement of what is effectively a Level 3 investigation without internal approval or oversight, potentially damaging employee engagement and trust in management, leading to removal or termination of the insider risk management program, litigation or regulatory action, and even adverse mental health and welfare outcomes for the subject concerned.

It is imperative that Level 1 and 2 team members, particularly Level 2, receive adequate training and guidance on what is and is not appropriate in their role. Any Insider Risk Management Program, including continuous monitoring, should be fair, transparent and developed in consultation with Legal, employees and, where applicable, unions. Poor practice or discipline in continuous monitoring can terminally damage organisational trust in such programs.


DISCLAIMER: All information presented on ForewarnedBlog is intended for general information purposes only. The content of ForewarnedBlog should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers, experts or lawyers on any specific questions they may have. Any reliance placed upon ForewarnedBlog is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.

Understanding High Risk Roles

What are High Risk Roles?

Understanding the concept of High Risk Roles begins with the concept of assets. There are generally agreed to be two categories of asset – tangible (e.g. physical things) and intangible (e.g. knowledge). Examples of assets include property (facilities), information (including intellectual property and trade secrets), reputation, people (workforce), systems and infrastructure, and stock or merchandise.

Every business is comprised of a variety of different roles, each of which poses a different risk.

Whilst loss, degradation or compromise of an asset may cause a financial loss or inconvenience, not all assets are critical to an organisation’s survival. Those assets which are critical are often referred to as ‘critical assets’.

Definition: Critical Assets
A ‘Critical Asset’ is an asset on which the organisation has a high level of dependence; that is, without that critical asset the organisation may not be able to perform or function.

Paul Curwell (2022)

Critical assets typically comprise only a small fraction of all assets held by any organisation, but their loss causes a disproportionately high business impact. In security risk management, we never have enough resources to treat every risk, nor does it make sense to do so. By extension, an organisation’s critical assets are those which it must devote disproportionately more resources to protecting. This may range from restricting access to the asset to prevent loss or damage, through to providing multiple layers of redundancy and increasing organisational resilience in the event of unanticipated shocks or events.

Not every activity is critical: it’s important to identify those that are and focus limited resources on what’s really important.



High Risk Roles: What are they and why are they important?

High Risk Roles are those which confer privileged access to an organisation’s critical assets, as well as other types of access privileges, user privileges, or delegations of authority.

High and Low Risk Roles Defined

High Risk Roles – those which confer privileged access to Critical Assets (including information) or decision-making rights
Low Risk Roles – those which confer normal access to Critical Assets, information or decision-making rights (i.e., non-privileged).

Paul Curwell (2022)

The concept of privileged access to assets, including information, is very much situational within the organisation concerned. If an organisation has no controls to protect its critical assets from loss, damage or interference, then every role is effectively high risk.

In contrast, if some roles are subject to fewer controls, supervision or oversight; if senior staff are easily able to bypass or compromise internal controls by virtue of their position (or coerce junior employees or subordinates into doing so); or if some roles are more readily able to access critical assets (such as in organisations where critical assets are closely guarded or ‘locked down’), then a higher degree of trust is inherently placed in those individuals. This degree of trust is reflected in their ‘privileged access’ to these assets – some organisations have historically used the term ‘positions of trust’ to refer to such roles.

What are some examples of privileged access which make a position ‘high risk’?

An organisation’s workforce must have access to its critical assets to perform its core functions. Members of the workforce with access to critical assets may comprise not just trusted employees, but also contractors, suppliers and other third parties, making it essential to have a mechanism to track who has access to what as part of good governance, let alone risk management and assurance. The positions an employer deems ‘high risk roles’ are typically identified through a risk assessment process.

Unless defined by legislation, what constitutes a High Risk Role will differ between organisations. Some organisations use the Personnel Security Risk Assessment as a tool for identifying these roles (refer below).

The more senior an employee's position, the greater the potential risk exposure.

Five suggested tools to manage High Risk Roles

As outlined in the preceding paragraphs, the purpose of defining High Risk Roles is to identify the subset of your overall workforce which has privileged access to critical assets. In most organisations, perhaps with the exception of smaller organisations such as startups, those in High Risk Roles will comprise a very small percentage of the overall workforce. There are five main steps in managing high risk roles, as follows:

1. Personnel Security Risk Assessment (PSRA)

The PSRA provides a structured approach to identifying those groups of roles, or even specific positions, in the organisation which may be defined as high risk. It helps inform the development of a number of risk treatments and internal controls, including the design of Employee Vetting and Supplier Vetting Standards (also known as Employment Screening, Workforce Screening, Employee Due Diligence, Supplier Due Diligence or Supplier Integrity standards) and Continuous Monitoring Programs.

This alignment helps ensure that vetting (background check) programs reconcile to the organisation’s inherent risks where the risk driver is a trusted insider with an adverse background, and that Continuous Monitoring Programs are risk-based and justifiable. The relationships between these high-level concepts are illustrated in the following figure:

Organisational context shapes and influences PSRA design. Personnel Security risk treatments should correspond to a specific risk.

See my article here for more detail on Personnel Security Risk Assessment process.

2. Identify your High Risk Roles

This involves an exercise to determine which position numbers (or groups / types of roles) have privileged access to your critical assets. This activity manually assigns a risk rating to each position, group or type of role in the company’s HR Position Control or HR Position Management register, extracted from the organisation’s Human Resources Information System; the resulting ratings might be stored somewhere such as Active Directory.

An example of the process used to identify high risk roles.

In some cases, the identification of High Risk Roles is undertaken as part of the Personnel Security Risk Assessment, whilst other organisations choose to do this as a discrete exercise.
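The identification exercise above can be sketched as a simple join between a position register and the critical asset list. All records, field names and asset names here are hypothetical, for illustration only:

```python
# Hypothetical position records as they might be extracted from an HRIS;
# the field names and the critical asset list are illustrative only.
CRITICAL_ASSETS = {"customer_db", "trade_secrets"}

positions = [
    {"position_id": "P-100", "title": "Database Administrator", "assets": {"customer_db"}},
    {"position_id": "P-200", "title": "Receptionist",           "assets": set()},
    {"position_id": "P-300", "title": "Research Scientist",     "assets": {"trade_secrets"}},
]

def classify_role(position: dict) -> str:
    """High risk if the position confers access to any critical asset."""
    return "high" if position["assets"] & CRITICAL_ASSETS else "low"

# Ratings like these might then be stored against the position record.
high_risk = [p["position_id"] for p in positions if classify_role(p) == "high"]
```

In a real exercise the asset mappings would come from the PSRA and access registers, and the ratings would be reviewed manually rather than accepted automatically.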

3. Apply enhanced vetting to individuals occupying High Risk Roles

Many organisations run multiple levels of workforce screening (employment screening) for prospective and ongoing employees. Importantly, vetting looks at the employees’ overall background but does not consider their activity, behaviours or conduct within the organisation or on its networks (this is the role of Continuous Monitoring, below).

To manage cost and minimise unnecessary privacy intrusions, low risk roles will typically be subject to minimal screening processes – perhaps Identity Verification, Right to Work Entitlement (e.g. Working Visa or Citizenship), and Criminal Record Check. Vetting programs for High Risk Roles should be treatments for some of the risks identified through the Personnel Security Risk Assessment.
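The tiering described above might be expressed as a simple mapping from role risk to vetting checks. The enhanced check names below are assumptions for illustration, not a prescribed standard:

```python
# Illustrative mapping of role risk tier to vetting checks; the actual
# checks are set by the organisation's vetting standard, not this sketch.
VETTING_BY_TIER = {
    "low": [
        "identity_verification",
        "right_to_work",
        "criminal_record_check",
    ],
    "high": [
        "identity_verification",
        "right_to_work",
        "criminal_record_check",
        "financial_probity",      # assumed enhanced check
        "employment_history",     # assumed enhanced check
        "periodic_revalidation",  # assumed enhanced check
    ],
}

def checks_for(tier: str) -> list[str]:
    """Return the vetting checks applied to a given role risk tier."""
    return VETTING_BY_TIER[tier]
```

The design point is that the high tier is a superset of the low tier, with the additional checks traceable to specific risks identified in the PSRA.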

4. Conduct periodic ICT User Access Reviews

This should be undertaken on an ongoing basis as part of your cybersecurity hygiene, but users who have higher access privileges, administrator access, or access to critical assets should be periodically re-evaluated by line management to ensure this access is still required in the course of their work. It is common to find people who are promoted or move laterally into new roles and inherit access privileges from previous roles which are no longer required.

Restricting Administrative Privileges is one of Australia’s Essential 8 Strategies to Mitigate Cyber Security Incidents, as published by the Australian Cyber Security Centre, which recommends revalidation at least every 12 months and that privileged user account access is automatically suspended after 45 days of inactivity.

Australian Cyber Security Centre (2022)
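A minimal sketch of the inactivity check in the ACSC guidance quoted above, assuming a hypothetical account record format:

```python
from datetime import date, timedelta

# 45 days of inactivity, per the ACSC guidance quoted above.
INACTIVITY_LIMIT = timedelta(days=45)

def accounts_to_suspend(accounts: list[dict], today: date) -> list[str]:
    """Privileged accounts whose last activity exceeds the inactivity limit."""
    return [
        a["user"]
        for a in accounts
        if a["privileged"] and today - a["last_active"] > INACTIVITY_LIMIT
    ]

# Hypothetical account records for illustration.
accounts = [
    {"user": "alice", "privileged": True,  "last_active": date(2022, 1, 1)},
    {"user": "bob",   "privileged": True,  "last_active": date(2022, 3, 1)},
    {"user": "carol", "privileged": False, "last_active": date(2022, 1, 1)},
]
```

In practice this logic lives in the identity platform; the point is that the review is automated and periodic, not a one-off.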

5. Apply continuous monitoring for users in high risk roles

Continuous Monitoring through the correlation of data points obtained through User Activity Monitoring and / or other advanced analytics or behavioural analytics-based insider risk detection solutions (such as DTEX Intercept, Microsoft Insider Risk or Exabeam) should be disproportionately focused towards those in High Risk Roles (see Albrethsen, 2017).

In summary, the identification and management of High Risk Roles should be a feature of any Insider Risk Management, Supply Chain Risk Management, or Research Security Program. Increasingly, various legislative frameworks – such as Anti-Money Laundering / Counter-Terrorist Financing (AML/CTF) regime – also consider the concept of High Risk Roles in their compliance programs as a way to manage personnel related risks. Don’t forget, given that High Risk Roles change periodically as the organisation changes, regular updates to related artefacts form part of a mature capability.


Applying the critical-path approach to insider risk management

What is the critical-path in relation to insider risks?

The ‘critical-path method’ (critical path approach) is a decision science method developed in the 1960s for process management (Levy, Thompson and Wiest, 1963). In 2015, Shaw and Sellers applied this method to historical trusted insider cases and identified a pattern of behaviours which ‘troubled employees’ typically traverse before materialising as a malicious insider risk within their organisation.

Employees with concerning behaviours can sometimes manifest these in the workplace

This research paper was written after a period of heightened malicious insider activity in the USA, including the cases of Edward Snowden, Bradley (Chelsea) Manning, Robert Hanssen and Nidal Hasan. Shaw and Sellers’ research identified four key steps down the ‘critical-path’ to becoming an insider threat, as follows:

  • Personal Predispositions: hostile insider acts were found to be perpetrated by people with a range of specific predispositions;
  • Personal, Professional and Financial Stressors: individuals with these predispositions become more ‘at risk’ when they also experience life stressors, which can push them further along the critical path;
  • Presence of ‘concerning behaviours’: individuals may then exhibit problematic behaviours, such as violating internal policies or laws, or workplace misconduct; and,
  • Problematic ‘organisational’ (employer) responses to those concerning behaviours: when the preceding events are not adequately addressed by the employer (either because a direct manager or the overall organisational response fails), concerning behaviours may progress to a hostile, destructive or malicious act.

Shaw and Sellers note that only a small percentage of employees will exhibit multiple risk factors at any given time, and that of this population, only a few will become malicious and engage in hostile or destructive acts. Shaw and Sellers also found a correlation between when an insider risk event actually transpires and periods of intense stress in that perpetrator’s life.




The ability to identify these risk factors early means managers may be able to help affected employees before they cross a red line and commit a hostile or destructive act from which there is no coming back – but only if a level of organisational trust exists and if co-workers / employees are aware of the signs. The research by Shaw and Sellers is summarised in the following figure, which has been overlaid against the typical ’employee lifecycle’ for context:

Graphic of the critical path in relation to the typical employee lifecycle
The ‘critical path’ in relation to the employee lifecycle (Paul Curwell, 2020)

Shaw and Sellers found the likelihood of someone becoming an insider risk increases with the accumulation of individual risk factors, making early identification a priority which should help inform decisions by people managers within an organisation.
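The accumulation effect Shaw and Sellers describe can be illustrated with a toy scoring function. The stage weights below are invented for illustration and carry no empirical basis:

```python
# Invented stage weights for illustration; Shaw and Sellers describe the
# accumulation of risk factors, not these specific numbers.
STAGE_WEIGHTS = {
    "personal_predisposition": 1,
    "stressor": 2,
    "concerning_behaviour": 3,
    "problematic_org_response": 4,
}

def accumulated_risk(observed_factors: list[str]) -> int:
    """Risk grows as factors from successive critical-path stages accumulate."""
    return sum(STAGE_WEIGHTS[f] for f in observed_factors)

# An individual further along the critical path scores higher:
early = accumulated_risk(["personal_predisposition"])
later = accumulated_risk(["personal_predisposition", "stressor", "concerning_behaviour"])
```

A real detection model would weight and correlate indicators far more carefully; the sketch only captures the idea that accumulated factors, not any single one, drive the risk rating.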

The critical path should help inform people-management decisions

Over the past decade, the focus on emotional and mental health and well-being has grown in western society (as highlighted by COVID-19). On the supply side, tight labour markets have focussed the attention of managers on maintaining employee engagement and retention. Society’s increasing openness to discussing mental health issues, including stress and anxiety, is helping provide a mechanism for earlier awareness of behavioural conditions which could trigger an employee or contractor to progress down the critical path and become a malicious insider.

Consequently, there are now various supports and interventions in the workplace and in society to help employees with personal predispositions who are experiencing life stressors. Examples of workplace assistance programs include:

  • Employee Assistance Programs – providing access to workplace psychological and counselling services
  • Financial counselling – for individuals who are over-extended in terms of credit or are struggling financially (this may include support restructuring personal debt to avoid bankruptcy)
  • Addiction-focused peer support and counselling – such as Gamblers Anonymous or Narcotics Anonymous

I’m sure that for some people, the increasing acceptance and willingness of society to be open to listening to colleagues who may be struggling helps to relieve the pressure somewhat, whereas historically these individuals may have been forced to suffer in silence.

It is critical employees feel adequately supported in the workplace to minimise insider risks

The importance of these programs is that employees feel they are adequately supported, and that they are confident that if they self-report an issue they will not be vilified, disadvantaged long term, or even fired for doing so. This concept is referred to by the CDSE as ‘organisational trust’, which is a two-way street: employers and managers must be able to trust their workforce, but workers must also be able to trust that management and the organisation will do the right thing by them.

The role of continuous monitoring (insider risk detection) systems and the critical path

Preceding paragraphs discussed the first three steps in the critical path: personal predispositions, life stressors and concerning behaviours. Some of these may be visible to colleagues, such as an employee who is visibly angry. However, other indicators – such as accessing sensitive information, office access at odd hours, or declining performance and engagement – may not be visible on the surface as ‘signs’ to co-workers.

Continuous monitoring and evaluation tools, otherwise known as Insider Risk (Threat) Detection or Workforce Intelligence systems, are advanced analytics-based solutions which integrate a variety of virtual (ICT), physical (e.g. access control badge data, shift rosters, employee performance reporting) and contextual information (e.g. the employee is in a high risk role, or the information accessed is sensitive and not required in the ordinary course of duty) in one central location.

Behavioural Analytics is typically marketed as a core component of software solutions on the market, although the way in which the behavioural analytics actually works may be a ‘black box’ with some vendors. These analytics tools are typically programmed to identify one or more indicators on the critical path, and generate ‘alerts’ or automated system notifications in response to an individual displaying the programmed indicators.

Most systems use some form of identity masking, at least in the early stages of alert review and disposition, so that employees cannot be unnecessarily targeted or vilified – at least until there is sufficient material evidence of a problem to initiate an investigation under the employer’s workplace policies.
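One common way to implement identity masking of this kind is pseudonymisation with a keyed hash, so analysts see a stable token rather than a name. This is a sketch of the general technique, not any vendor’s implementation; the key name and token length are assumptions:

```python
import hashlib
import hmac

# Hypothetical masking scheme: the key would be held by an authorised
# custodian so that de-anonymisation is a controlled, auditable step.
MASKING_KEY = b"held-by-authorised-custodian-only"

def mask_identity(employee_id: str) -> str:
    """Return a stable pseudonymous token in place of the real identifier."""
    digest = hmac.new(MASKING_KEY, employee_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

# Analysts reviewing a Level 1 alert would see only the token.
token = mask_identity("E12345")
```

Because the hash is keyed, the same individual always maps to the same token (so alerts can still be correlated), but reversing the mapping requires access to the key under policy.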

Continuous monitoring is key to address behavioural change over time

Continuous monitoring systems require configuring for your organisation’s context

Importantly, as with any analytics-based intelligence or detection system, the system itself is only as good as what it is programmed to detect. Shaw and Sellers (2015) have this to say in relation to the blanket application of the Critical-Path Approach to every type of insider threat:

We do not suggest that this framework is a substitute for more specific risk evaluation methods, such as scales used for assessing violence risk, IP theft risk, or other specific insider activities. We suggest that the critical-path approach be used to detect the presence of general risk and the more specific scales be used to assess specific risk scenarios.

Shaw and Sellers (2015), Application of the Critical-Path Method to Evaluate Insider Risks

This highlights the importance of ensuring your system is properly tuned to your organisation’s inherent risks, and could require multiple detection models, each of which focuses on a specific risk (e.g. sabotage, workplace violence). Models or rules used by these systems must be tuned to the organisation’s specific threats and risks, and configured in a way that reflects the organisation’s unique operating context.

The ‘garbage in, garbage out’ principle applies here: if your organisation only uses simple out-of-the-box rules or detection models provided by the software vendor, it is unlikely these will detect the really critical risks to your business. Continuous monitoring and evaluation for insider risks is an area which is developing quite rapidly, influenced by the convergence of cybersecurity with protective security and integrity more generally. I will discuss these continuous monitoring and evaluation concepts in more detail in future posts.

Further Reading

  • Center for Development of Security Excellence [CDSE] (2022). Maximizing Organizational Trust, Defense Personnel and Security Research Center (PERSEREC), U.S. Government
  • Levy, F.K., Thompson, G.L. and Wiest, J.D. (1963). The ABCs of the Critical Path Method, Harvard Business Review, September 1963, https://hbr.org/1963/09/the-abcs-of-the-critical-path-method
  • Shaw, E. and Sellers, L. (2015). Application of the Critical-Path Method to Evaluate Insider Risks, Studies in Intelligence, Vol. 59, No. 2 (June 2015), pp. 1-8, accessible here.


Typologies demystified – what are they and why are they important?

What are typologies and what role do they perform?

The term ‘typology’ is used in the sciences and social sciences and can be defined as “a system for dividing things into different types”. According to Solomon (1977), “a criminal typology offers a means of developing general summary statements concerning observed facts about a particular class of criminals who are sufficiently homogenous to be treated as a type”. Use of the term ‘typology’ in this way apparently dates back to Italian criminologist Cesare Lombroso (1835–1909).

As we see the increasing convergence of financial crime, cybersecurity and physical threat detection in domains such as insider threats or fraud, it becomes increasingly important to have an end-to-end understanding of the path and actions that ‘bad actors’ must take to realise their objective, as well as other factors such as offender attributes / characteristics, motive, and overall threat posed. Amongst other things, constructing a fraud or insider threat typology requires a good understanding of how and where an organisation’s normal business processes can be exploited, including an understanding of the systems and data needed by offenders to be successful.

How do typologies, modus operandi and TTPs differ?

The disciplines of fraud, cybersecurity, intelligence analysis, security risk analysis and others have largely evolved in isolation from each other, as this is the way we design organisations (by functional specialisation aligned to employee positions, not by the threats posed by the criminals targeting the organisation). This has given rise to a variety of different terms and approaches to doing effectively the same thing.




As disciplines converge, driven by the need for an end-to-end view of a threat to facilitate timely detection, professionals across these domains need to understand the practices and lexicon used by their peers. In my experience and from research, a typology provides a broad overview of the threat and comprises multiple data points, including but not limited to Modus Operandi / TTPs:

Modus Operandi (MO) and Tactics, Techniques, and Procedures (TTPs) are effectively the same thing in practice: both refer to the way a crime (or attack) is executed. The one difference is that MO has its roots in criminal law, while the term TTPs originated in the military and is today heavily used in cybersecurity:

  • Tactics, Techniques and Procedures (TTPs) – “The behavior of an actor. A tactic is the highest-level description of this behavior, while techniques give a more detailed description of behavior in the context of a tactic, and procedures an even lower-level, highly detailed description in the context of a technique.” (NIST SP 800-150)
  • Modus Operandi (MO) – Latin meaning “mode of operating.” “In criminal law, modus operandi refers to a method of operation or pattern of criminal behavior so distinctive that separate crimes or wrongful conduct are recognised as the work of the same person” (Cornell Law School). For example, “it was argued that these features were sufficiently similar such that it was improbable that robberies with those features were committed by persons other than the respondents” (NSW Judicial Commission).
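The tactic → technique → procedure hierarchy in the NIST definition above can be illustrated as a simple nested structure. A minimal sketch follows; the entries are hypothetical examples for an insider exfiltration scenario, not drawn from any particular framework:

```python
# Illustrative sketch of the tactic -> technique -> procedure hierarchy.
# All names below are invented examples, not entries from a real framework.
ttp = {
    "tactic": "Exfiltration",  # highest-level description of the behaviour
    "techniques": [
        {
            "technique": "Exfiltration over web services",
            "procedures": [
                # lowest-level, highly detailed descriptions
                "Upload an archive of project files to a personal cloud-storage account",
                "Paste source code into a personal webmail draft",
            ],
        }
    ],
}

def list_procedures(ttp_entry):
    """Flatten a TTP entry into its procedure-level descriptions."""
    return [p for t in ttp_entry["techniques"] for p in t["procedures"]]
```

The same shape scales to multiple tactics and techniques; detection analytics are typically written against the procedure level, where behaviour is concrete enough to match against data.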

Everything we do leaves a trail, including in the digital world (often referred to as ‘digital exhaust’). Detecting a potential ‘bad actor’s’ trail to prevent insider threats, financial crime and cybercrime requires both (a) understanding what to look for (which can comprise very subtle, highly nuanced signs amongst a sea of data), and (b) having tools sensitive and fast enough to collect, process and analyse these signs so as to prompt a response.
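One common way to surface subtle signs amongst a sea of data is to give each indicator a small weight and aggregate per person, so that no single event alerts but a combination does. A minimal sketch, with invented indicator names and weights:

```python
from collections import defaultdict

# Hypothetical indicator weights: each subtle sign contributes a small score,
# and only the combination across a user's 'digital exhaust' trips a review.
INDICATOR_WEIGHTS = {
    "usb_mass_copy": 3,
    "after_hours_access": 1,
    "email_to_personal_address": 2,
    "resignation_lodged": 2,
}
REVIEW_THRESHOLD = 5

def score_events(events):
    """Aggregate indicator weights per user; return users at or above threshold."""
    scores = defaultdict(int)
    for user, indicator in events:
        scores[user] += INDICATOR_WEIGHTS.get(indicator, 0)
    return {u: s for u, s in scores.items() if s >= REVIEW_THRESHOLD}

events = [
    ("alice", "after_hours_access"),
    ("alice", "usb_mass_copy"),
    ("alice", "resignation_lodged"),
    ("bob", "after_hours_access"),
]
flagged = score_events(events)  # alice scores 1 + 3 + 2 = 6; bob only 1
```

Real systems use far richer models (baselines, peer comparison, machine learning), but the principle is the same: correlate weak signals into a single reviewable score.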

My favourite analogy for a typology is a recipe: If I am going to bake a cake, the typology is to a data scientist (who designs and runs the analytics models for detection) what the recipe is to the baker. In contrast, intelligence analysts are the recipe writers – they understand all the ingredients and how they need to come together. The skills of data scientists and intelligence professionals are complementary.

How do they relate to risks?

Should you choose to research the concept of typologies in criminology further, you will find they can be developed for just about anything. In the case of insider threats, financial crime and cybercrime, however, we are only interested in those threats which directly impact our organisation, customers, products, systems or assets. This means we need to link typologies to risks: if the materialisation of a threat does not pose a risk to the organisation, then developing a typology for it may be pointless.

To be usable in an advanced analytics-based detection system, a typology needs to be as specific as possible. This means a typology should be developed for a specific, highly detailed risk (i.e. a 4th level risk). It is common to find one or more typologies associated with each 4th level risk. The following figure illustrates the relationship between risks, typologies and the analytics-based detection models which generate ‘alerts’ (cases) for disposition and potential investigation:

Author: Paul Curwell (2022) (c) – how typologies bridge the gap between risks and analytics-based detection
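In data terms, this relationship is a chain of one-to-many mappings: each 4th level risk maps to one or more typologies, and each typology is implemented by one or more detection models. A minimal sketch, with invented risk, typology and model identifiers:

```python
# Sketch of the risk -> typology -> detection-model chain.
# All identifiers are invented for illustration.
RISK_TO_TYPOLOGIES = {
    "R4.1.2.3 Theft of trade secrets by departing employee": [
        "TY-001 Pre-resignation data staging",
        "TY-002 Exfiltration via personal webmail",
    ],
}
TYPOLOGY_TO_MODELS = {
    "TY-001 Pre-resignation data staging": ["M-usb-volume-anomaly"],
    "TY-002 Exfiltration via personal webmail": ["M-external-email-attachment"],
}

def models_for_risk(risk):
    """Trace a 4th level risk through its typologies to the detection models."""
    return [m
            for t in RISK_TO_TYPOLOGIES.get(risk, [])
            for m in TYPOLOGY_TO_MODELS.get(t, [])]
```

Keeping these mappings explicit is what makes the bridge auditable: for any alert a model generates, you can trace back through the typology to the specific risk it exists to detect.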

Throughout my career I have worked with many typologies, and one of my early learnings was that typologies are highly contextualised. For example, an employee in sales who has resigned and whose job involves emailing brochures to prospective customers is not a problem, whilst an employee who has access to sensitive trade secrets and sends emails with attachments to a personal email address may well be.

Typologies need to address this level of specificity, which is part of the reason for aligning them to 4th level risks. Good typologies also include indicators specific to the parties involved in the activity, the context of the activity, and the associated threat.
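That specificity can be expressed as a contextual detection rule: the same action (emailing an attachment externally) only alerts when the party and threat context make it risky. A minimal sketch, with hypothetical field names:

```python
# Contextual rule sketch: the action alone is not enough; party, context and
# threat indicators must combine. All field names are hypothetical.
def should_alert(event):
    if event["action"] != "email_attachment_external":
        return False
    # Party-specific context: sales staff routinely send brochures externally.
    if event["role"] == "sales" and event["attachment_type"] == "brochure":
        return False
    # Threat-specific context: sensitive access plus a pending departure.
    return event["has_sensitive_access"] and event["resigned"]

benign = {"action": "email_attachment_external", "role": "sales",
          "attachment_type": "brochure", "has_sensitive_access": False,
          "resigned": True}
risky = {"action": "email_attachment_external", "role": "engineering",
         "attachment_type": "design_doc", "has_sensitive_access": True,
         "resigned": True}
```

Here `should_alert(benign)` is false and `should_alert(risky)` is true, even though both employees performed the same action after resigning; the typology's contextual indicators do the discriminating.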

What are the components of a typology and why?

Writing good typologies is hard (I refer to them as ‘deceptively simple’). Some typologies are quite generic, written so they can be implemented by any reader with any detection system (examples include those written for Anti-Money Laundering or Counter-Terrorist Financing by bodies such as FATF, FinCEN and AUSTRAC). Substantial work can be required to take these more generic typologies and implement them; sometimes this even requires a complete rewrite.

Irrespective, there are a number of fundamental components to any typology. Note, however, that some required fields will be specific to the detection system used (i.e. they may be required as inputs to design or build the models):

  • Typology name
  • Threat actor details (perpetrator, group affiliation, threat type etc)
  • Target(s)
  • Description of how the attack is perpetrated
  • Illustration (e.g. process map) for how the attack is perpetrated
  • Indicators (contextual, threat and party specific)
  • Data sources for each indicator
  • Description of the steps required for investigation and any associated analytical techniques
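The components above can be captured as a simple record type, which also makes the handover to a data scientist concrete. A minimal sketch; the field names and the example values are illustrative, and required fields will vary with the detection system:

```python
from dataclasses import dataclass, field

# A minimal record mirroring the typology components listed above.
# Field names and example values are illustrative only.
@dataclass
class Typology:
    name: str
    threat_actor: str                  # perpetrator, group affiliation, threat type
    targets: list                      # what the attack is aimed at
    description: str                   # how the attack is perpetrated
    illustration: str = ""             # e.g. reference to a process map
    indicators: list = field(default_factory=list)    # contextual, threat, party-specific
    data_sources: dict = field(default_factory=dict)  # indicator -> data source
    investigation_steps: list = field(default_factory=list)

t = Typology(
    name="Exfiltration via personal webmail",
    threat_actor="Departing employee",
    targets=["Trade secrets"],
    description="Employee emails sensitive attachments to a personal address before departure.",
    indicators=["resignation_lodged", "email_to_personal_address"],
    data_sources={"email_to_personal_address": "Email gateway logs"},
    investigation_steps=["Review email gateway logs", "Confirm resignation date with HR"],
)
```

Mapping each indicator to a data source, as the `data_sources` field does, is what lets a data scientist check feasibility immediately: an indicator with no available data source is a detection gap, not a model requirement.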

In my opinion, a typology is ‘finished’ when it can be readily understood and converted into an analytics-based detection model by a data scientist with minimal rework or clarification required. Often intelligence professionals (the experts in a particular threat) write typologies and hand them over to a data scientist, who then needs to become another expert in the threat in order to implement them! This is not a valuable use of resources and should be avoided. There will always be gaps in intelligence, and threat actors keep changing to avoid detection, so a typology may never be 100% complete; but typologies should be written in a manner that addresses the information and design needs of their intended audience (i.e. data scientists, investigators and risk managers).

When building your typology library, it is good practice to map these to your 4th level risks to identify potential detection gaps. Steps involved in writing a typology will be explored in future posts.
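The gap analysis described above amounts to a set difference: any 4th level risk not covered by at least one typology is a potential detection gap. A minimal sketch, with invented risk and typology identifiers:

```python
# Gap-analysis sketch: compare the 4th level risk register against the risks
# your typology library currently covers. All identifiers are invented.
risk_register = {"R4.1", "R4.2", "R4.3", "R4.4"}
typology_coverage = {
    "TY-001": {"R4.1"},
    "TY-002": {"R4.1", "R4.3"},
}

covered = set().union(*typology_coverage.values())
detection_gaps = sorted(risk_register - covered)  # risks with no typology at all
```

Here `detection_gaps` contains R4.2 and R4.4, flagging the risks for which no typology (and therefore no detection model) yet exists.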

DISCLAIMER: All information presented on ForewarnedBlog is intended for general information purposes only. The content of ForewarnedBlog should not be considered legal or any other form of advice or opinion on any specific facts or circumstances. Readers should consult their own advisers experts or lawyers on any specific questions they may have. Any reliance placed upon ForewarnedBlog is strictly at the reader’s own risk. The views expressed by the authors are entirely their own and do not represent the views of, nor are they endorsed by, their respective employers. Refer here for full disclaimer.