Network Situational Awareness: Sonification & Visualization in the Cyber Battlespace

This article treats computer networks as a cyber warfighting domain in which the maintenance of situational awareness is impaired by increasing traffic volumes and by the lack of immediate sensory perception. Sonification (the use of non-speech audio for communicating information) is proposed as a viable means of monitoring a network in real time, and a research agenda is set out that employs the sonification of a network’s self-organized criticality within a context-aware affective computing scenario. The article views a computer network as a cyber battlespace with a particular operations spectrum and dynamics. Because increasing network traffic volumes interfere with the ability to present real-time intelligence about a network, suggestions are made for how the context of a network might be used to help construct intelligent information infrastructures. Such a system would use affective computing principles to sonify emergent properties of network traffic and behaviour (such as self-organized criticality) in order to provide effective real-time situational awareness.

Introduction

This article explores some of the issues surrounding the problem of maintaining cyber situational awareness. Situational awareness is a term with its origins in military doctrine, but it has found its way into the mainstream and is especially applicable in the context of maintaining the cyber security of computer networks. Networks are susceptible to a number of threats to their well-being, from traffic congestion to deliberate attacks. In this article we show how the concept of cyber as a warfighting domain has traction, and how applying a military understanding of the domain, and of situational awareness within it, might help in finding new ways to maintain healthy networks. After explaining the underlying concepts of the cyber operations spectrum and the dynamics underpinning it, we show where situational awareness fits into this understanding. Next we explore how the projected growth in network traffic volumes may make maintaining situational awareness increasingly challenging, especially as the cyber domain is intrinsically inaccessible to the sensory perception traditionally needed for situational awareness. The limitations of current approaches to network visualization are touched upon and the possible role of sonification in situational awareness activities is explored. Following this contextualization we offer suggestions for potentially fruitful avenues of investigation that may yield significant benefits in maintaining network situational awareness.

Cyber: The New Battle Space?

There is significant debate in military circles about whether cyber has become the fifth warfighting domain. Traditional doctrine was directed towards operations on land and at sea, and combinations of the two; history is well populated with examples of strategic operations on land supported from the sea and vice versa. In the early 20th century, air was added as a third warfighting domain, with increasing effect as a range of technologies rapidly increased capability.

In the second half of the 20th century, space became the fourth warfighting domain and there is vigorous debate amongst practitioners and theorists about whether the cyber environment constitutes the fifth. There are a number of parallel lines of debate, however the central theme is focused on whether the cyber environment (sometimes known as cyberspace) is a discrete area of operations or whether it is a more pervasive concept that runs through all of the other domains.

Part of the challenge lies in the fact that whilst land, sea, air and space are physically distinct and are defined by similar criteria, cyberspace is defined in a different way, existing on an electronic plane rather than a physical and chemical one. Some would argue that cyberspace is a vein which runs through the other four warfighting domains and exists as a common component rather than as a discrete domain. Given the technology employed in each of these domains, one can easily see how cyber operations can play a significant role in land, sea, air or space warfare.

On the other hand, this distinction depends on the way that we define the various domains. If our definitions are underpinned by a purely physical paradigm, then it is arguable that cyberspace is a very different type of context from the traditional warfighting domains. If, however, our definitions are based on an operational paradigm, then the distinction is less clear. It is possible to conduct entire operations in the cyber environment, made possible by the interconnected nature of the Internet and associated infrastructures. Equally, it is common for joint operations to span multiple domains, including the cyber environment, and the cyber environment is not restricted to military warfighting scenarios.

A good example of a comprehensive cyber campaign occurred in April 2007, when Estonia was subjected to a wide range of concerted cyber attacks across a broad spectrum of government, commercial, industrial and media organizations. This sophisticated campaign effectively crippled a significant proportion of the Estonian national infrastructure while the attacks were taking place. It is interesting that, in the wake of the attack, Estonia has developed one of the most significant cyber defence infrastructures in existence.

Another example occurred a year later, in 2008, during the South Ossetia conflict, where kinetic operations were preceded by a widespread cyber campaign which effectively blinded the defenders in advance of a rapid Russian advance. In this case cyber was used as part of a blended strategy which achieved strategic disruption of Georgian public service infrastructure, thus enabling surprise. There is a range of other examples of cyber being used as a tool to achieve dislocation or disruption at a strategic level, and the list grows steadily as more varied compromises are discovered across government and industrial targets in many countries.

Cyber Operations Spectrum

Though operations in cyberspace are complex, they can be simplified, to some extent, by the cyber operations spectrum. This divides cyber operations into three areas:

  • Defence — Defensive operations take up approximately 80% of cyber activity. This constitutes the work that is (or should be) undertaken by all individuals or organizations. It ranges from simple protection of individual personal equipment to complex security management architectures.
  • Exploitation — Exploitation is covert activity conducted within an adversary’s area of operations. This is generally invisible to the defender (unless the activity is compromised by the defender). Exploitation operations range from preparatory activity conducted to enable future operations to long-running information farming operations designed to generate intelligence over a protracted period of time.
  • Attack — The overt phase when effect is brought to bear on a target. There are a wide range of exploits and strategies associated with this phase. It should be noted that a visible attack may well have been preceded by invisible exploitation operations.

A knowledge of where current operations lie within the cyber spectrum is critical to a clear understanding of the cyber environment. It is also helpful to view the actions of adversaries in this context in order to try to understand the adversarial plan and predict their likely future actions.

Traditional protective strategies were often based on the defence of boundaries and perimeters. Whether defended by technology or, in some cases, complete air gaps, boundary-based defence was initially effective until attackers found ways to achieve a breach, whether by compromising vulnerable technology or by bridging air gaps, as could be seen, for example, in the Stuxnet attack on the Iranian nuclear processing facility (Kerr et al., 2010). This boundary-based model is increasingly seen as flawed due to the enormous complexity and granularity of the cyber environment. Increasingly, defensive architectures are seen as resilient matrices of multiple defensive components. It is no longer credible for organizations to assume that they are completely safe. The sensible security strategy now focuses on raising the bar to reduce the likelihood of a successful attack, while assuming that a proportion of attacks will nevertheless succeed and putting mechanisms in place to identify and manage these events when they occur. Organizations must also ensure that operational architectures are sufficiently resilient to enable them to continue to operate whilst ‘under fire’. This has resulted in a subtle but tangible shift from purely protective postures to proactive intelligence management within organizations.

In many cases, the compromise of technology is achieved indirectly, often by compromising people. A wide and often sophisticated range of social engineering attacks is employed to compromise technology by exploiting traditional human weaknesses, including greed, curiosity, insecurity and ignorance. The dependence of cyberspace on people also extends the scope of compromise from direct attacks on target systems to indirect targeting of social, economic, commercial and financial architectures. The traditional ‘high threat club’ (those organizations that are known to represent high-value targets to attackers) are no longer the only organizations with a requirement for active and dynamic information security infrastructures. Information security is now a critical aspect of corporate governance across the organizational spectrum.

Dynamics of the Cyber Environment

If we assume that warfare is generally a strategic approach by which one or more parties seek to impose their will on another by force, then the cyber environment provides a range of opportunities for attackers and defenders alike. At an operational and tactical level, disruption or dislocation operations can be mounted against a range of kinetic and information based targets. Objectives can range from the destruction of targets to rendering them unusable to an adversary (often through information attacks on the integrity of particular assets), through intelligence gathering, deception and other information operations. At a strategic level, cyber operations provide opportunities to compromise national infrastructures and populations at a systemic level, through attacks on critical national infrastructure targets and services such as financial services, utilities (water, power, waste, etc), telecommunications and emergency response frameworks.

An important driver for the cyber environment is that it effectively becomes an asymmetric enabler. Cyber operations provide a viable attack vector for small nations or influence groups, enabling them to engage directly even the largest power bases (military or otherwise) worldwide. One of the effects of the advent of the cyber environment has been to remove much of what Clausewitz (1873) termed the friction of war. This is amplified by the possibility of rapid tempo changes, where operations can move from slow, covert activity to high-intensity attack activity with little physical impact.

History has shown that an ability to switch tempo in battle has enormous value in its capacity to unhinge adversaries and to compromise their will and ability to fight. This characteristic lies at the heart of the ‘manoeuverist’ approach that underpins much of the warfighting doctrine of the 20th century. Manoeuvre warfare is a potentially complex doctrine built on simple principles which shape the chosen battlefield through knowledge, understanding and agility. The British Army describes the manoeuverist approach as follows:

“This is an indirect approach which emphasizes understanding and targeting the conceptual and moral components of an adversary’s fighting power as well as attacking the physical component. Influencing perceptions and breaking or protecting cohesion and will are essential. The approach involves using and threatening to use force in combinations of violent and non-violent means. It concentrates on seizing the initiative and applying strength against weakness and vulnerability, while protecting the same on our own side. The contemporary Manoeuvrist Approach requires a certain attitude of mind, practical knowledge and a philosophy of command that promotes initiative”

(Ministry of Defence, 2010, Chapter 5).

The cyber environment provides an additional dimension within which agility can be achieved, and initiative seized. It is, perhaps, instructive that the practical application of the manoeuverist approach is broken down into the following components:

  • Understanding the situation — Using information, intelligence and intuition, coupled with a sound understanding of objectives and desired outcomes.
  • Influencing perceptions — Planning, gaining and maintaining influence, and managing key stakeholders.
  • Seizing and holding the initiative — Ensuring that we hold the ability to dictate the course of events, through competitive advantage, awareness and anticipation.
  • Breaking cohesion and will in our adversaries — Preventing our adversaries from co-ordinating their actions effectively, and compromising their determination to persist.
  • Protecting cohesion and will in ourselves and our allies — Enabling our own freedom of action and ability to co-ordinate our resources, and ensuring that we retain the will and coherence to operate.
  • Enhancing and evolving the approach through innovation — Enhancing the approach through simplicity, flexibility, tempo, momentum and simultaneity.

All of these components are areas where cyber operations can play a significant part both for the attacker and the defender. In military terms, cyber may be seen as a force multiplier, increasing the effect of existing operational capability. There is, however, another side, in that these principles and components can be applied to operations in the cyber environment and, if applied with flexibility, can provide structure to planning.

To return to the initial question — has cyber become the new battlespace? — whilst the role of the cyber environment as a fully-fledged warfighting domain is open to sustained debate, it is very clear that the cyber environment is one in which it is possible to conduct a range of targeted operations. It is also clear that these operations may be conducted in isolation, or in conjunction with operations in the kinetic sphere (in any of the four principal warfighting domains).

However we eventually decide to classify this area, we must ensure that we are able to operate within it at least as effectively as our adversaries can. As such, it would be prudent to consider it to be a battlespace, and a high-tempo battlespace in which our native situational awareness is limited. It is also a battlespace in which our ability to maintain an agile, proactive posture is critical to our ability to gain and maintain the initiative.

Situational Awareness and the Information Infrastructure Battlespace

In this section, the authors elaborate on the association between the battlespace and situational awareness. As outlined above, terms such as battlespace and attack have become common parlance when discussing the protection of information infrastructures from a wide range of cyber-based information operations, as has another term: situational awareness. What do we mean by situational awareness? Endsley’s (1995, p. 36) definition,

“the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future”,

will serve as a useful introduction. However, what does that mean?

The study of situational awareness has its roots in military theory, and has the goal of understanding the state of a particular environment of interest and using that understanding to make decisions about how to proceed and respond to events. There are different models and frameworks for situational awareness in the computer networks field, but there is general agreement that its core consists of three levels (Endsley, 1995):

  1. Perception — becoming aware of situational events;
  2. Comprehension — interpreting what is happening to form a situational understanding of the events;
  3. Projection (i.e., prediction) — using the understanding to inform what actions (if any) should be taken to control the network.

So, in essence, situational awareness is a process consisting of (i) becoming conscious of the immediate environment and (ii) understanding how temporal and spatial events (which one may or may not control) will impact on that environment. It is generally understood that inadequate situational awareness is an element of poor decision-making, especially in situations characterized by high information flow, with any disastrous consequences resulting from that poor decision-making being attributed to ‘human error’, e.g., fighting in a combat zone (MoD, 2010, chapter 5) or piloting an airplane (John Boyd, see Angerman, 2004).

However, there are situations where having a good awareness of the current environment and making the correct decision within a strict time frame is critical, for example for air traffic controllers or network administrators. In these settings, monitoring systems provide multiple information sources and formats to assist the decision-making team, all of which the team must decipher, analyse and understand. In such time-critical situations the situational awareness requirement will not remain stable: it could be argued that an unforeseen event, such as an air traffic incident or malicious infrastructure activity, demands a more detailed and informed situational awareness if the response is to succeed (Smith, 2013).

Unfortunately, cyberspace is characterized, amongst other things, by a lack of natural visibility and tangibility. Humans have sense-based defensive postures: sight, smell, touch and hearing underpin our innate defences. The challenge of cyberspace is that none of these senses, the core of our sensory toolkit, is effective in the cyber environment without technology and tools. Network administrators therefore depend upon these tools, and upon the way in which they have been developed and configured, to provide the situational awareness needed to make sense of the network’s behaviour.

However, it is not unreasonable to expect that such unforeseen events would cause increased cognitive workload, thereby impacting on situational awareness and, consequently, on decision-making time. During the decision-making process, the perceived situational awareness being used will be influenced by the cognitive effort needed to acquire and comprehend it (Smith, 2013). In trying to reduce this cognitive effort, the authors have proposed a novel way of making data perceptible.

It should be noted that while situational awareness (SA) as a concept is well defined and understood, SA depends on cognitive processes, and unfortunately, while the SA concept makes general references to cognitive processing, very little detail is available on which cognitive processes are involved or how they function (Banbury & Tremblay, 2004). It nevertheless remains an extremely useful metaphor for describing the perceived understanding an individual has of their immediate environment, and in an information-rich, time-constrained environment a clear understanding of the current state of the battlespace, i.e., situational awareness, becomes a battle-winning factor. In securing cyberspace, situational awareness represents a way of perceiving threat activity against an information infrastructure, such that network administrators can actively defend the network.

When discussing the manoeuverist approach above, we noted that in order to gain and maintain the initiative in a particular area of operations, the first step or component is to achieve an understanding of the area and the activity within it. This clearly echoes Endsley’s model, in which the perception and comprehension of information enable projection: actions to seize the initiative in a particular situation.

Noting that the manoeuverist perspective on situational awareness developed within a kinetic warfighting context (MoD, 2010, chapter 5), it looks in even more detail at information operations and at intelligence collection and collation as part of the process of converting perception into comprehension and projection. This is directly relevant to the information space and implies a degree of planning and direction through the acquisition, analysis and dissemination of intelligence. In many contexts analysis is intuitive and organic, especially in the high-tempo information space; however, we must acknowledge its role as an active part of the practical process. It is this transition from information to intelligence which takes us from Endsley’s comprehension level to the projection level.

Another practical perspective comes from John Boyd. Whilst Endsley’s model is useful for understanding the levels of situational awareness, an example from the kinetic sphere readily illustrates how it adds value in a practical context. If we take a brief step into kinetic military doctrine, and view the computer incident response process in the context of Boyd’s OODA loop theory (see Angerman, 2004), we find a useful model to review the practical relevance of situational awareness in a combat situation.

John Boyd was commissioned by the US Department of Defense in 1976 to analyze why US pilots in Korea had been so successful despite the fact that the opposing Chinese MiG-15 aircraft were technically superior in many respects. His simple theory, which postulated that certain aspects of the US aircraft design enabled the pilots to react more quickly to the changing battle, has since gained much traction.

Boyd theorized that combat pilots made decisions using a cycle comprising four steps: observe, orient, decide, and act (OODA). In a contest between two opposing pilots, the individual who could complete this cycle the more quickly would have the advantage. Boyd suggested that the increased speed at which the US pilots could react and reorient themselves outweighed the technical superiority of the MiG-15.

Refinements have since been made to Boyd’s OODA model and it is particularly pertinent in the context of cyber security and the defence of information networks. The information network environment is characterized by high tempo and granularity, coupled with low visibility and tangibility. Administrators are therefore dependent on complex and granular data feeds to learn what is happening, and must often further translate this view into language that can be understood by decision makers. The use of tools can simplify this complex data picture, but each analysis layer introduces a margin for error and adds Clausewitzian friction (Clausewitz, 1873). Added to this are the practical limitations of our physical and intellectual physiology; it is practically impossible for most people to sit watching complex visual data feeds concurrently with other activity without quickly losing effectiveness.

We have discussed the role of cyber as a battlespace, and noted that one of the principal challenges associated with this area is that of maintaining situational awareness in an environment which is essentially intangible and invisible. In order to increase our visibility in this area, we need to identify indicators that may warn of attacks or inappropriate activity. The next section discusses one such area that may support this visibility: network traffic analysis.

Network Traffic Volumes

Cisco has reported that global mobile data traffic grew 70% in 2012, with “traffic volumes reaching 885 petabytes per month, up from 520 petabytes per month in 2011” (Cisco, 2013a). The report goes on to make the following prediction for 2017.

“Global mobile data traffic will increase 13-fold between 2012 and 2017. Mobile data traffic will grow at a compound annual growth rate (CAGR) of 66 percent from 2012 to 2017, reaching 11.2 exabytes per month by 2017”

(Cisco, 2013a).

In a report on data centre IP traffic, Cisco (2013b) presents the following estimates of global data centre traffic growth.

“Annual global data centre traffic will reach 7.7 zettabytes by the end of 2017. By 2017, global data centre IP traffic will reach 644 exabytes per month (up from 214 exabytes per month in 2012)”

(Cisco, 2013b).

“Global data centre IP traffic will nearly triple over the next 5 years. Overall, data centre IP traffic will grow at a compound annual growth rate (CAGR) of 25 percent from 2012 to 2017”

(Cisco, 2013b).

These estimates are, of course, provided by an organization with a vested interest in seeing an increase in both mobile and data centre traffic volumes, but some corroboration of Cisco’s estimates can be found in various reports by Gartner. For example, in their forecast for worldwide mobile data traffic and revenue Ekholm and Fabre (2011) suggest that global mobile data traffic will grow 26-fold between 2010 and 2015. Another Gartner report on the worldwide consumer broadband market estimates that,

“the number of consumer broadband connections will reach nearly 1.3 billion connections worldwide by 2015”

(Elizalde et al. 2012).

Ekholm et al. (2011) expect

“mobile broadband connections to grow five-fold, and to reach 588 million connections by 2015”.

Naturally, estimates must be treated with caution, but even a conservative view shows data traffic volumes doubling within the next five years. Even so, current network traffic volumes are already huge, with mobile data traffic at over 520 petabytes per month (Cisco, 2013a) and data centre IP traffic running at around 217 exabytes per month (Cisco, 2013b).
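These forecasts can also be cross-checked against one another: the quoted monthly volumes for 2012 and 2017 imply the stated compound annual growth rates. The short Python sketch below is a worked arithmetic check using the figures quoted above from the Cisco reports; it is illustrative only and not taken from either report.

```python
# Sanity check: relate the quoted n-fold growth figures to their CAGRs.
# Monthly traffic volumes are those quoted from Cisco (2013a, 2013b).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by growth from start to end."""
    return (end / start) ** (1 / years) - 1

# Global mobile data traffic: 0.885 EB/month (2012) -> 11.2 EB/month (2017),
# i.e. roughly the quoted 13-fold increase.
mobile = cagr(0.885, 11.2, 5)

# Global data centre IP traffic: 214 EB/month (2012) -> 644 EB/month (2017),
# i.e. roughly a threefold increase.
datacentre = cagr(214, 644, 5)

print(f"Mobile CAGR: {mobile:.0%}, data centre CAGR: {datacentre:.0%}")
```

Running the check gives a mobile CAGR of roughly 66 percent and a data centre CAGR of roughly 25 percent, matching the forecasts quoted above.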

Implications of increasing data volumes: from data to intelligence

The implications of increasing data volumes are significant. There is much work in the data mining industry to understand and harness the power of what has become known as Big Data. As technology becomes increasingly complex, with 5th, 6th and 7th generation systems being (largely) created by other systems, often generating large quantities of data, industry has both an opportunity and a challenge.

The challenge lies in the conversion of data to information and, ultimately, to understanding (intelligence). This process, aligned with Endsley’s perception, comprehension and projection levels, enables us to understand the environment and, in manoeuverist terms, to do something about it. The challenge is not, however, a straightforward one. In the same way that data mining projects struggle to make sense of big data, we must find ways of identifying what can be achieved from data and what is required to turn it into meaningful information and intelligence that enables us to take action. The challenge is so large that it becomes necessary to implement a planned procedure to manage the conversion, a procedure governed by the Intelligence Cycle (FBI, 2013).

Visualization: Making Data Perceptible

One of the challenges faced when analyzing data is perceptualization, that is, making the data and its properties apprehensible. This thread runs through a number of components in the intelligence cycle. Visualization of data is the process by which intangible and invisible (or possibly merely incomprehensible) data is put in a form in which it can be apprehended and understood by those seeking to convert it to intelligence, or communicate it to those whose duty is to make the decisions required to hold the initiative. This process must be completed in a consistent and timely manner, if it is to produce intelligence that is reliable and useful.

When incidents occur in the computer information space, experience shows that the speed and accuracy of the initial response is a critical factor in the subsequent successful resolution of the situation. The OODA loop kicks in, with operators observing the indicators, orienting themselves and their sensors to understand the problem, deciding on the action, and acting in a timely and decisive way. Traditional monitoring approaches often make this difficult by obfuscating the initial indication and its context and by requiring an extensive orientation stage. An effective initial response is consistently seen to be one of the hardest things for people to get right in practice.

One approach that has been taken to presenting administrators with the information they need is the use of information visualization techniques for representing traffic data. A goal of visualization is to use representational techniques that allow the user to more easily interpret complex data than would be achievable by looking at the raw data or using text-based summaries. D’Amico (in McNeese, 2012) stated the design challenge this way:

“… visualization designers must focus on the specific role of the target user, and the stage of situational awareness the visualizations are intended to support: perception, comprehension, or projection.”

Sonification for Network Monitoring

Much work has been done in applying information visualization techniques to network data for facilitating situational awareness (see Jajodia et al., 2010, for a recent overview). However, a particularly striking feature of the three-level model is that the first two levels, perception and comprehension, correspond directly with Pierre Schaeffer’s two basic modes of musical listening: écouter (hearing, the auditory equivalent of perception) and entendre (literally ‘understanding’, the equivalent of comprehension). Schaeffer was writing within a musical arts context, but Vickers (2012) demonstrated how these modes are applicable to sonification, the auditory equivalent of visualization.

Sonification is a branch of auditory display, a family of representational techniques in which non-speech audio is used to convey information. Here, data relations are mapped to features of an acoustic signal which is then used by the listener to interpret the data. Sonification has been used for many different types of data analysis (see Hermann, Hunt, and Neuhoff (2011) for a broad and recent treatment of the field) but one for which it seems particularly well suited is live monitoring, as would be required in situational awareness applications. The approach described in this section provides one way of addressing the challenges outlined above by enabling operators to monitor infrastructures concurrently with other tasks using additional senses. This increases the available bandwidth of operators without overloading individual cognitive functions, and provides a fast and elegant route to practical situational awareness using multiple senses and an increased range of cognitive ability.

Situational awareness requires intelligence to be provided in real time. A major challenge with live real-time network monitoring is that, with the exception of alarms for discrete events, the administrator needs to attend to the console screen to see what is happening. Spotting changing or emerging patterns in traffic flow would require long-term attention to be focused on the display. Therefore, sonification has been proposed as a means of providing situational awareness.
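To make the mapping idea concrete, the sketch below renders a series of per-second packet counts as a sequence of tones whose pitch rises with traffic volume. It is an illustrative example only, not the authors’ system: the packet counts, the pitch range and the note length are assumptions chosen for the demonstration.

```python
import numpy as np
import wave

# Hypothetical per-second packet counts; in practice these would come from a
# live traffic counter. The mapping ranges below are illustrative choices.
packet_counts = [120, 135, 140, 160, 900, 1400, 1350, 200, 150, 130]

RATE = 44100                      # audio sample rate (Hz)
NOTE_LEN = 0.25                   # seconds of audio per traffic sample
LOW_HZ, HIGH_HZ = 220.0, 880.0    # pitch range for the mapping

lo, hi = min(packet_counts), max(packet_counts)
span = max(hi - lo, 1)            # avoid division by zero for flat traffic
samples = []
for count in packet_counts:
    # Linear parameter mapping: packet count -> frequency.
    freq = LOW_HZ + (count - lo) / span * (HIGH_HZ - LOW_HZ)
    t = np.linspace(0, NOTE_LEN, int(RATE * NOTE_LEN), endpoint=False)
    samples.append(0.3 * np.sin(2 * np.pi * freq * t))

audio = np.concatenate(samples)
pcm = (audio * 32767).astype(np.int16)

with wave.open("traffic_sonification.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)           # 16-bit PCM
    wav.setframerate(RATE)
    wav.writeframes(pcm.tobytes())
```

A traffic burst then stands out as a sudden jump in pitch that an operator can notice peripherally, without watching a console.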

Monitoring tasks can be categorized as direct, peripheral, or serendipitous-peripheral (Vickers, 2011):

“In a direct monitoring task we are directly engaged with the system being monitored and our attention is focused on the system as we take note of its state. In a peripheral monitoring task, our primary focus is elsewhere, our attention being diverted to the monitored system either on our own volition at intervals by scanning the system . . . or through being interrupted by an exceptional event signalled by the system itself”

(Vickers, 2011, p. 455).

A system to sonify network traffic thus allows us to monitor the network in a peripheral mode: the monitoring becomes a secondary task for the operator, who can carry on with some other primary activity. Serendipitous-peripheral monitoring is like peripheral monitoring except that the information gained “is useful and appreciated but not strictly required or vital either to the task in hand or the overall goal” (Vickers, 2011, p. 456). Network traffic is a prime candidate for sonification as it comprises series of temporally-related data which may be mapped naturally to sound, itself a temporal medium (Vickers, 2011).

Gilfix and Couch’s (2000) PEEP system is an early network sonification example, but Ballora et al. (2010, 2011, 2012) developed the idea to address situational awareness. Using an auditory model of the network packet space they produced a “nuanced soundscape in which unexpected patterns can emerge for experienced listeners”. Their approach used the five-level JDL fusion model, which is concerned with integrating multiple data streams such that situational awareness is enhanced (see Blasch and Plano, 2002). However, Ballora et al. (2010) noted that the high data speeds and volumes associated with computer networks can lead to unmanageable cognitive loads. They concluded:

“The combination of the text-based format commonly used in cyber security systems coupled with the high false alert rates can lead to analysts being overwhelmed and unable to ferret out real intrusions and attacks from the deluge of information. The Level 5 fusion process indicates that the HCI interface should provide access to and human control at each level of the fusion process, but the question is how to do so without overwhelming the analyst with the details.”

Future Research Directions: Towards an Intelligent Information Infrastructure

As outlined above, when considering computer networks there is general agreement over the core elements of situational awareness. First, the human operator must become aware of events (i.e., they must be able to recognize what is relevant). Then they must develop an understanding (i.e., they must be able to interpret and comprehend the relevance of those events). Lastly, they must use that understanding to develop appropriate actions to control the network (i.e., they must be able to predict the implications of those actions). The question thus arises of what to present to an operator so that situational awareness may be achieved.

Unfortunately, that isn’t the only issue. Any derived situational awareness of the current state of the network will require information about the network to be coupled, spatially and temporally (Cooke in McNeese, 2012). In addition, given that current network traffic volumes are huge and will continue to increase, any interpretation and comprehension that results from becoming aware will require that information to be aggregated and presented in an easy-to-understand way.

The predominant approach taken in previous network sonification efforts is to focus on the auditory representation of network traffic primitives, notably the number of bytes and packets crossing the network gateway in a given time period. This allows for certain levels of analysis, with the possibility of profiling traffic to identify particular types of network event.

However, a more fruitful avenue to explore is the identification and analysis of more holistic or emergent properties of the network and its traffic. In this section we consider approaching the development of new techniques for situational awareness from two complementary angles: self-organized criticality and affective computing (discussed below), which can be used together in a sonification framework. The combination of these two areas within a sonification system, it is suggested, will enable the development of an intelligent information infrastructure in which a network is able to monitor its own status and health and, using the real-time monitoring opportunities afforded by sonification, communicate its confidence or its anxieties about its state to a human operator. The components of such an intelligent information infrastructure are now discussed.

The self-organized criticality of a network

Complex natural systems appear to exhibit an emergent property known as self-organized criticality (SOC), by which the system responds to critical events in order to restore equilibrium (Bak et al., 1987). Complex information structures also appear to manifest self-organized criticality (Yang et al., 2006; Crovella & Bestavros, 1997; Leland et al., 1993). Modern networks demonstrate periods of very high activity alternating with periods of relative calm, a characteristic known as ‘burstiness’ (Leland et al., 1993). It was commonly thought that Ethernet traffic conformed to Poisson or Markovian distribution profiles, which would mean that the traffic possessed a characteristic burst length that, when averaged over a long time scale, would be smooth (Crovella & Bestavros, 1997). However, it has been demonstrated that network traffic shows significant burstiness across a broad range of time scales. When traffic is bursty across different time scales it can be described using the statistical concept of self-similarity, and it has been established that Ethernet traffic exhibits such self-similarity (Crovella & Bestavros, 1997).
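The self-similarity referred to above can be quantified by the Hurst parameter H. One simple estimator is the aggregated-variance method: for a self-similar series, the variance of the series averaged over blocks of size m decays as m^(2H-2), so the slope of a log-log plot of variance against block size reveals H. The Python sketch below is an illustrative estimator in this spirit, not the method used in the cited studies, and its synthetic input stands in for a real packets-per-interval trace.

```python
import numpy as np

def hurst_aggregated_variance(series, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Estimate the Hurst parameter H via the aggregated-variance method.

    For a self-similar series, Var(X^(m)) ~ m^(2H - 2), where X^(m) is the
    series averaged over non-overlapping blocks of size m. H = 0.5 is
    consistent with Poisson-like, short-range-dependent traffic; H close to 1
    indicates the long-range dependence reported for real Ethernet traces.
    """
    x = np.asarray(series, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        blocks = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)   # slope = 2H - 2
    return 1 + slope / 2

# Synthetic stand-in for a packets-per-interval series; a capture of real
# Ethernet traffic would typically yield an estimate well above 0.5.
rng = np.random.default_rng(0)
counts = rng.pareto(1.5, size=4096) * 100
print(f"Estimated Hurst parameter H = {hurst_aggregated_variance(counts):.2f}")
```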

Yang et al. (2006) carried out a wavelet analysis of the burstiness of self-similar computer network traffic and showed that the avalanche volume, duration time, and the inter-event time of traffic flow fluctuations obey power law distributions. Bak et al. (1987) suggested that such power law distributions in complex systems are evidence of self-organized criticality. Self-organized criticality is a function of an external driving force and an internal relaxation process such that there exists a “separation of time scales” between them. An example that is often used to illustrate this is an earthquake. Stresses within the tectonic plates (the external driving force) take many years to develop, while the earthquake (the internal relaxation process) takes only seconds; so we see a separation of time scales between years and seconds.

Since the length of time required by the external driving force to initiate an internal relaxation process cannot be determined, the threshold at which the internal relaxation process acts is also non-determinable. Consequently there can exist many differing states, each of which will be ‘barely stable’, a condition called metastability (Bak et al., 1987). Recall that complex systems (such as communication networks) consist of many internal interactive components. Bak et al. (1987) hypothesized that when these complex systems are externally driven, the system’s internal components will begin to form random networks. Jensen adds that as the external driver continues, these networks will be modified by the “actions of the internal dynamics induced by the external drive” (Jensen, 1998). For the purposes of this section we will assume that the external driver of a communication network is any traffic passing through the network and that the internal dynamic is the response of the network to that traffic.

In a study of self-organized criticality in network traffic, Guo et al. (2008) observed that “from the perspective of traffic engineering, understanding the network traffic pattern is essential” for the analysis of network survivability. However, self-organized criticality is not a discrete variable that can be identified and monitored directly. Instead, its presence is inferred through the analysis of a system’s behaviour. Yang et al. (2006) measured time-dependent characteristics of a network (in their case, numbers of packets and bytes), after which they constructed a power spectrum. If the spectrum displayed a power-law correlation they concluded that self-organized criticality was present. We may observe the self-organized criticality, then, by measuring some time-dependent characteristics of the system and comparing changes in successive samples. It is suggested that a repeated series of large changes may well give an indication of network instability and the possibility of some form of network ‘reset’. In this situation a reset would not mean the catastrophic failure of the network, but may mean the existence of a rapidly increasing level of service traffic restrictions.
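A minimal version of such a spectral check can be coded directly: compute the power spectrum of a series of packet (or byte) counts and fit a straight line to it in log-log space. A spectrum following 1/f^beta over a wide frequency range is the power-law signature associated with self-organized criticality. The sketch below illustrates this kind of test; it is not the implementation used by Yang et al. (2006), and its input is a synthetic placeholder series.

```python
import numpy as np

def spectral_exponent(series):
    """Fit S(f) ~ 1/f**beta to the power spectrum of a time series.

    Returns beta, the negated slope of the spectrum in log-log space.
    A beta close to 1 (so-called 1/f noise) is the spectral signature
    that has been linked to self-organized criticality.
    """
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0)   # d = sampling interval (1 time unit)
    mask = freqs > 0                         # drop the zero-frequency (DC) term
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
    return -slope

# Successive samples of packets-per-interval; here a synthetic placeholder.
rng = np.random.default_rng(1)
counts = np.cumsum(rng.standard_normal(8192))  # random walk: expect beta near 2
print(f"Estimated spectral exponent beta = {spectral_exponent(counts):.2f}")
```

Tracking how the fitted exponent drifts between successive samples is one concrete way of comparing changes in successive samples, as suggested above.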

Affective Computing

In a physical battlespace one gains situational awareness through the perception and comprehension of events. One may start to feel a sense of unease or disquiet if a situation seems to be taking a turn for the worse. Given the inherent intangibility of network events, it is worth considering whether the network (or some computational engine working inside it) could itself gather intelligence about events and draw its own conclusions about the relative benignity or malignancy of developing situations. Affective computing is the study of computational systems that can recognize or simulate human emotions and affective states. It is therefore, in principle, possible to design a monitoring agent that would simulate affective states related to the current state of a network and communicate these to a human operator. This would, potentially, enable the operator to know when the network itself is starting to ‘feel’ uncomfortable.

Sonification, because of its temporal rather than spatial representational nature, offers the scope for creating a real-time affective system that keeps human operators continually in touch with the state of the network. Winters and Wanderley (2013) showed how sonification can communicate affective states in the traditional arousal-valence space. Taking this concept further, Kirke and Miranda’s (2013) pulsed melodic affective processing (PMAP) technique provides a mechanism for creating an affective sonification system in which the objects of interest being monitored constantly communicate their state using a sonification-based communications protocol. According to Kirke and Miranda, PMAP “is a method for the processing of artificial emotions in affective computing.” Because the communications protocol is, essentially, a language whose syntax maps to auditory events, the system becomes self-sonifying. Effectively, one can choose to listen to any component within a PMAP-enabled environment and understand that component’s affective state.
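As an illustration of the idea, the sketch below generates a pulsed melodic stream from an affective state in the spirit of PMAP. The particular mappings (arousal to pulse rate and loudness, valence to a major or minor pitch set) and the function names are assumptions made for this example; they are not Kirke and Miranda’s published specification.

```python
# Illustrative affective sonification inspired by PMAP (Kirke & Miranda, 2013):
# a monitored component reports arousal and valence in [0, 1]; arousal drives
# pulse rate and loudness, valence selects a major or minor pitch set.
import random

MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI note numbers, C major
MINOR = [60, 62, 63, 65, 67, 68, 70, 72]   # C natural minor

def affective_pulse_stream(arousal: float, valence: float, length: int = 8):
    """Return a list of (midi_note, duration_s, velocity) pulses."""
    scale = MAJOR if valence >= 0.5 else MINOR
    beat = 0.6 - 0.4 * arousal               # higher arousal -> faster pulses
    velocity = int(60 + 60 * arousal)        # higher arousal -> louder pulses
    return [(random.choice(scale), beat, velocity) for _ in range(length)]

# A 'calm' network component versus an 'anxious' one.
print(affective_pulse_stream(arousal=0.2, valence=0.8))
print(affective_pulse_stream(arousal=0.9, valence=0.1))
```

Heard side by side, the ‘anxious’ stream is faster, louder and minor-keyed, so its state can be picked up peripherally in the same way as the traffic sonification sketched earlier.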

An intelligent information infrastructure

In the above we outlined how an infrastructure could communicate risk or security concerns to its operators, with the operators then taking some ameliorating action. We now propose a way by which an infrastructure could use its awareness of its current context to adapt and change (for instance, by altering its ‘risky’ behaviour or enhancing its security posture). This would extend the above to include an adaptive network using the methodology outlined by Bentley (2005) and, as discussed previously, the emergent property of complex systems known as self-organized criticality. As noted previously, a system undergoing self-organized criticality will have many differing states, and at the transition point (from instability to stability) spatial self-similarity occurs (Bak et al., 1987). We hypothesize that this spatial self-similarity represents a fractal-like geometry with a ‘fractal dimensionality’. Consequently, this fractal structure and dimensionality could drive an adaptive self-organized-criticality regulatory network that would act as the infrastructure’s operating system, controlling all aspects of the infrastructure: observing, orienting, deciding, and acting.

The main characteristic of a self-organized critical system is power-law fluctuation. We suggest that these power-law fluctuations could be used to enable a system to monitor its own internal state (i.e., its current context). The pulsed melodic affective processing technique would then communicate that current context (i.e., its affective state, which may reflect anomalies and discontinuities) to the human operator, thereby potentially enhancing situational awareness and speed of response to critical incidents.

As well as providing for the adaptation of information infrastructures, an additional outcome could be the verification of the work of Kuehn (2012) who, while investigating self-organized critical adaptive networks, concluded that information processing in complex systems exhibiting steady-state criticality could reach optimal noise values. If this is the case then self-organized criticality will be a better explanation of network behaviour than either Poisson or Markovian distribution profiles, and as such may result in more robust and resilient information infrastructures.

Conclusions

In this article the authors have explored the notion of information infrastructures as a cyber battlespace. In looking at this battlespace, we have also considered the role of situational awareness, in particular how the application of a military approach, and the use of situational awareness within that warfighting domain, may help to maintain healthy networks. However, given that the cyber domain is by its very nature inaccessible to unaided sensory perception (an essential aspect of situational awareness), the predicted increases in network traffic will adversely impact the development of any situational awareness. In considering this required sensory perception, the authors reviewed current visualization approaches and concluded that there is a need to consider other modalities for perceiving network behaviour. The discussion then focused on the use of sonification for this purpose, and the authors concluded that sonification for situational awareness would offer potential benefits to human operators trying to perceive, comprehend and make decisions about the network’s behaviour.

Following on from this, the authors proposed an agenda for future research. This agenda involves the combination of a number of dissimilar activities, namely sonification, self-organized criticality, network context, and affective computing, which the authors believe will yield benefits for the field of network situational awareness. The authors are currently working on a system that sonifies in real time an information infrastructure’s self-organized criticality, and which is thereby able to inform operators of both normal and abnormal network traffic and behaviour; details will be published later in the year.

References

Angerman, W. S. (2004). Coming full circle with Boyd’s OODA loop ideas: An analysis of innovation diffusion and evolution. Unpublished master’s thesis, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, USA.

Banbury, S., & Tremblay, S. (2004). Preface. In S. Banbury & S. Tremblay (Eds.), A cognitive approach to situation awareness: Theory and application. Aldershot, UK: Ashgate Publishing.

Bak, P., Tang, C., & Wiesenfeld, K. (1987). Self-organized criticality: An explanation of the 1/f noise. Phys. Rev. Lett., 59(4), 381-384.

Ballora, M., Panulla, B., Gourley, M., & Hall, D. L. (2010). Preliminary steps in sonifying web log data. In E. Brazil (Ed.), 16th International Conference on Auditory Display (pp. 83-87). Washington, DC: ICAD.

Ballora, M., Giacobe, N. A., & Hall, D. L. (2011). Songs of cyberspace: An update on sonifications of network traffic to support situational awareness. In SPIE Defense, Security, and Sensing (pp. 80640P-80640P).

Ballora, M., Giacobe, N. A., McNeese, M., & Hall, D. L. (2012). Information data fusion and computer network defense. In C. Onwubiko & T. Owens (Eds.), Situational Awareness in Computer Network Defense: Principles, Methods and Applications. IGI Global.

Bentley, P. J. (2005). Controlling robots with fractal gene regulatory networks. In L. De Castro & F. von Zuben (Eds.), Recent Developments in Biologically Inspired Computing (pp. 320-339). London: Idea Group Publishing.

Blasch, E. P., & Plano, S. (2002). JDL level 5 fusion model: User refinement issues and applications in group tracking. In Proc. SPIE, Vol. 4729 (pp. 270-279).

Cisco (2013a). Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2012-2017. February 2013, ID: FLGD 10855 02/13.

Cisco (2013b). Cisco Global Cloud Index: Forecast and Methodology, 2012-2017. October 2013, ID: FLGD 11289 10/13.

Clausewitz, C. (1873). On War. Translated by J. J. Graham (1909). London.
Crovella, M. E., & Bestavros, A. (1997). Self-similarity in world wide web traffic: Evidence and possible causes. IEEE/ACM Trans. Netw., 5(6), 835-846.

Elizalde, F., Ekholm, J., & Sabia, A. (2012). Gartner Forecast: Consumer Broadband Market, Worldwide, 2009-2015. Published: 16 March 2012, ID: G00228591.

Ekholm, J., & Fabre, S. (2011). Gartner Forecast: Mobile Data Traffic and Revenue, Worldwide, 2010-2015. Published: 4 July 2011, ID: G00213763.

Ekholm, J., De La Vergne, H. J., & Baghdassarian, S. (2011). Gartner Forecast: Mobile Broadband Connections, Worldwide, 2009-2015. Published: 22 August 2011, ID: G00216119.

Endsley, M. (1995). Toward a theory of situation awareness in dynamic systems, Human Factors, 37(1), 32-64.

FBI (2013). The Intelligence Cycle. Accessed October 16, 2013, from http://www.fbi.gov/about-us/intelligence/intelligence-cycle

Gilfix, M., & Couch, A. L. (2000). Peep (the network auralizer): Monitoring your network with sound. In 14th System Administration Conference (LISA 2000), (pp. 109-117). New Orleans, Louisiana, USA: The USENIX Association.

Guo, C., Wang, L., Huang, L., & Zhao, L. (2008). Study on the internet behavior’s activity oriented to network survivability. In International Conference on Computational Intelligence and Security, 2008. CIS ‘08, Vol.1 (pp. 432-435), IEEE.

Hermann, T., Hunt, A. D., & Neuhoff, J. (Eds.). (2011). The Sonification Handbook. Berlin: Logos Verlag.

Hyacinthe, B. (2009). Cyber Warriors at war, U.S. national security secrets and fears revealed. XLibris Corp.

Jensen, H. J. (1998). Self-Organized Criticality. Cambridge: Cambridge University Press.

Kerr, P., Rollins, J., & Theohary, C. (2010). The Stuxnet computer worm: Harbinger of an emerging warfare capability. Congressional Research Service.

Kirke, A., & Miranda, E. (2013). Pulsed melodic processing – the use of melodies in affective computations for increased processing transparency. In S. Holland, K. Wilkie, P. Mulholland, & A. Seago (Eds.), Music and Human-Computer Interaction (pp. 171-188). London: Springer.

Krekel, B. (2009). Capability of the People’s Republic of China to conduct Cyber Warfare. US-China Economic and Security Review Commission.

Kuehn, C. (2012). Time-scale and noise optimality in self-organized critical adaptive networks. Phys. Rev. E (85) 2.

Leland, W. E., Taqqu, M. S., Willinger, W., & Wilson, D. V. (1993). On the self-similar nature of Ethernet traffic. SIGCOMM Comput. Commun. Rev., 23(4), 183-193.

Libicki, M. (2012). Cyberspace is not a warfighting domain. A Journal of Law and Policy for the Information Society. 8 (2). 325-340.

McNeese, M. (2012). Perspectives on the role of cognition in cyber security. In Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting, 2012. (pp. 268-271).

Ministry of Defence (MoD). (2010). Army Doctrine Publication – Operations. UK Ministry of Defence. Accessed February 5, 2014, from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/33695/ADPOperationsDec10.pdf

Rosenzweig, P. (2013). Cyber Warfare (The changing face of war). Praeger: An Imprint of ABC-CLIO, LLC.

Smith, K. T. (2013). Building a human capability decision engine. In Contemporary Ergonomics and Human Factors 2013 Proceedings of the international conference on Ergonomics & Human Factors 2013, (pp. 395–402) Accessed February 5, 2014, from http://www.crcnetbase.com/doi/abs/10.1201/b13826-84

Vickers, P. (2011). Sonification for process monitoring. In T. Hermann, A. D. Hunt, & J. Neuhoff (Eds.), The Sonification Handbook (pp. 455-492). Berlin: Logos Verlag.

Vickers, P. (2012). Ways of listening and modes of being: Electroacoustic auditory display, Journal of Sonic Studies, Vol. 2 (1). Accessed February 5, 2014, from http://journal.sonicstudies.org/vol02/nr01/a04

Winters, R. M., & Wanderley, M. M. (2013). Sonification of emotion: Strategies for continuous display of arousal and valence. In G. Luck, & O. Brabant (Eds.), Proceedings of the 3rd International Conference on Music and Emotion (ICME3).

Yang, C.-X., Jiang, S.-M., Zhou, T., Wang, B.-H., & Zhou, P.-L. (2006). Self-organized criticality of computer network traffic. In 2006 International Conference on Communications, Circuits and Systems Proceedings, Vol. 3 (pp. 1740-1743). IEEE.