Book Review: Intentional Risk Management Through Complex Networks Analysis - Innovation for Infosec

This post provides a non-comprehensive summary of a multi-author book published in 2015 titled "Intentional Risk Management Through Complex Networks Analysis".
I recommend this book to those looking for real science-based Information Security innovations. This statement is not a forced marketing slogan. It is a reality.
The authors of this book are, in alphabetical order, Victor Chapela, Regino Criado, Santiago Moral and Miguel Romance.

In this post I present some of the interesting points proposed by the authors. The ideas mentioned here come from the book. Certainly this summary is a clear invitation to read the book, digest its innovative proposals and start innovating in this demanding field of IT Security.

Chapter 1. Intentional Risk and Cyber-Security: A Motivating Introduction

The authors start distinguishing between Static Risk and Dynamic Risk. Static Risk is opportunistic risk (e.g. identity theft). Dynamic Risk is directed intentional risk that attempts to use potentially existing but unauthorised paths (e.g. using a vulnerability).

Static Risk is based on the probability that a user with authorised access to a specific application abuses that access for personal gain. This risk can be deterred by reducing anonymity.

In Dynamic Risk the attacker tries to get the most valuable node via the least number of hops via authorised or unauthorised accesses.

Currently the main driver for a cyber-attack is the expected profit for the attacker. The book also links Intentionality Management with Game Theory, specifically with the stability analysis of John Nash's equilibrium. The book uses Complex Network Theory (both in terms of structure and dynamics) to provide a physical and logical structure of where the game is played.

The authors consider intentionality as the backbone for cyber-risk management. They mention a figure, coming from a security provider, of around USD 400 billion as the latest annual cost of cyber-crime.

The authors make a distinction between:
- Accidental risk management, a field in which there is a cause that leads to an effect and attacks are prevented mostly with redundancy (e.g. in data centres) and
- Intentional risk management, in which we have to analyse the end goal of the attackers.

To prevent these attacks we can:

- Reduce the value of the asset.
- Increase the risk the attacker runs.
- Increase the cost for the attacker.

Traditionally, risk management methodologies are based on an actuarial approach, using the typical probability x impact, with the probability based on observation of the frequency of past events.

We need to assess which assets are the most valuable assets for the attackers.

Using network theory, whose foundations can also be found in this blog in summaries posted in October 2015, November 2015, December 2015, January 2016, February 2016 and March 2016, the more connected a node is (or the more accessible a computer system is), the greater the risk of it being hacked.

A key point proposed by this book: Calculated risk values should be intrinsic to the attributes of the network and require no expert estimates. The authors break down attackers' expected profit into these three elements:

- Expected income i.e. the value for them.
- The expense they incur (depending on the accessibility, whether via a technical user access or a non-technical user access).
- Risk to the attacker (related to anonymity and some deterrent legal, economic and social consequences).

An attacker prefers busy applications that are highly accessible, admin access privileges and critical remote execution vulnerabilities. The main driver for attackers is the value they can obtain. Attackers in the dynamic risk arena are not deterred by a lack of anonymity.

The authors relate anonymity to the number of users who have access to the same application.




Chapter 2. Mathematical Foundations: Complex Networks and Graphs (A Review)
Complex networks model the structure and non-linear dynamics of discrete complex systems.

The authors mention the difference between holism and reductionism. Reductionism works if the system is linear. Complexity depends on the degrees of freedom that a system has and whether linearity is present.

Networks are composed of vertices and edges. In complex networks small changes may have global consequences.

Euler walk: A path between two nodes for which every link appears exactly once. The degree of a node is the number of links the node shares.

If the number of nodes with odd degree is greater than 2, then no Euler walk exists.

If the number of nodes with odd degree equals 0, then there are Euler walks starting from any node.

If the number of nodes with odd degree equals 2, then Euler walks exist only starting from one of the two odd-degree nodes (and ending at the other).
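
As a minimal illustration (my own sketch, not from the book), these degree conditions can be checked in Python with the networkx library on a small made-up graph, assuming the graph is connected:

    import networkx as nx

    # Hypothetical connected example graph
    G = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4)])

    odd_nodes = [n for n, d in G.degree() if d % 2 == 1]

    if len(odd_nodes) == 0:
        print("Euler walks exist starting from any node")
    elif len(odd_nodes) == 2:
        print("Euler walks exist, starting at one odd-degree node and ending at the other:", odd_nodes)
    else:
        print("No Euler walk exists")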

A graph is the mathematical representation of a network. The adjacency matrix of a graph is a way to determine the graph completely. A node with a low degree is weakly connected. A regular network is a network whose nodes have exactly the same degree.

In a directed network the adjacency matrix is not necessarily symmetric. Paths do not allow repetition of vertices while walks do. A tree is a connected graph in which any two vertices are connected by exactly one path.

Structural vulnerability: How does the removal of a finite number of links and/or nodes affect the topology of a network?

Two nodes with a common neighbour are likely to connect to each other; the clustering coefficient measures this tendency.

The eigenvector centrality of a node is proportional to the sum of the centrality values of all its neighbouring nodes.
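
As a rough numerical sketch (not from the book), this defining relation can be computed by power iteration on the adjacency matrix; the small matrix below is a made-up example:

    import numpy as np

    # Adjacency matrix of a small, made-up undirected graph with 4 nodes
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    x = np.ones(A.shape[0])
    for _ in range(100):               # power iteration
        x = A @ x                      # each score becomes the sum of its neighbours' scores
        x = x / np.linalg.norm(x)      # normalise to keep the values bounded

    print("Eigenvector centrality:", np.round(x, 3))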

Spectral graph theory studies the eigenvalues of matrices that embody the graph structure.

Betweenness centrality: The edge betweenness of an edge is the fraction of shortest paths between pairs of vertices that run along it. The degree distribution gives the probability of finding a node in G with degree k.

Complex network models
In random graphs, the probability that 2 neighbours of a node are connected is the probability that two randomly chosen nodes are linked. Large scale random networks have no clustering in general. The average distance in a random network is rather small.
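
These two properties (low clustering, small average distance) can be checked empirically; here is a quick sketch with networkx, with arbitrary parameters of my own choosing:

    import networkx as nx

    # Erdos-Renyi random graph: 2000 nodes, edge probability p = 0.005
    # (p is well above the connectivity threshold, so the graph is almost surely connected)
    G = nx.erdos_renyi_graph(2000, 0.005, seed=42)

    print("Average clustering:", nx.average_clustering(G))            # close to p itself
    print("Average distance :", nx.average_shortest_path_length(G))   # small compared with n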

Small world model
Some real networks like the Internet have characteristics which are not explained by uniformly random connectivity. Small world property: The network diameter is much smaller than the number of nodes. Most vertices can be reached from the others through a small number of edges.

Scale-free networks
The degree distribution does not follow a Poisson-like distribution but follows a power law, i.e. the majority of nodes have low degree and some nodes, the hubs, have an extremely high connectivity.

Additionally, many systems are strongly clustered with many short paths between the nodes. They obey the small world property.

Scale-free networks emerge in the context of a growing network in which new nodes prefer to connect to highly connected nodes. When there are constraints limiting the addition of new edges, then broad-scale or single-scale networks appear.
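
A hedged sketch of this growth-plus-preferential-attachment process, using networkx's built-in Barabási-Albert generator (the sizes are arbitrary choices of mine):

    import networkx as nx

    # Grow a network to 10000 nodes; each new node attaches to 3 existing nodes,
    # preferring those that are already highly connected
    G = nx.barabasi_albert_graph(10000, 3, seed=1)

    degrees = sorted((d for _, d in G.degree()), reverse=True)
    print("Largest degrees (the hubs):", degrees[:5])
    print("Median degree (the bulk of the nodes):", degrees[len(degrees) // 2])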

Assortative networks
Most edges connect nodes that exhibit similar degrees (the opposite is disassortative networks).

A Hamiltonian cycle in a graph passes through all its nodes exactly once. The line graph of a network is the graph whose nodes are the edges of the original network, two of them being linked when the corresponding edges share an endpoint.


Chapter 3. Random Walkers

Two different types of random walkers: Uniform random walkers and random walkers with spectral jump (a personalisation vector).

Statistical mechanics: The frequency of all the nodes will be the same in all the random walkers developed. In any type of random walker the most important element is the frequency with which each node appears. 

"If we move on a network in a random way, we will pass more often through the more accessible nodes". This is the idea of the PageRank algorithm used by Google. The difficulty comes to compute the frequency of each node. A random walker on a network can be modelled by a discrete-time Markov chain.

Multiplex networks: The edges of these networks are distributed among several layers. They are useful for modelling Dynamic Risk.

Intentional risk analysis
Accessibility: Linked to the frequency of a uniform random walker with spectral jump in the weighted network of licit connections. Two types of nodes:

- Connection-generator nodes (e.g. Internet access, effective access of internal staff).
- Non-connection-generator nodes (those nodes through which the communication is processed).

Static intentional risk? (It exists but it is not so key, I assume.) The accessibility of each connection has zero cost because the accesses are achieved by using the existing structure of the network.

In dynamic intentional risk, each additional connection or non-designed access entails a cost for the attacker who seeks access to the valuable information (the vaults).

Modelling accessibility
A biased random walker with spectral jumps, going to those nodes with an optimal cost/benefit ratio. The random walker makes movements approaching the vaults. Accessibility in dynamic intentional risk may be modelled using a biased random walker with no spectral jumps in a 3-layered multiplex network.

1. A first layer corresponding to spectral jumps (ending and starting connections).
2. A second layer with the existing connections registered by the sniffing.
3. A third layer with connections due to the existence of both vulnerabilities + affinities.


Chapter 4. The Role of Accessibility in the Static and Dynamic Risk Computation

The anonymity is computed for each edge of the intentionality network. The value and the accessibility are computed for each node. Two ways to calculate the edge's PageRank:

a. via the classic PageRank algorithm (frequency of access to an edge and the PageRank of its nodes).
b. via Line Graph i.e. the nodes are the edges of the original network.

The damping factor will be the jumping factor.

The outcome will be a weighted and directed network with n nodes and m edges. There are equivalent approaches using the personalization vector.
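
A sketch of option (b) with networkx, purely as an illustration (this is not the authors' exact algorithm): build the line graph, whose nodes are the edges of the original network, and run standard PageRank on it.

    import networkx as nx

    # Made-up directed intentionality network
    G = nx.DiGraph([("internet", "web"), ("web", "app"), ("app", "db"), ("admin", "db")])

    L = nx.line_graph(G)                     # nodes of L are the edges of G
    edge_rank = nx.pagerank(L, alpha=0.85)   # the damping factor acts as the jumping factor

    for edge, score in sorted(edge_rank.items(), key=lambda item: -item[1]):
        print(edge, round(score, 3))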

Chapter 5. Mathematical Model I: Static Intentional Risk

Static Risk: Opportunistic risk. Risk follows authorised paths.
Dynamic Risk: Directed intentional risk. Tendency to follow unauthorised paths. Linked to the use of paths that potentially exist in the network but are not authorised.

The model is based on the information accessibility, on its value and on the anonymity level of the attacker.

Intentionality complex network for static risk. Elements:

- Value: How profitable the attack is.
- Anonymity: How hard it is to determine the identity of the attacker.
- Accessibility: How easily the attack can be carried out.

Every node has a resistance (a measure of the effort an attacker needs to gain access). Value is located at certain nodes of the network called vaults. Different algorithms will be used: Max-path algorithm, value assignment algorithm and accessibility assignment algorithm.

Static risk intentionality network construction method:
1. Network construction from the table of connection catches.
2. Network collapse and anonymity assignment.
3. Value assignment.
4. Accessibility assignment.

Two networks appear in this study: the users network and the admins network. Network sniffing provides the connections between the IP nodes and the IP:port nodes. Based on this sniffing, we get the number of users who use each one of the edges. The inverse of that integer becomes the label for each edge. The max-path algorithm is executed to distribute the value from the vaults to all the nodes of the networks.

The inverse of the number of users in each edge is used as a value reduction factor. The higher the number of users who access a node, the greater the value reduction for potential attackers in that node, but also the higher the anonymity they will have.

Each edge is labelled with the frequency of access (the number of accesses). The accessibility of a node is linked to the accessibility of the edges connecting it. For each edge, the PageRank algorithm is calculated.

The higher the access frequency, the higher the probability that someone will misuse the information present in that node.

The higher the profit to risk ratio for the attacker, the greater the motivation for the attacker.

The paradigm shift is relevant: From the traditional risk = impact x probability to:

- Attacker income: Value for each element of the network.
- Attacker probability: Directly proportional to accessibility.
- Attacker risk: 1/anonymity.

The value of each element resides in the node. Anonymity resides on the edge.
The profit to risk ratio for the attacker (PAR) = value x accessibility x (anonymity / k), where k is the potential punishment probability for the attacker.
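
A purely illustrative calculation of that formula, with invented numbers:

    # Profit-to-risk ratio for the attacker (PAR); all inputs are made-up examples
    value = 100000       # value of the element reached by the attacker
    accessibility = 0.3  # relative frequency from the random-walker model
    anonymity = 0.25     # e.g. the inverse of the number of users sharing the access
    k = 0.5              # potential punishment probability for the attacker

    par = value * accessibility * (anonymity / k)
    print("PAR =", par)  # 100000 * 0.3 * 0.5 = 15000.0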


Chapter 6. Mathematical Model II: Dynamic Intentional Risk

Zero-day attacks are not integrated in the model.

In static risk:

- The most important single attribute is Value. The value depends on the percentage of value accessible by the user.
- The attacker uses their authorised access.
- Anonymity is an important incentive. Lack of anonymity is a deterrent.
- Accessibility has no cost (the user is already authorised).
- There is a higher level of personal risk perception.
- The higher the number of users, the higher their perceived anonymity.

In dynamic risk:

- The most important single attribute is accessibility.
- The degree of anonymity is not a deterrent (the user is not already authorised or known).

- The hacker tries to access the entire value.
- Typical values of anonymity: coming from the Internet, anonymity equals 1; from wireless, 0.5; and from the intranet, 0.

Accessibility in Dynamic Risk
Each jump of a non-authorised user from one element to another element increases the cost for the attacker. The more distance to the value, the more difficult and costly the attack is.

Dynamic risk construction
First step: Performing a vulnerability scan of the network to get all non-authorised paths (known vulnerabilities, open ports, application fingerprinting and so forth).

The vulnerability scanner used is Nessus.

Two types of potential connections:
- Affinities: Two nodes sharing e.g. OS, configurations and users.
- Vulnerabilities.

A modified version of the PageRank algorithm is used.

Dynamic Risk model

User network + admins network + affinities + vulnerabilities

Anonymity does not play any role in Dynamic Risk but accessibility is the main parameter.

Each edge has an associated weight. The dynamic risk of an element is the potential profit the attacker obtains by reaching that element. As anonymity is not relevant in the context of dynamic risk, it is not necessary to collapse its associated network.

The accessibility of an element of the Dynamic Risk Network is the value we get for the relative frequency of a biased random walker through that element.

- Dynamic risk = value x accessibility.
- The dynamic risk of a network is the maximum dynamic risk value of its elements (interesting idea - why not the sum?).
- The dynamic risk average = the total value found in the vaults x the accessibility average (the root mean square of all accessibility values associated with elements of the network in the context of dynamic risk). A small numerical sketch of these definitions follows below.
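
The sketch below works through the three definitions with invented values (the element names and numbers are mine, not the book's):

    import math

    # Made-up accessibility (relative biased-walker frequency) and value per element
    accessibility = {"web": 0.05, "app": 0.15, "db": 0.40}
    value = {"web": 0, "app": 20000, "db": 100000}     # "db" plays the role of the vault

    dynamic_risk = {e: value[e] * accessibility[e] for e in accessibility}
    network_risk = max(dynamic_risk.values())          # maximum over the elements

    rms_accessibility = math.sqrt(sum(a ** 2 for a in accessibility.values()) / len(accessibility))
    average_risk = sum(value.values()) * rms_accessibility

    print("Per-element dynamic risk:", dynamic_risk)
    print("Network dynamic risk   :", network_risk)
    print("Dynamic risk average   :", round(average_risk, 1))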


Chapter 7. Towards the Implementation of the Model

Source ports in this model are not important. They are mostly generated randomly.

Access levels. Restricted and unrestricted.
The higher the level of privilege, the more information and functionality an attacker can access. Typically there are two types of accesses, based on different ports:

- Restricted end user access: Always authorised and mostly with low risk.

- Unrestricted technical access: Any access that allows a technical user or an external hacker to have unrestricted access to code, configuration or data. It can be authorised or gained via an exploit. It is high risk: using admin access in an application, an attacker can in most cases escalate privileges to gain control over the server and the network.

For static risk we need to find which accesses are already authorised and normal. The frequency of connections for each socket (especially for the frequently used sockets) informs about the busiest routes and how many hosts accessed a specific application.

For dynamic risk, we need to model the potential routes that a hacker might find and exploit. For an attacker, sockets that are used normally are desirable since they are more anonymous.

Attackers will select routes where they can obtain the most privileges with the least effort and get the closest to their end goal.

Other unknown risks are out of the scope of this proposal. This is a key point to understand.

To calculate anonymity in the static risk network we need to collapse all the IP sources that connect to the same port destination. It will be the inverse of the number of IP sources collapsed.
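
A small sketch of that collapse step on a made-up table of sniffed connections (the IPs and ports are invented for illustration):

    from collections import defaultdict

    # Hypothetical sniffed connections: (source IP, destination IP:port)
    connections = [
        ("10.0.0.1", "192.168.1.5:443"),
        ("10.0.0.2", "192.168.1.5:443"),
        ("10.0.0.3", "192.168.1.5:443"),
        ("10.0.0.1", "192.168.1.9:22"),
    ]

    sources_per_destination = defaultdict(set)
    for source, destination in connections:
        sources_per_destination[destination].add(source)

    # Anonymity of each collapsed edge = 1 / number of distinct IP sources
    anonymity = {dst: 1 / len(srcs) for dst, srcs in sources_per_destination.items()}
    print(anonymity)   # {'192.168.1.5:443': 0.333..., '192.168.1.9:22': 1.0}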

Value: How much the data or functionality is worth for the attacker. It needs to be placed manually into those vault nodes.

The book ends with the great news that the authors are working on a proof of concept.


Innovation in IT Security


What networks can tell us about the world - A lecture (2 of 3) by Mark Newman

For those willing to get introduced to the world of complex networks, the three lectures given by Mark Newman, a British physicist, at the Santa Fe Institute on 14, 15 and 16 September 2010 are a great way to get to know a little bit about this field.

In his first lecture, Mark Newman introduced what a network was. Let's continue with the second lecture, in which he explains what we can do with complex networks. You can find it here.

In this post I summarise (certainly in a very personal fashion, although some points are directly extracted from his slides) the learning points I extracted from the lecture.

Positioning of nodes within a social network. Centrality
- The idea of distances in networks.
- The most famous experiment in networks is the one by Stanley Milgram in 1967. The same Milgram who, some years earlier, had run the famous experiment on obedience to authority.
- The network-related experiment that Milgram made relates to the concept of a "small-world".
- He explains the mathematical basis of the six-step relationship concept. Everybody is connected with everyone else in even fewer than six steps.
- If each person knows 100 people, then the number of people 1 step away from you is 100. The number of people 2 steps away from you is 100 x 100 = 10,000. The number of people 5 steps away is 100^5 = 10 billion (more than the current world population).
- A way to pinpoint important people in a social network. Closeness centrality: Average distance to everyone in the network. Those nodes with the shortest average distance are well connected. The most connected node is the one with the highest closeness centrality. However, this measure is not very useful in practice: it is costly to compute and the centrality values are very similar to each other in a social network.

Degree centrality
- Can we do better? Yes, the degree centrality. The degree is the number of nodes a node is connected to. Degree centrality is just the degree number.
- Hubs in the network play a really important role in the function of the network.
- He presents a very important graph in network science: the degree graph. The x axis represents the degree i.e. the number of connections that a node has. The y axis represents the number of nodes that have that degree.
- They are well spaced values.

PageRank
- "Degree is like a score where you get one point per person you know. But not all people are equally important".
- How can we signal whether a node is connected to a very important node? One algorithm is PageRank. The vector of ranks over all the nodes is an eigenvector. "Each node in the network has a score that is the sum of the scores of its neighbours."

Transitivity
- What does a triangle mean in a social network? A friend of my friend is my friend. Or, probably, two of my friends know each other.
- However if my friends don't know each other, I am more central, I have more influence in the network.
- Predicting future friendships: Look for pairs who have one or more mutual friends.

Homophily
- Probably not a big surprise but... 
- People tend to be friends with people in the same school grade.
- People tend to get married with people with a similar age.
- Liberal blogs like to be connected to liberal blogs.
- Conservative blogs like to be connected to conservative blogs.
- We can use homophily to make predictions.
- For example, on average about 70% of your friends vote like you do.
- Probably people change opinions to match their friends' and people change friends to match their opinions (both things happen).
- "83% of friends had the same ethnicity.
- However if those links are randomly chosen there is already a specific chance.
- Modularity helps us identifying when there is a lot of homophily in the network. The difference between the homophily case and the random case.

Modules, groups or communities
- Are there communities in a network?
- This helps us understand how networks will split up.
- A computer can calculate the definition of modularity for all nodes in the network.
- Understanding these network characteristics would enable us to solve real problems.


Sunset networking

The connected world - A lecture (1 of 3) by Mark Newman

For those willing to get introduced to the world of complex networks, the three lectures given by Mark Newman, a British physicist, at the Santa Fe Institute on 14, 15 and 16 September 2010 are a great way to get to know a little bit about this field.

Let's start with the first lecture. You can find it here.

In this post I summarise (certainly in a very personal fashion, although some points are directly extracted from his slides) the learning points I extracted from the lecture.

- We find networks in many different fields.
- They can be used to explain very different things happening in real life.
- A network is a collection of edges and nodes or vertices (plural of vertex). These words are taken from Spatial Mathematics.
- He proposes 4 types of networks, although the border among them is blurred: technological, information, biological and social networks.

Technological networks
- One example, the Internet is a complex network. Even though we human beings built it, we do not know its structure. However, we can make a scientific experiment and try to identify its structure. For example, using the program traceroute.
- When we see the result, we start understanding why the study of complex networks (e.g. with billions of nodes) helps us make networks more efficient and more robust.
- Some human-made networks are the Internet and the air transportation network. These are technological networks.
- In some networks we are interested in their static structure. In some others, for example in the airline network, we are also interested in the dynamics: the nodes are the cities connected by flights and the edges are the flights themselves. The dynamic study of a complex network is actually the cutting edge of network science these days.

Information networks
- Regarding information networks, for example the World Wide Web, where the nodes are web pages and the edges are the hyperlinks you can find in the web pages.
- A hyperlink has a direction, so the WWW is a directed network. In 1990 there were around 20 pages. In 2010 Google listed more than 25 billion. Actually, the number of pages is now effectively infinite: some pages don't exist until you ask for them.
- A recommendation network (e.g. books in Amazon that could be of your interest) is also an information network.

Biological networks
- An example is the metabolic network or the neural network.
- A food web (which species eats which species) is also an example of a complex network.
- A self-edge in the food web represents cannibalism.

Social networks
- His favorite. Jacob Moreno in 1934 already talked about sociograms. He observed kids playing in the playground.
- Actually, even Newman's grandfather, in a scientific paper, mentioned the idea of a social network.

Measuring social networks
- This is a complex endeavour. How can we measure? By observation, interviews, questionnaires, online data, archival records and message passing.
- Social networks govern the way diseases spread.
- Political connections, business board connections, dating connections are only some examples of complex networks.

The understanding of complex networks is still basic. 

In the second lecture, once he has explained how networks are described, Mark Newman will talk about how these network diagrams can be used.

Foggy networks?


Complex networks: Structure and dynamics by S. Boccaletti et al. - Summary based on extracts

Following the series of book summaries, student notes to papers and analysis related to Network Science posted in this blog, all of them easy to reach using the network science label, I focus this time on a classical survey paper titled
"Complex networks: Structure and dynamics" by S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, D.-U. Hwang.

This is a paper with over 100 pages and 888 references. The summary approach I follow this time is different. After having read and highlighted the paper, I literally copy here those statements that could be considered a non-complete summary of the paper. Network science students could find it useful as an initial reference to later on dive deep into the entire paper.

1. Intro 

- Networks: systems composed by a large number of highly interconnected dynamical units.
- The first approach to capture the global properties of such systems is to model them as graphs whose nodes represent the dynamical units, and whose links stand for the interactions between them.
- A relevant property regards the degree of a node, that is the number of its direct connections to other nodes. In real networks, the degree distribution P (k), defined as the probability that a node chosen uniformly at random has degree k or, equivalently, as the fraction of nodes in the graph having degree k, significantly deviates from the Poisson distribution expected for a random graph and, in many cases, exhibits a power law (scale-free) tail with an exponent taking a value between 2 and 3.
- real networks are characterized by correlations in the node degrees, by having relatively short paths between any two nodes (small-world property), and by the presence of a large number of short cycles or specific motifs.
- coupling architecture has important consequences on the network functional robustness and response to external perturbations, as random failures, or targeted attacks.
- how the network structure affects the properties of a networked dynamical system
- some brain diseases are the result of an abnormal and, some times, abrupt synchronization of a large number of neural populations
- finding the communities within a network is a powerful tool for understanding the functioning of the network, as well as for identifying a hierarchy of connections within a complex architecture

2. Structure

- The walk of minimal length between two nodes is known as shortest path or geodesic. A cycle is a closed walk, of at least three nodes, in which no edge is repeated.
- A component of the graph is a maximally connected induced subgraph. A giant component is a component whose size is of the same order as N.
- The degree (or connectivity) k_i of a node i is the number of edges incident with the node.
- in assortative networks the nodes tend to connect to their connectivity peers, while in disassortative networks nodes with low degree are more likely connected with highly connected ones
- The concept of betweenness can be extended also to the edges. The edge betweenness is defined as the number of shortest paths between pairs of nodes that run through that edge
- transitivity means the presence of a high number of triangles
- An alternative possibility is to use the graph clustering coefficient C
- A motif M is a pattern of interconnections
- a community (or cluster, or cohesive subgroup) is a subgraph G(N, L), whose nodes are tightly connected, i.e. cohesive
- The spectrum of a graph is the set of eigenvalues of its adjacency matrix
- The eigenvalues and eigenvectors of A, and N have been used to characterize either models and real networks, and also for discovering the presence of cohesive subgroups
- despite the inherent differences, most of the real networks are characterized by the same topological properties, as for instance relatively small characteristic path lengths, high clustering coefficients, fat tailed shapes in the degree distributions, degree correlations, and the presence of motifs and community structures
- in most of the real networks, despite of their often large size, there is a relatively short path between any two nodes. This feature is known as the small-world property
- when the scientists approached the study of real networks from the available databases, it was considered reasonable to find degree distributions localized around an average value, with a well-defined average of quadratic fluctuations. In contrast with all the expectancies, it was found that most of the real networks display power law shaped degree distribution
- Such networks have been named scale-free networks [2,93], because power-laws have the property of having the same functional form at all scales
- ER random graphs are the best studied among graph models, although they do not reproduce most of the properties of real networks
- the degree distribution is well approximated by a Poisson distribution
- The Watts and Strogatz (WS) model is a method to construct graphs having both the small-world property and a high clustering coefficient
- Graphs with a power-law degree distribution can be simply obtained as a special case of the random graphs with a given degree distribution. We denote such graphs as static scale-free to distinguish them from models of evolving graphs
- Static scale-free graphs are good models for all cases in which growth or aging processes do not play a dominant role in determining the structural properties of the network
- The Barabási–Albert (BA) model is a model of network growth inspired to the formation of the World Wide Web and is based on two basic ingredients: growth and preferential attachment
- The BA model does not allow for changes after the network is formed. However, in real networks like the WWW or a social network, connections can be lost and added. Albert and Barabási (AB) have proposed a generalization of their original model
- Here we present some of the most striking examples of real systems which have been studied as weighted networks. It has been found that the weights characterizing the various connections exhibit complex statistical features with highly varying distributions and power-law behaviors. Correlations between weights and topology provide a complementary perspective on the structural organization of such systems. Biological, social and technological networks.
- The easiest way to construct a weighted network is by considering a random graph with a given probability distribution P (k), and by assuming that the weights of the edges are random independent variables, distributed according to a weight distribution Q(w)
- Spatial networks: A particular class of networks are those embedded in the real space, i.e. networks whose nodes occupy a precise position in two or three-dimensional Euclidean space, and whose edges are real physical connections. The typical example are neural networks
- networks with strong geographical constraints are not small worlds
- example that power law degree distributions do not necessarily imply the small-world behavior
- random immunization in presence of geographical clustering might be more successful in controlling human epidemics than previously believed
- the structure of the world-wide airport network, finding empirical evidences that both degree and betweenness distributions decay as truncated power laws

3. Static and dynamic robustness

- Robustness refers to the ability of a network to avoid malfunctioning when a fraction of its constituents is damaged. This is a topic of obvious practical reasons
- static robustness, is meant as the act of deleting nodes without the need of redistributing any quantity
- dynamical robustness refers to the case in which the dynamics of redistribution of flows should be taken into account
- The two types of robustness are similar in spirit, but while the first can be analytically treated, e.g. by using the tools of statistical physics such as percolation theory, the analytical treatment of the second case is harder and in almost all cases one has to rely on numerical simulations.
- The study of random failures in complex networks can be exactly mapped into a standard percolation problem
- the response to attacks of the scale-free network is similar to the response to attacks and failures of the random graph network
- The numerical results indicate that both the global and the local efficiency of scale-free networks are unaffected by the removal of some (up to 2%) of the nodes chosen at random. On the other hand, at variance with random graphs, global and local efficiencies rapidly decrease when the nodes removed are those with the higher connectivity
- The conclusion that follows immediately is that any graph with a finite amount of random mixing of edges for which the second moment diverges does not have a percolation threshold
- targeted deletion of nodes in uncorrelated scale-free networks are highly effective if compared to random breakdown, as previously anticipated by the numerical simulations by Albert et al. [275]. This is due to the critical high degree of just a few vertices whose removal disrupts the whole network. On the other hand, tolerance to attacks in correlated networks is still a problem entirely to be explored. This is due in part to the lack of generic models that produce networks with arbitrary degree-degree correlations
- in order to prevent the breakdown of scale-free networks, one has to find an optimal criterion that takes into account two factors: the robustness of the system itself under repeated failures, and the possibility of knowing in advance that the collapse of the system is approaching
- cascading failures occur much easier in small-world and scale-free networks than in global coupling networks
- an appropriate randomness in path selection can shift the onset of traffic congestion, allowing to accommodate more packets in the network

4. Spreading processes

- Epidemic spreading vs rumour spreading
- epidemiological processes can be regarded as percolation like processes
- the long term maintenance of the infection in a closed population is impossible in the SIR model, due to the depletion of susceptibles, as the epidemic spread through the population
- in both models the spread of infections is tremendously strengthened on scale-free networks. For such a reason, here we shall mainly concentrate on the SIR model
- Heterogeneous graphs with highly skewed degree distributions are particularly important to describe real transmission networks. For example, in the case of sexual contacts, which are responsible for the diffusion of sexually transmitted diseases, the degree distribution has been found to follow a power-law
- a targeted immunization based on the node connectivity can restore a finite epidemic threshold and potentially eradicate a virus. This result gives also important indications on the best public health strategies to adopt in order to control and eradicate sexually transmitted diseases (as the HIV). The real problem here is that sexual promiscuous individuals are not always easy to identify
- vaccination of random acquaintances of random chosen individuals. This strategy is based on the fact that the probability of reaching a particular node by following a randomly chosen edge is proportional to the nodes degree
- structured scale-free networks do not possess the small-world property
- the diseases are longest lived in assortative networks



- desirable to spread the “epidemic” (the rumour) as fast and as efficiently as possible
- The standard rumor model is the so-called DK model
- denoted by ignorant (I), spreader (S) and stifler (R) nodes
- there is no “rumor threshold”, contrarily to the case of epidemic spreading. The difference does not come from any difference in the growth mechanism of s(t) (the two are actually the same), but from the disparate rules for the decay of the spreading process
- contrary to epidemic modelling, rumor processes can be tailored depending on the specific application. In this respect, there are still many issues to be addressed such as the influence of correlations, and dynamical rewiring. But we already know that even for a very poorly designed rumor, the final fraction of stiflers is finite at variance with epidemiological models

5. Synchronisation and collective dynamics

- Synchronization is a process wherein many systems (either equivalent or non-equivalent) adjust a given property of their motion due to a suitable coupling configuration, or to an external forcing
- We start with discussing the so called Master Stability Function approach
- Master stability function arguments are currently used as a challenging framework for the study of synchronized behaviors in complex networks, especially for understanding the interplay between complexity in the overall topology and local dynamical properties of the coupled units
- where a weighting in the connections has relevant consequences in determining the network’s dynamics
- asymmetry in the coupling was shown to play a fundamental role in connection with synchronization of coupled spatially extended fields
- conveying the global complex structure of shortest paths into a weighting procedure gives in that range a better criterion for synchronization than solely relying on the local information around a node
- asymmetry here deteriorates the network propensity for synchronization
- which are the essential topological ingredients enhancing synchronization in weighted networks? The answer is that only the simultaneous action of many ingredients provide the best conditions for synchronization
- The first ingredient is that the weighting must induce a dominant interaction from hub to non-hub nodes
- A second fundamental ingredient is that the network contains a structure of connected hubs influencing the other nodes
- the need of a dominant interaction from hubs to non-hubs for improving synchronization, also for highly homogeneous networks
- weighted networks is the most promising framework for the study of how the architectural properties of the connection wiring influence the efficiency and robustness of the networked system in giving rise to a synchronized behavior
- the coupling strength at which the transition occurs is determined by the largest eigenvalue of the adjacency matrix
- the ability of a given network to synchronize is strongly ruled by the structure of connections
- some studies suggested that small-world wirings always lead to enhance synchronization as compared with regular topologies

6. Applications

- the larger is the clustering coefficient, the easier the development of the consensus takes place
- the role of complex topologies in opinion dynamics is still to be better investigated
- network structure can be as important as the game rules in order to maintain cooperative regimes
- the Internet has a relatively high clustering coefficient, about 100 times larger than that of a ER random graph with the same size. Interestingly, the clustering properties increase with time. Moreover, the clustering coefficient is a power-law decaying function of the degree
- Finally, the betweenness distributions follow a power-law
- For the data mining in WWW, it is necessary to estimate the accuracy and valueness of such information. Interestingly, in the commercial search engine Google such estimation is performed by the so called “page rank” measurements
- it is expected that the large-scale network approach may lead to new insights on various longstanding questions on life, such as robustness to external perturbations, adaptation to external circumstances, and even hidden underlying design principles of evolution
- Similarly to metabolic networks, scale-free properties, high-clustering and small-world properties with hierarchical modularity and robustness against random attacks were observed in protein–protein interaction networks
- argued that the biological functional organization and the spatial cellular organization are correlated significantly with the topology of the network, by comparing the connectivity structure with that of randomized networks
- Protein crystallography reveals that the fundamental unit of the protein structure is the domain
- Up to now, we have presented evidences supporting that scale-free distributions, small-world, and high clustering seem to be universal properties of cellular networks
- One of the basic properties of scale-free networks is the existence of hub nodes
- The scale-freeness property of a network might indicate growing and preferential attachment as evolutionary principles that led to the formation of the network as we know it nowadays
- Suppressing a link between highly connected proteins and favoring links between highly connected and low-connected pairs of proteins decreases the likelihood of cross talks between different functional modules of the cell, and increases the overall robustness of the network against localized effects of deleterious perturbations. This feature (combined with scale-free and small-world property) is considered nowadays a key property to explain biological robustness
- Each network motif appears to carry out a specific dynamical function in the network, as it has been experimentally and theoretically demonstrated
- the brain organization is ruled by optimizing principles of resource allocation and constraint minimization
- This plasticity renders neurons able to continuously change connections, or establish new ones according to the computational and communication needs
- The literature usually refers to three different types of connectivity: neuroanatomical, functional and effective
- the description of neural system may be improved by using weighted networks
- Within the framework of learning of artificial neural networks, it has recently been shown that a small-world connectivity may yield a learning error and learning time lower than those obtained with random or regular connectivity patterns
- Some brain diseases, as epilepsy, are the result of an abnormal and, some times, abrupt synchronization of a large number of neural populations
- Many open questions e.g. the presence of motifs is a relevant feature in the structure of cortical networks. However, the role of motifs in brain dynamics remain unclear
- learning could induce changes in the wiring scheme
- Synchronization has been found to be enhanced by a small-world wiring structure. This finding has suggested a plausible paradigm to explain the emergence of some neural phenomena as epileptic seizures, characterized by a sudden and strong synchronization. Unfortunately, there is presently no conceptual or computational works to understand the role of connectivity in the control of these synchronized states

7. Other topics

- finding the communities within a network is a powerful tool for understanding the structure and the functioning of the network, and its growth mechanisms
- spectral graph partitioning
- A class of algorithms that work much better when there is no prior knowledge on the number of communities is the hierarchical clustering analysis used in social networks analysis
- a series of algorithms based on the idea of iteratively removing edges with high centrality score have been proposed. Such methods use different measures of edge centrality, as the random-walk betweenness, the current-flow betweenness
- fast method based on modularity
- The study of the eigenvector associated with the second smallest eigenvalue is of practical use only when a clear partition into two parts exists, which is rarely the case
- other algorithms use voltage drops, the graph is considered an electric circuit
- The ability to navigate and search for shortest paths in a network without knowledge on its whole topological structure is a relevant issue, not only in social systems, but also in optimization of information finding from the World Wide Web, traffic way-finding in a city, transport of information packets on the Internet, or diffusion of signaling molecules in a biological cell
- Analytical and numerical results show that the information does not distribute uniformly in heterogeneous networks
- Since in a highly heterogeneous network, as the Internet, the highest-degree nodes are connected to a significant fraction of all nodes in the network, the agent need only a few steps to find a node that is a neighbor of the target q
- This means that in the small world regime, the average search time increases very slowly with the network size, in comparison with regular and random networks, and also with respect to a strategy based on random walk
- For a single search problem the optimal network is clearly a highly polarized starlike structure. This structure is indeed very simple and efficient in terms of searchability, since the average number of steps to find a given node is always bounded, independently of the size of the system. However, the polarized starlike structure becomes inefficient when many search processes coexist in the  network, due to the limited capacity of the central node
- This means that only two classes of networks can be considered as optimal: starlike configurations when the number of parallel searches is small, or homogeneous configurations when the number of parallel searches is large.
- agents making use of local information of the net topology are more efficient in such dynamical environment than agents making use of the global information, thus suggesting that this can explain what occurs in multi-agent systems for the processes of allocation of distributed resources
- Finally, for high densities and small noises, the motion of walkers becomes ordered on the same scale of the system size, and all walkers move in the same spontaneously chosen direction






Networking

Student Notes: 3 Papers on complex networks vulnerabilities

This post compiles my notes on three Papers on vulnerabilities in complex networks:
- "Multiscale vulnerability of complex networks" by Stefano Boccaletti, Javier Buldu, Regino Criado and Julio Flores, Vito Latora, Javier Pello and Miguel Romance (2007).

- "Error and Attack Tolerance of Complex Networks" by Reka Albert, Hawoong Jeong and Albert-Laszlo Barabasi (2000).

- "Information Theory Perspective on Network Robustness" by Tiago A. Schieber, Laura Carpi, Alejandro C. Frery, Osvaldo A. Rosso, Panos M. Pardalos and Martin G. Ravetti (2015).

As always, I also add my usual disclaimer when dealing with Student Notes:

The intent of this post is not to create new content but rather to provide Network Science students with a (literal and non-comprehensive) summary of these papers.

Why does this Information Security blog touch the field of Network Science? I am convinced that we can apply Network Science learning points to our Information Security real-life scenarios.

As always, a little disclaimer: These notes do not replace the reading of the papers, they are just that, some student notes (or fragments) of the papers.


On the Paper titled "Multiscale vulnerability of complex networks" by Stefano Boccaletti, Javier Buldu, Regino Criado and Julio Flores, Vito Latora, Javier Pello and Miguel Romance (2007):

- The paper defines vulnerability of a complex network as the capacity of a graph to maintain its functional performance under random damages or malicious attacks.

- Network malfunctioning could be caused by the deletion of a node and all the links ending in it or by the deletion of one or several links between nodes. This paper focuses on the deletion of links (and not nodes).

- A proper measure of vulnerability should refer to measures of the link betweenness. In general we have to consider the full multiscale sequence of betweenness coefficients (a higher betweenness means less vulnerability).

- A geodesic is the shortest path between two nodes.

On the Paper titled "Error and Attack Tolerance of Complex Networks" by Reka Albert, Hawoong Jeong and Albert-Laszlo Barabasi (2000):

- This paper demonstrates that error tolerance is not shared by all redundant systems; only a class of inhomogeneously wired networks called scale-free networks displays it.

- However, error tolerance comes at a high price: these networks are extremely vulnerable to attacks. The removal of the nodes that play the most important role in achieving network's connectivity could affect functional performance.

- Complex (both empirical and theoretical) networks can be divided in two major classes based on P(k), i.e. the probability that a node in the network is connected to k other nodes. In the first case, P(k) is peaked at an average and decays exponentially for large k (an exponential tail). Typical examples of these exponential (homogeneous) networks are the random graph model of Erdos and Renyi and the small world model of Watts and Strogatz. Each node has approximately the same number of links. In the second case, there are inhomogeneous networks (scale-free networks) for which P(k) decays as a power law (a power law tail), i.e. P(k) ~ k^(-gamma). They do not have a characteristic scale. Highly connected nodes are statistically significant in scale-free networks.

- Connectivity in a homogeneous network follows a Poisson distribution peaked at the average degree <k> and decaying exponentially for k >> <k>.

- The scale-free model incorporates two common ingredients in real networks: growth and preferential attachment.

- The diameter d is the average length of the shortest paths between any two nodes in the network. It describes the interconnectedness of a network. Networks with a very large number of nodes can have a very small diameter.

- Error tolerance studies the changes in the diameter when a small fraction f of the nodes is removed.

- In exponential networks, the diameter increases monotonically with f: since all nodes have approximately the same number of links, they contribute equally to the network's diameter.

- In scale-free networks, the diameter remains unchanged under an increasing level of errors. The removal of "small nodes" does not alter the path structure.

- Attack survivability: When the most connected nodes are eliminated, the diameter of the scale-free network increases rapidly.

- In exponential networks, for fractions f > 0.28 we have cluster sizes S close to 0. Qualitatively similar to the percolation critical point.

- In scale-free networks, the threshold is extremely delayed as long as it is not a targeted attack (in that case f_c = 0.18). A small simulation sketch of both scenarios follows below.
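
As a rough illustration of the random-failure versus targeted-attack contrast (my own sketch with networkx, not the paper's code; the sizes and the 5% removal fraction are arbitrary):

    import random
    import networkx as nx

    n, f = 2000, 0.05                                   # network size and fraction of nodes removed
    G = nx.barabasi_albert_graph(n, 3, seed=7)          # scale-free test network

    def giant_size(graph):
        return max(len(c) for c in nx.connected_components(graph))

    # Random failure: remove a random 5% of the nodes
    G_fail = G.copy()
    G_fail.remove_nodes_from(random.sample(list(G_fail.nodes()), int(f * n)))

    # Targeted attack: remove the 5% most connected nodes (the hubs)
    G_attack = G.copy()
    hubs = sorted(G_attack.degree(), key=lambda nd: -nd[1])[: int(f * n)]
    G_attack.remove_nodes_from(node for node, _ in hubs)

    print("Giant component after random failure :", giant_size(G_fail) / n)
    print("Giant component after targeted attack:", giant_size(G_attack) / n)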

On the Paper titled "Information Theory Perspective on Network Robustness" by Tiago A. Schieber, Laura Carpi, Alejandro C. Frery, Osvaldo A. Rosso, Panos M. Pardalos and Martin G. Ravetti (2015):

- This paper proposes a dynamical definition of network robustness based on Information Theory.

- They define a failure as a temporal process defined in a sequence. Robustness then measures dissimilarities between topologies after each time step of the sequence.

- Robustness is the ability of the network to continue performing after failures or attacks.

- The most popular methodology to measure robustness is based on percolation theory and on the size of the biggest connected component. However, depending on the network structure, it is possible to attack a great part of it while keeping these measures blind to the changes.

-  This paper proposes to measure network robustness based on the Jensen-Shannon divergence. It quantifies the topological damage of each time step due to failures, not considering the consequences of the dynamical process operating via the network.

- The distance between a given topology and itself after a failure quantifies robustness.

- The Jensen-Shannon divergence between two probability distributions P and Q is defined as the Shannon entropy of the average of the two distributions minus the average of their entropies (a small numerical sketch of this definition appears after these notes).

- The robustness measure proposed by this paper depends not only on the network topology but on the sequence of failures over time, aiming to quantify the vulnerability of a given structure under a series of deterministic or stochastic failures.

- The computational cost of the probability distribution function (PDF) update depends on the link removed. New algorithms such as ANF or HyperANF (based on HyperLogLog counters) offer a fast and precise approach and obtain very good approximations of the distance probability distribution for graphs with millions of nodes in a few seconds.


- The network's average degree, mean degree and the minimum and maximum degree are immediately obtained from the degree distribution. The network's efficiency, diameter, average path length, fraction of disconnected pairs of nodes and other distance features can be obtained from the distance distribution.

- The knowledge of critical elements is of great importance to plan strategies either to protect or to efficiently attack networks.

- The problem of finding the best sequence of links to destroy the network can be solved through combinatorial optimization approaches.

 - This method can efficiently work with disconnections as the distance PDF is able to acknowledge the fraction of disconnected pairs of nodes.
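
As referenced above, here is a direct numpy transcription of the Jensen-Shannon definition used by the paper; the two distributions below are invented examples (e.g. a distance or degree distribution before and after a failure):

    import numpy as np

    def shannon_entropy(p):
        p = p[p > 0]                      # ignore zero-probability entries
        return -np.sum(p * np.log2(p))

    def jensen_shannon(p, q):
        m = 0.5 * (p + q)
        # Shannon entropy of the average minus the average of the entropies
        return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

    P = np.array([0.5, 0.3, 0.2])         # made-up distribution before the failure
    Q = np.array([0.4, 0.4, 0.2])         # made-up distribution after the failure
    print("Jensen-Shannon divergence:", jensen_shannon(P, Q))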

Happy robustness study!

A network city