Papers on Blockchain and Bitcoin: Student notes

A lot of time went into this post. The reader should find it useful as an initial introduction to both concepts: Bitcoin and blockchain.

As usual, a disclaimer note: This summary is not comprehensive and it reflects mostly literal extracts from the mentioned papers and article.

Let's start by summarising a classic: the Bitcoin paper written by Satoshi Nakamoto.

Paper 1
Bitcoin: A Peer-to-Peer Electronic Cash System by Satoshi Nakamoto

This paper appeared in 2008. The first Bitcoins were exchanged in 2009. By June 2011 there were around 10,000 users and 6.5 million Bitcoins [Info coming from this paper].

Bitcoins (BTC) are generated at a predictable rate. The eventual total number of BTC will be 21 million [Info coming from this paper].

The paper proposes a peer-to-peer version of electronic cash. The novelty is that no trusted third party is needed to tackle the double-spending challenge. The key assumption for the functioning of this peer-to-peer network is that the majority of CPU power is not controlled by an attacker.

The proposal is to replace the need for trust, an element that makes transaction costs higher, with a cryptographic (and distributed) proof. More specifically, this is done via a peer-to-peer distributed timestamp server that generates computational proof of the chronological order of transactions.

Interestingly, an electronic coin is a chain of digital signatures. The main challenge in an electronic payment system is avoiding double spending. A digital mint would solve this; however, the entire scheme would then depend on the mint.

The participants of the peer-to-peer network form a collective consensus regarding the validity of this transaction by appending it to the public history of previously agreed transactions (the blockchain) using a hashing function and their keypair. A transaction can have multiple inputs and multiple outputs [Info coming from this paper].

The only way to confirm the absence of a transaction is to be aware of all transactions. Without the role of a central party, this translates into two requirements:
- All transactions must be publicly announced.
- All market participants should agree on a single payment history. In practical terms, this means the majority of market participants.

The first technical element required by this electronic payment proposal is a timestamp server. A timestamp consists of a published hash of a block of items, and each timestamp includes the previous one in its hash, reinforcing all the timestamps before it.

How do we handle the distributed nature of the system in the case of the timestamp server? The author (or authors) proposes a proof-of-work system. In this case, proof of work means one CPU, one vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it.

Technically, the proof of work involves scanning for a value (a nonce) such that, when hashed, the hash begins with a required number of zero bits. The average work required is exponential in the number of zero bits, while the result can be verified by executing a single hash.
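As a toy illustration of that scanning process (a minimal sketch assuming a plain SHA-256 over arbitrary bytes, not Bitcoin's real double-hashed block header format):

```python
import hashlib

def proof_of_work(data: bytes, zero_bits: int) -> int:
    """Scan for a nonce so that SHA-256(data + nonce) starts with `zero_bits` zero bits."""
    target = 1 << (256 - zero_bits)             # hashes below this value qualify
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce                        # anyone can verify with a single hash
        nonce += 1

# Each extra zero bit doubles the expected number of hashes to scan.
print(proof_of_work(b"block of items", zero_bits=16))
```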

The reader can already start to grasp how CPU-intensive this electronic payment system is.

Nodes always consider the longest chain to be the correct one and they will keep working on extending it.

Incentives come both from the creation of new coins and from transaction fees. The first transaction in a block is a special transaction that starts a new coin owned by the creator of the block. Potential attackers will find it much more beneficial to devote their CPU cycles and electricity to creating new coins than to redoing existing blocks and trying to control enough of the consensus to impose their own chain.

Transactions are hashed in a Merkle tree. This makes payment verification possible without running a full network node; such verification is reliable as long as honest nodes control the network.

The author states that there is never a need to extract a complete standalone copy of a transaction's history.

As all transactions need to be published, the way to obtain privacy in this system is to de-couple identities from public keys. However, privacy is only partially guaranteed. An additional recommendation is to use a new keypair for each new transaction.

An attacker can only try to change one of their own transactions to take back money that they recently spent.




Paper 2
An Analysis of Anonymity in the Bitcoin System by Fergal Reid and Martin Harrigan

This paper provides further input (it is probably the first paper on the subject) on the topic mentioned earlier in this post: Bitcoin and the limits of user anonymity.

The decentralised nature of the BTC system and the lack of a central authority bring with them the need to make the entire transaction history publicly available.

Users in Bitcoin are identified by public keys. Bitcoin maps a public key to a user only in the user's own node, and it allows users to issue as many public keys as they wish. Users can also make use of third-party mixers (or laundries).

The interesting element of this paper is the description of the topological structure of two networks derived from Bitcoin's public transaction history: The transaction network and the user network. This is doable thanks to the transaction history being publicly available. After joining Bitcoin's peer-to-peer network, a client can freely request the entire history of Bitcoin transactions. This enables the possibility of performing passive identification analysis.

The authors of this paper studied BTC transactions from January 2009 to July 2011: 1,019,486 transactions and 12,530,564 public keys.

The authors of this paper are not aware of any network-structure studies of electronic currencies. However, such a study was done for a physical currency based on gift certificates, named Tomamae-cho, that circulated in Japan for three months in 2004-2005.

The flow of that currency showed that the cumulative degree distribution followed a power law, and the network showed small-world properties (high average clustering coefficient and low average path length).

Many papers point out how difficult it is to preserve anonymity in networks where user-behaviour data is available. The main postulate of the authors of this paper is that Bitcoin does not anonymise user activity.

The transaction network
It represents the flow of BTCs between transactions over time. Each node represents a transaction and each directed edge represents an output of the transaction corresponding to the source node that is an input to the transaction corresponding to the target node. Each edge includes a timestamp and a BTC value.

There is no preferential attachment in this network.

The user network
It represents the flow of BTCs between users over time. Each node represents a user and each directed edge represents an input-output pair of a single transaction. Each directed edge also includes a timestamp and a BTC value.

As a user can use many different public keys, the authors of the paper construct an ancillary network in which each vertex represents a public key. They connect vertices with undirected edges, where each edge joins a pair of public keys that are both inputs to the same transaction and are therefore presumed to be controlled by the same user.
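A minimal sketch of that contraction step, assuming transactions are simply given as lists of their input public keys (hypothetical data; the paper builds the same idea as an ancillary network and then contracts it):

```python
from collections import defaultdict

def find(parent, k):
    # Union-find lookup with path compression.
    while parent[k] != k:
        parent[k] = parent[parent[k]]
        k = parent[k]
    return k

def group_keys_by_user(transactions):
    parent = {}
    for inputs in transactions:
        for key in inputs:
            parent.setdefault(key, key)
        for other in inputs[1:]:
            ra, rb = find(parent, inputs[0]), find(parent, other)
            if ra != rb:
                parent[rb] = ra            # co-spent keys: presumed same user
    users = defaultdict(set)
    for key in parent:
        users[find(parent, key)].add(key)
    return list(users.values())

# Two transactions sharing the key "B" collapse three keys into a single user.
print(group_keys_by_user([["A", "B"], ["B", "C"], ["D"]]))
```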

The contraction of public keys into users generates a network that is a proxy for the social network of BTC users.

Disassembling anonymity
A first source for decreasing anonymity consists of integrating off-network information. Some BTC-related organisations relate public keys to personally identifiable information. Some BTC users voluntarily disclose their public keys in forums.

Bitcoin public keys are strings about 33 characters in length that start with the digit 1.


A second source of information is IP addresses. Unless users employ anonymising proxy technology such as Tor, it is reasonably safe to assume that the first IP address announcing a transaction is its source.

A third source is based on egocentric analysis and visualisation, e.g. Wikileaks published its public key to request donations. Analysing the transactions whose destination is that particular public key can also provide input on identities.

A fourth source is context discovery, e.g. identifying nodes that correspond to BTC brokers.

These techniques help in investigating BTC thefts. For example, a very quick transfer of BTCs between public keys (most of them not previously seen in the transaction history) can be an indication supporting a theft hypothesis.

There are other analysis paths involving tainted BTCs, order books from BTC exchanges, client implementations, time analysis and the like.

Mitigation strategies
The official BTC client could be patched to prevent the linking of public keys with user information; a service using dummy public keys could be implemented (although this would certainly increase transaction fees); the BTC protocol itself could even be modified to allow for BTC mixing at the protocol level.

For the time being, the authors of this paper state that physical cash payments still represent a competitive and anonymous payment system.

The final statement from the authors of this paper: "Strong anonymity is not a prominent design goal of the BTC system".




Paper 3
Bitcoin: Economics, Technology and Governance by Rainer Boehme, Nicolas Christin, Benjamin Edelman and Tyler Moore

This paper defines Bitcoin (BTC) as an online communication protocol facilitating the use of a virtual currency. It states that BTC is the first widely adopted mechanism to provide absolute scarcity of the money supply. Inflation does not have a place in this system.

Public keys serve as account numbers. New transactions published to the BTC network are periodically grouped into a block of recent transactions. A new block is added to the chain of blocks roughly every ten minutes.

In some cases, a transaction batch will be added to the block chain but then a few minutes later it will be altered because a majority of miners reached a different solution.

When listing a transaction, the buyer and the seller can also offer to pay a "transaction fee", normally 0.0001 BTC, which is a bonus payment to whichever miner solves the computationally difficult puzzle that verifies the transaction.

The paper reviews four key categories of intermediaries: Currency exchanges, digital wallet services, mixers and mining pools.

Currency exchanges
They exchange BTCs for traditional currencies or other virtual currencies. Most operate double auctions with bids and asks and charge a commission (from 0.2 to 2 percent). Today BTC resembles a payment platform more than a real currency.

There are significant regulatory requirements (including expensive certification fees) to establish an exchange. In addition, exchanges require considerable security measures, so their number is relatively limited.

Digital wallet services
They are data files that include BTC accounts, recorded transactions and the keys necessary to spend or transfer the stored value. In practice, digital wallet services tend to increase centralisation (and they must be highly available online, which brings high security requirements).

Losing a private key that has not been backed up means losing the ability to trade with the BTCs it controls (i.e. those it can digitally sign for).

The entire blockchain reached 30GB in March 2015.

Mixers
Mixers ensure that timing does not yield clues about money flows. They let users pool sets of transactions in unpredictable combinations. Mixers charge 1 to 3 percent of the amount sent. Mixer protocols are usually not public.


Mining pools
BTCs are created when a miner solves a mathematical puzzle. Mining pools now combine resources from numerous miners. Oversized mining pools threaten the decentralisation that underpins BTC's trustworthiness.

Uses of Bitcoin
Initially, it seems that illicit activities made use of BTC, given its openness and distributed nature. Every Bitcoin transaction must be copied into all future versions of the block chain. Updating the block chain entails an undesirable delay, making BTC too slow for many in-person retail payments.

Some scientists stress the importance of BTC for its ability to create a decentralised record of almost anything.

Risks in BTC
Market risk, due to the fluctuation in the exchange rate between BTC and other currencies. There is also the shallow-market problem: a person quickly trading a large amount would affect the market price.
Counterparty risk: Of the exchanges that closed (either due to a security breach or to low-volume business), 46% of them did not reimburse their customers after shutting down.

The BTC system offers no way to undo a transaction, which creates transaction risk (and affects end-consumer protection).

There is certainly some operational risk coming from the technical infrastructure and the already mentioned 51 percent attack.

Finally, BTC faces also privacy, legal and regulatory risks.

Crime
Three types: BTC-specific crime, BTC-facilitated crime and money laundering.

Regulation
The authors suggest that longstanding reporting requirements can provide a level of compliance for virtual currencies similar to what has been achieved for traditional currencies. However, they recommend considering regulation in the broader context of a global market for virtual-currency services.

Social science lab
Interestingly, most users treat their bitcoin investments as speculative assets rather than as means of payment.

Incentives
A so-far theoretical concern: larger blocks are less likely to win a block race than smaller ones.

Privacy and anonymity
Some authors claim that almost half of BTC users can be identified.

An open question posed by the authors
What happens if the BTC economy grows faster than the supply of bitcoins?

A final thought by the authors of this paper: BTC may be able to accommodate a community of experimentation built on its foundations.

Paper 4
Bitcoin-NG: A scalable blockchain protocol by Ittay Eyal, Adem Efe Gencer, Emin Gun Sirer, and Robert van Renesse (Cornell University)

This paper proposes a new blockchain protocol designed to scale. Original Bitcoin-derived blockchain protocols have inherent scalability limits: to improve efficiency, one has to trade off throughput against latency. BTC currently targets a conservative 10-minute interval between blocks, yielding 10-minute expected latencies for transactions to be encoded in the blockchain.

Bitcoin-NG achieves a performance improvement by decoupling Bitcoin's blockchain operation into two planes: leader election and transaction serialisation.

Some generic descriptions of the blockchain protocol
An output is spent if it is the input of another transaction. A client owns x Bitcoins at time t if the aggregate of unspent outputs to its address is x. The miners commit the transactions into a global append-only log called the blockchain.

Blockchain
The blockchain records transactions in units of blocks. A valid block contains: a solution to a cryptopuzzle involving the hash of the previous block; the hash (the Merkle root) of the transactions in the current block, which have to be valid; and a special transaction (the coinbase) crediting the miner with the reward for solving the cryptopuzzle. The cryptopuzzle is a double hash of the block header, whose result has to be smaller than a set value. The difficulty of the puzzle, set by this value, is dynamically adjusted so that blocks are generated at an average rate of one every ten minutes.

Bitcoin-NG
It is a blockchain protocol that serialises transactions allowing for better latency and bandwidth than BTC.

The protocol divides into time epochs. In each epoch, a single leader is in charge of serialising state machine transitions. To facilitate state propagation, leaders generate blocks. The protocol introduces two types of blocks: key blocks for leader election and microblocks that contain the ledger entries.

Leader election is already taking place in BTC. But in BTC the leader is in charge of serialising history, making the entire duration of time between leader elections a long system freeze. Leader election in BTC-NG is forward-looking and ensures that the system is able to continually process transactions.

Resilience
Bitcoin-NG is resilient to selfish mining against attackers with less than 1/4 of the mining power.

Bitcoin-NG shows that it is possible to improve the scalability of blockchain protocols to the point where the network diameter limits consensus latency and the individual node processing power is the throughput bottleneck.  


Paper 5
A Protocol for Interledger Payments by Stefan Thomas and Evan Schwartz

This paper deals with the complexity of moving money between different payment systems. The authors of the paper propose a way to connect different blockchain implementations. It uses ledger-provided escrow (conditional locking of funds) to allow secure payments through untrusted connectors.

This is a protocol for secure interledger payments across an arbitrary chain of ledgers and connectors. It uses ledger-provided escrow based on cryptographic conditions to remove the need to trust connectors between different ledgers. Payments can be as fast and cheap as the participating ledgers and connectors allow and transaction details are private to their participants.

The focus of this summary is not a deep description of the protocol but an introduction to the BAR (Byzantine, Altruistic, Rational) model.

Byzantine actors may deviate from the protocol for any reason, ranging from technical failure to deliberate attempts to harm other parties or simply impede the protocol.

Altruistic actors follow the protocol exactly.

Rational actors are self-interested and will follow or deviate from the protocol to maximize their short and long-term benefits.

The authors of the paper assume that all actors in the payment are either Rational or Byzantine. Any participant in a payment may attempt to overload or defraud any other actors involved. Thus, escrow is needed to make secure interledger payments.
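The exact crypto-conditions used by the protocol are not described in these notes; the sketch below only illustrates the general hash-lock idea behind ledger-provided escrow (hypothetical function names, with a SHA-256 preimage as the fulfilment):

```python
import hashlib, os

def new_condition():
    fulfilment = os.urandom(32)                      # secret held by the receiver
    condition = hashlib.sha256(fulfilment).digest()  # published to the ledgers
    return condition, fulfilment

def ledger_releases_escrow(condition, presented_fulfilment):
    # The ledger releases the escrowed funds only against a matching preimage.
    return hashlib.sha256(presented_fulfilment).digest() == condition

condition, fulfilment = new_condition()
print(ledger_releases_escrow(condition, fulfilment))       # True
print(ledger_releases_escrow(condition, os.urandom(32)))   # False
```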

This protocol proposes two working modes: The atomic mode and the universal mode.

In the atomic mode, transfers are coordinated by a group of notaries that serve as the source of truth regarding the success or failure of the payment. The atomic mode only guarantees atomicity when the notaries act honestly. Rational actors can be incentivised to participate with a fee.

The universal mode relies on the incentives of rational participants to eliminate the need for external coordination.



Paper 6
A Next-Generation Smart Contract and Decentralized Application Platform from Ethereum's GitHub repository

This white paper presents a blockchain implementation that is an alternative to BTC and, by design, more generic. It presents blockchain technology as a tool for distributed consensus. It covers not only cryptocurrencies but also financial instruments, non-fungible assets such as domain names, and any other digital asset controlled by a script, i.e. a piece of code implementing arbitrary rules (e.g. smart contracts).

Ethereum provides a blockchain with a built-in fully fledged Turing-complete programming language.

A recap on BTC
 
As already mentioned in the summary of Paper 1 in this post, BTC is a decentralised currency managing ownership through public-key cryptography with a consensus algorithm named "proof of work". It achieves two main goals: it allows nodes in the network to collectively agree on the state of the BTC ledger, and it allows free entry into the consensus process. How does it achieve the latter? By replacing the need for a central registrar with an economic barrier.

The ledger of a cryptocurrency can be thought of as a state transition system. The "state" in BTC is the collection of all coins (unspent transaction outputs, UTXO) and their owners.

The BTC decentralised consensus process requires nodes in the network to continuously attempt to produce packages of transactions called blocks. The network is intended to create a block every ten minutes. Each block contains a timestamp, a nonce, a hash of the previous block and a list of all transactions that have taken place since the previous block.

Requirement for the "proof of work": The double SHA256 hash of every block - a 256-bit number - must be less than a dynamically adjusted target (e.g. 2 to the power of 187).

The miner of every block is entitled to include a transaction giving themselves 25 BTC out of nowhere.

A malicious attacker will target the order of transactions, which is not protected by cryptography.

The rule is that in a fork the longest blockchain prevails. In order for an attacker to make his blockchain the longest, he would need to have more computational power than the rest of the network (51% attack).

Merkle Trees
A Merkle tree is a type of binary tree in which each node is the hash of its two children, so hashes propagate upwards. This way, a client, just by downloading the header of a block, can tell whether the block has been tampered with.
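A minimal sketch of a Merkle root computation (a generic sketch using a single SHA-256; the real Bitcoin tree uses double SHA-256 and duplicates the last node when a level has an odd number of nodes, a rule kept here):

```python
import hashlib

def merkle_root(leaves):
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])        # duplicate the last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())
# Tampering with any transaction changes the root stored in the block header.
print(merkle_root([b"tx1", b"txX", b"tx3"]).hex())
```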

A "simplified payment verification" protocol allows for light nodes to exist. They download only block headers and branches related to their transactions.

Alternative blockchain applications


Namecoin: A decentralised name registration database.
Colored coins and metacoins: A customised digital currency on top of BTC.


Basic scripting
UTXOs in BTC can also be owned by a script expressed in a simple stack-based programming language. However, this language has some drawbacks:

- Lack of Turing completeness. Loops are not supported.
- Value-blindness: UTXO are all or nothing.
- No opportunity to consider multi-stage contracts.
- UTXOs are blockchain-blind.

Ethereum builds an alternative framework with a built-in Turing-complete programming language.

Ethereum
An Ethereum account contains four fields: The nonce (a counter that guarantees that each transaction can only be processed once), the ether balance, the contract code and the account's storage.
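Purely as an illustration of those four fields (not Ethereum's actual encoding; the field names below are made up for readability):

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    nonce: int = 0        # counter so each transaction can only be processed once
    balance: int = 0      # ether balance
    code: bytes = b""     # empty for externally owned accounts
    storage: dict = field(default_factory=dict)   # the account's key-value storage

externally_owned = Account(balance=10)
contract = Account(code=b"\x60\x00", storage={"owner": "0xabc"})
```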

Ether is the crypto-fuel of Ethereum. Externally owned accounts are controlled by private keys and contract accounts are controlled by their contract code.

Contracts are autonomous agents living inside the Ethereum execution environment. Contracts have the ability to send messages to other contracts. A message is a transaction produced by a contract, whereas a transaction refers to the signed data package that stores a message to be sent from an externally owned account. Each transaction sets a limit on how many computational steps of code execution it can use.

Ethereum is also based on blockchain. Ethereum blocks contain a copy of both the transaction and the most recent state.

Ethereum applications
Token systems, financial derivatives (financial contracts mostly require reference to an external price ticker), identity and reputation systems, decentralised file storage and decentralised autonomous organisations.

Other potential uses are saving wallets, a decentralised data feed, smart multisignature escrow, cloud computing, peer-to-peer gambling and prediction markets.

GHOST
Blockchains with fast confirmation times suffer from reduced security due to blocks taking a long time to propagate through the network. Ethereum implements a simplified version of GHOST (Greedy Heaviest Observed Subtree) which only goes down seven levels.

Currency issuance
Ether is released in a currency sale at the price of 1000-2000 ether per BTC. Ether has an endowment pool and a permanently growing linear supply.

The linear supply reduces the risk of an excessive wealth concentration and gives users a fair chance to acquire ether.

Mining
BTC mining is no longer a decentralised and egalitarian task. It requires high investments. Most BTC miners rely on a centralised mining pool to provide block headers. Ethereum will use a mining algorithm where miners are required to fetch random data from the state. This white paper states that this model is untested.
Ethereum full nodes need to store just the state instead of the entire blockchain history. Every miner will be forced to be a full node, creating a lower bound on the number of full nodes, and an intermediate state tree root will be included in the blockchain after processing each transaction.

The question now is ... will it work?




Article 1
Technology: Banks Seek the Key to Blockchain by Jane Wild, Martin Arnold and Philip Stafford - FT.com 

This FT.com article on blockchain can be found here. The authors mention internal blockchain implementations and remind us that a blockchain is a shared database technology connecting consumers and suppliers, creating online networks with no need for middlemen or a central authority. Applications are endless, and supporters claim that trust is created by the participating parties.

The authors of this article mention the use of blockchain, also named distributed ledger, for new back-office implementations and even new governmental applications such as land registers.

There are two types of blockchains in terms of accessibility: invitation-only (private) and public (open). UBS and Microsoft are working with blockchain start-up Ethereum (running  an open source technology). Other banks are going the private blockchain way.

The authors of this article also mention that this technology faces key challenges such as robustness, security and regulation.

The BTC ledger already weighs more than 45 GB.

Bits and pieces





Using Networks To Make Predictions - A lecture (3 of 3) by Mark Newman

For those willing to get introduced to the world of complex networks, the three lectures given by Mark Newman, a British physicist, at the Santa Fe Institute on 14, 15 and 16 September 2010 are a great way to get to know a little bit about this field.

The first lecture introduced the concept of networks. The second lecture talked about network characteristics (centrality, degree, transitivity, homophily and modularity). Let's continue with the third lecture, this time on the impact of network science. You can find it here.

In this post I summarise (certainly in a very personal fashion, although some points are directly extracted from his slides) the learning points I extracted from the lecture.

Dynamics in networks
- For example, how does a rumor spread in a network?
- This aspect is much more controversial than the point touched in lectures 1 and 2.
- An example: Citation networks (e.g. the network of legal opinions or the network of scientific papers).
- "Price observed that the distribution of the number of citations a paper gets follows a power law or Pareto distribution - a fat-tailed distribution in which most papers get few citations and a few get many".
- This power law is somewhat surprising.

Power laws
- In comparison to a normal distribution, the power law shows that there are some nodes with a number of links that is several orders of magnitude higher. This does not happen in normal distributions.
- Examples of cases that follow a Pareto law (power law) are word counts in books, web hits, wealth distribution, family names, city populations, etc.
- Power law - the 80/20 rule. E.g. "the top 20% own 86% of the wealth. 10% of the cities have 60% of the people. 75% of people have surnames in the top 1%".
- Power laws are a very active area of study in complex systems; the usual functional form is written out just below this list.
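For reference, the functional form usually meant by "power law" (the exponent alpha is typically between 2 and 3 for distributions like citation counts, although the exact value depends on the data set):

```latex
p(k) \propto k^{-\alpha},
\qquad
P(K \ge k) \propto k^{-(\alpha - 1)}
```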

Where do power laws come from? Preferential attachment
- The importance of getting an early lead e.g. with an excellent product, or by good marketing.  
- A plausible theory is preferential attachment. Interestingly enough, this theory ignores the content of the papers; it only uses the number of links the nodes already have (see the short simulation after this list).
- First-mover advantage: in citations, if you are one of the first to write on a topic, your paper will be cited anyway, regardless of its content. Such papers hold the early lead in that specific field.
- How many citations you gain depends on how many you already have.
- In conclusion, it is much more effective, according to this theory, to write a mediocre paper on tomorrow's field rather than a superb paper in today's field.
- The long tail effect: A small number of nodes with  lots of connections.
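A toy simulation of the "the more you have, the more you get" mechanism (in the spirit of Price's model; the parameters below are made up):

```python
import random

def preferential_attachment(n_papers, refs_per_paper=3, seed=1):
    """Each new paper cites existing papers with probability proportional to the
    citations they already have (plus one, so uncited papers can still be found)."""
    random.seed(seed)
    citations = [0, 0]                         # start with two papers
    for _ in range(n_papers - 2):
        weights = [c + 1 for c in citations]
        cited = random.choices(range(len(citations)), weights=weights, k=refs_per_paper)
        for idx in set(cited):
            citations[idx] += 1
        citations.append(0)                    # the new paper enters uncited
    return citations

counts = preferential_attachment(5000)
print("most cited:", max(counts), "median:", sorted(counts)[len(counts) // 2])
# The earliest papers end up orders of magnitude above the median: a fat tail.
```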

The spread of a disease over a network
- Percolation model: in a given network, I colour some of the edges, and those coloured edges define a different network derived from my initial one.
- How does the structure of the network influence the spread of a disease?
- Degree is the number of connections you have.
- Hubs are extremely effective at passing diseases along.
- What about if we vaccinate hubs? Targeted vaccination.
- Herd immunisation.
- Targeted attacks are much more effective (clear link with information security)
- We can use the network itself to find the hubs.
- People who should be vaccinated are the most mentioned friends.

Network robustness
- Can we tell that a network is robust by looking at its structure? Let's go back to the concept of homophily (mentioned in Part 2 of this series of lectures).
- Homophily by degree: Party people hanging out with party people (positive correlation coefficient in social networks- high degree nodes connect with high degree nodes).
- You get a very dense core and very clean borders. Social networks are therefore very robust networks. This is exactly the opposite of what we would like in terms of disease spread.
- Social networks are very robust and easy to vaccinate against diseases.
- The Internet, however, is fragile. Its high-degree nodes connect with low-degree nodes and are spread out all over the network. Such networks are not so robust: if you knock out the high-degree nodes, you bring down the network very quickly.
- Number of connections (x axis) is the degree.
- The crucial factor in the spread of disease is airplanes.

Future directions
- Great slide: this is a very, very new field. "We need to
- Improve the measurement of networks.
- Understand how networks change over time.
- Understand how changing a network can change its performance, and perhaps improve it.
- Get better at predicting network phenomena.
- Predict how society will react or evolve based on social networks.
- Prevent disease outbreaks before they happen. 
- And..?"
- Sometimes you engineer a network and sometimes it works!


Networking city

Book Review: Intentional Risk Management Through Complex Networks Analysis - Innovation for Infosec

This post provides a non-comprehensive summary of a multi-author book published in 2015 titled "Intentional Risk Management Through Complex Networks Analysis".
I recommend this book to those looking for real science-based Information Security innovations. This statement is not a forced marketing slogan. It is a reality.
The authors of this book are, in alphabetical order: Victor Chapela, Regino Criado, Santiago Moral and Miguel Romance.

In this post I present some of the interesting points proposed by the authors. The ideas mentioned here are coming from the book. Certainly this summary is a clear invitation to read the book, digest its innovative proposals and start innovating in this demanding field of IT Security.

Chapter 1. Intentional Risk and Cyber-Security: A Motivating Introduction

The authors start distinguishing between Static Risk and Dynamic Risk. Static Risk is opportunistic risk (e.g. identity theft). Dynamic Risk is directed intentional risk that attempts to use potentially existing but unauthorised paths (e.g. using a vulnerability).

Static Risk is based on the probability that a user with authorised access to a specific application abuses that access for personal gain. This risk can be deterred by reducing anonymity.

In Dynamic Risk the attacker tries to get the most valuable node via the least number of hops via authorised or unauthorised accesses.

Currently the main driver for a cyber-attack is the expected profit for the attacker. The book also links Intentionality Management with Game Theory, specifically with the stability analysis of John Nash's equilibrium. The book uses Complex Network Theory (both in terms of structure and dynamics) to provide a physical and logical structure of where the game is played.

The authors consider intentionality as the backbone for cyber-risk management. They mention a figure, coming from a security provider, of around USD 400 billion as the latest annual cost of cyber-crime.

The authors make a distinction between:
- Accidental risk management, a field in which there is a cause that leads to an effect and attacks are prevented mostly with redundancy (e.g. in data centres) and
- Intentional risk management, in which we have to analyse the end goal of the attackers.

To prevent these attacks we can:

- Reduce the value of the asset.
- Increase the risk the attacker runs.
- Increase the cost for the attacker.

Traditionally, risk management methodologies have been based on an actuarial approach, using the typical probability x impact formula, with the probability based on the observed frequency of past events.

We need to assess which assets are the most valuable assets for the attackers.

Using network theory, whose foundations can also be found in this blog in summaries posted in October 2015, November 2015, December 2015, January 2016, February 2016 and March 2016, the more connected a node is (or the more accessible a computer system is), the greater the risk of it being hacked.

A key point proposed by this book: Calculated risk values should be intrinsic to the attributes of the network and require no expert estimates. The authors break down attackers' expected profit into these three elements:

- Expected income i.e. the value for them.
- The expense they run (depending on the accessibility both via a technical user access or a non-technical user access).
- Risk to the attacker (related to anonymity and some deterrent legal, economic and social consequences).

An attacker prefers busy applications that are highly accessible, admin access privileges and critical remote-execution vulnerabilities. The main driver for attackers is the value they can obtain. Attackers in the dynamic risk arena are not deterred by a lack of anonymity.

The authors relate anonymity to the number of users who have access to the same application.




Chapter 2. Mathematical Foundations: Complex Networks and Graphs (A Review)
Complex networks model the structure and non-linear dynamics of discrete complex systems.

The authors mention the difference between holism and reductionism. Reductionism works if the system is linear. Complexity depends on the degrees of freedom that a system has and whether linearity is present.

Networks are composed of vertices and edges. In complex networks small changes may have global consequences.

Euler walk: A path between two nodes for which every link appears exactly once. The degree of a node is the number of links the node shares.

If the number of nodes with odd degree is greater than 2, then no Euler walk exists.

If the number of nodes with odd degree equals 0, then there are Euler walks starting from any node.

If the number of nodes with odd degree equals 2, then an Euler walk exists only between the two odd-degree nodes.
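A minimal sketch of that check, assuming an undirected connected network given as a list of node pairs:

```python
from collections import Counter

def euler_walk_status(edges):
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    if odd == 0:
        return "Euler walks exist starting from any node"
    if odd == 2:
        return "an Euler walk exists only between the two odd-degree nodes"
    return "no Euler walk exists"

# The classic Koenigsberg bridge network has four odd-degree nodes: no walk.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(euler_walk_status(bridges))
```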

A graph is the mathematical representation of a network. The adjacency matrix of a graph is a way to determine the graph completely. A node with a low degree is weakly connected. A regular network is a network whose nodes have exactly the same degree.

In a directed network the adjacency matrix is not necessarily symmetric. Paths do not allow repetition of vertices while walks do. A tree is a connected graph in which any two vertices are connected by exactly one path.

Structural vulnerability: How does the removal of a finite number of links and/or nodes affect the topology of a network?

Two nodes with a common neighbour are likely to connect to each other. The clustering coefficient measures it.

The eigenvector centrality of a node is proportional to the sum of the centrality values of all its neighbouring nodes.
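In symbols (the standard formulation, with A the adjacency matrix and lambda its largest eigenvalue):

```latex
x_i = \frac{1}{\lambda} \sum_{j} A_{ij}\, x_j
\quad\Longleftrightarrow\quad
A\mathbf{x} = \lambda \mathbf{x}
```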

Spectral graph theory studies the eigenvalues of matrices that embody the graph structure.

Betweenness centrality: the edge betweenness of an edge is the fraction of shortest paths between pairs of vertices that run along it. The degree distribution provides the probability of finding a node in G with degree k.

Complex networks models
In random graphs, the probability that 2 neighbours of a node are connected is the probability that two randomly chosen nodes are linked. Large scale random networks have no clustering in general. The average distance in a random network is rather small.

Small world model
Some real networks, like the Internet, have characteristics which are not explained by uniformly random connectivity. Small-world property: the network diameter is much smaller than the number of nodes. Most vertices can be reached from the others through a small number of edges.

Scale-free networks
The degree distribution does not follow a Poisson-like distribution but a power law, i.e. the majority of nodes have low degree while some nodes, the hubs, have extremely high connectivity.

Additionally, many systems are strongly clustered with many short paths between the nodes. They obey the small world property.

Scale-free networks emerge in the context of a growing network in which new nodes prefer to connect to highly connected nodes. When there are constraints limiting the addition of new edges, then broad-scale or single-scale networks appear.

Assortative networks
Most edges connect nodes that exhibit similar degrees (the opposite is disassortative networks).

A Hamiltonian cycle in a graph passes through all its nodes exactly once. The line graph of a network is the graph whose nodes are the edges of the original network.


Chapter 3. Random Walkers

Two different types of random walkers: Uniform random walkers and random walkers with spectral jump (a personalisation vector).

Statistical mechanics: The frequency of all the nodes will be the same in all the random walkers developed. In any type of random walker the most important element is the frequency with which each node appears. 

"If we move on a network in a random way, we will pass more often through the more accessible nodes". This is the idea of the PageRank algorithm used by Google. The difficulty comes to compute the frequency of each node. A random walker on a network can be modelled by a discrete-time Markov chain.

Multiplex networks: The edges of those networks are distributed among several layers. It is useful to model Dynamic Risk.

Intentional risk analysis
Accessibility: Linked to the frequency of a uniform random walker with spectral jump in the weighted network of licit connections. Two types of nodes:

- Connection-generator nodes (e.g. Internet access, effective access of internal staff).
- Non connection-generator node (those nodes through which the communication is processed).

In static intentional risk (it exists, but it is not so key, I assume), the accessibility of each connection has zero cost because the accesses are achieved by using the existing structure of the network.

In dynamic intentional risk, each additional connection or non-designed access entails a cost for the attacker, who seeks access to the valuable information (the vaults).

Modelling accessibility
A biased random walker with spectral jumps, going to those nodes with an optimal cost/benefit ratio. The random walker makes movements approaching the vaults. Accessibility in dynamic intentional risk may be modelled using a biased random walker with no spectral jumps in a 3-layered multiplex network.

1. A first layer corresponding to spectral jumps (ending and starting connections).
2. A second layer with the existing connections registered by the sniffing.
3. A third layer with connections due to the existence of both vulnerabilities + affinities.


Chapter 4. The Role of Accessibility in the Static and Dynamic Risk Computation

The anonymity is computed for each edge of the intentionality network. The value and the accessibility are computed for each node. Two ways to calculate the edge's PageRank:

a. via the classic PageRank algorithm (frequency of access to an edge and the PageRank of its nodes).
b. via Line Graph i.e. the nodes are the edges of the original network.

The damping factor will be the jumping factor.

The outcome will be a weighted and directed network with n nodes and m edges. There are equivalent approaches using the personalization vector.

Chapter 5. Mathematical Model I: Static Intentional Risk

Static Risk: Opportunistic risk. Risk follows authorised paths.
Dynamic Risk: Directed intentional risk. Tendency to follow unauthorised paths. Linked to the use of potentially existing paths but not authorised in the network.

The model is based on the information accessibility, on its value and on the anonymity level of the attacker.

Intentionality complex network for static risk. Elements:

- Value: How profitable the attack is.
- Anonymity: How difficult it is to determine the attacker's identity.
- Accessibility: How easily the attack can be carried out.

Every node has a resistance (a measure for an attacker to get access). Value is located at certain nodes of the network called vaults. Different algorithms will be used: Max-path algorithm, value assignment algorithm and accessibility assignment algorithm.

Static risk intentionality network construction method:
1. Network construction from the table of connection captures.
2. Network collapsed and anonymity assignment.
3. Value assignment.
4. Accessibility assignment.

Two networks appear in this study: the users network and the admins network. Network sniffing provides the connections between the IP nodes and the IP:port nodes. Based on this sniffing, we get the number of users who use each edge. The inverse of that integer becomes the label for each edge. The max-path algorithm is then executed to distribute the value from the vaults to all the nodes of the networks.

The inverse of the number of users on each edge is used as a value-reduction factor. The higher the number of users who access a node, the greater the value reduction for a potential attacker at that node, but also the greater the anonymity they will enjoy.

Each edge is labelled with the frequency of access (the number of accesses). The accessibility of a node is linked to the accessibility of the edges connecting it. For each edge, the PageRank algorithm is calculated.

The higher the access frequency, the higher the probability that someone will misuse the information present in that node.

The higher the profit to risk ratio for the attacker, the greater the motivation for the attacker.

The paradigm shift is relevant: From the traditional risk = impact x probability to:

- Attacker income: Value for each element of the network.
- Attacker probability: Directly proportional to accessibility.
- Attacker risk: 1/anonymity.

The value of each element resides in the node. Anonymity resides on the edge.
The profit-to-risk ratio for the attacker (PAR) = value x accessibility x (anonymity / k), where k is the potential punishment probability for the attacker.


 Chapter 6. Mathematical Model II: Dynamic Intentional Risk

Zero-day attacks are not integrated in the model.

In static risk:

- The most important single attribute is Value. The value depends on the percentage of value accessible by the user.
- The attacker uses their authorised access.
- Anonymity is an important incentive. Lack of anonymity is a deterrent.
- Accessibility has no cost (the user is already authorised)
- There is a higher level of personal risk perception.
- The higher the number of users, the higher their perceived anonymity.

In dynamic risk:

- The most important single attribute is accessibility.
- The degree of anonymity is not a deterrent (the user is not already authorised or known).

- The hacker tries to access the entire value.
- Typical values of anonymity: coming from the Internet, anonymity equals 1; from wireless, 0.5; and from the intranet, 0.

Accessibility in Dynamic Risk
Each jump of a non-authorised user from one element to another element increases the cost for the attacker. The more distance to the value, the more difficult and costly the attack is.

Dynamic risk construction
First step: performing a vulnerability scan of the network to find all non-authorised paths (known vulnerabilities, open ports, app fingerprinting and so forth).

The vulnerability scanner used is Nessus.

Two types of potential connections:
- Affinities: Two nodes sharing e.g. OS, configurations and users.
- Vulnerabilities.

A modified version of the PageRank algorithm is used.

Dynamic Risk model

User network + admins network + affinities + vulnerabilities

Anonymity does not play any role in Dynamic Risk but accessibility is the main parameter.

Each edge has an associated weight. The dynamic risk of an element is the potential profit the attacker obtains by reaching that element. As anonymity is not relevant in the context of dynamic risk, it is not necessary to collapse the associated network.

The accessibility of an element of the Dynamic Risk Network is the value we get for the relative frequency of a biased random walker through that element.

- Dynamic risk = value x accessibility
- The dynamic risk of a network is the maximum dynamic risk value of its elements (interesting idea - why not the sum?)
- The dynamic risk average = the total value found in the vaults x the accessibility average (the root mean square of all the accessibility values associated with elements of the network in the context of dynamic risk). A small numerical sketch of these formulas follows this list.
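A small numerical sketch of those formulas with made-up values (value and accessibility per element, as described in the previous chapters):

```python
value         = {"web": 0.0, "app": 10.0, "vault": 100.0}
accessibility = {"web": 0.9, "app": 0.4,  "vault": 0.05}

dynamic_risk = {e: value[e] * accessibility[e] for e in value}
network_dynamic_risk = max(dynamic_risk.values())        # the maximum, not the sum

total_vault_value = 100.0
rms_accessibility = (sum(a * a for a in accessibility.values())
                     / len(accessibility)) ** 0.5
dynamic_risk_average = total_vault_value * rms_accessibility

print(dynamic_risk, network_dynamic_risk, dynamic_risk_average)
```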


Chapter 7. Towards the Implementation of the Model

Source ports in this model are not important. They are mostly generated randomly.

Access levels. Restricted and unrestricted.
The higher the level of privilege, the more information and functionality an attacker can access. Typically there are two types of accesses, based on different ports:

- Restricted end user access: Always authorised and mostly with low risk.

- Unrestricted technical access: Any access that allows a technical user or an external hacker to have unrestricted access to code, configuration or data. It can be authorised or gained via an exploit. It is a high risk: Using admin access in an application you can in most cases escalate privileges to gain control over the server and the network.

For static risk we need to find which accesses are already authorised and normal. The frequency of connections for each socket (especially for the frequently used sockets) informs about the busiest routes and how many hosts accessed a specific application.

For dynamic risk, we need to model the potential routes that a hacker might find and exploit. For an attacker, sockets that are used normally are desirable since they are more anonymous.

Attackers will select routes where they can obtain the most privileges with the least effort and get the closest to their end goal.

Other unknown risks are out of the scope of this proposal. This is a key point to understand.

To calculate anonymity in the static risk network we need to collapse all the IP sources that connect to the same port destination. It will be the inverse of the number of IP sources collapsed.

Value: How much the data or functionality is worth for the attacker. It needs to be placed manually into those vault nodes.

And the ending point of the book is the great news that the authors are working on a proof of concept.


Innovation in IT Security

What networks can tell us about the world - A lecture (2 of 3) by Mark Newman

For those willing to get introduced to the world of complex networks, the three lectures given by Mark Newman, a British physicist, at the Santa Fe Institute on 14, 15 and 16 September 2010 are a great way to get to know a little bit about this field.

In his first lecture, Mark Newman introduced what a network was. Let's continue with the second lecture, in which he explains what we can do with complex networks. You can find it here.

In this post I summarise (certainly in a very personal fashion, although some points are directly extracted from his slides) the learning points I extracted from the lecture.

Positioning of nodes within a social network. Centrality
- The idea of distances in networks.
- The most famous experiment in networks is the one by Stanley Milgram in 1967. The same Milgram who, some years earlier, had carried out the famous experiment on obedience to authority.
- The network-related experiment that Milgram made relates to the concept of a "small-world".
- He explains the mathematical basis of the six-degrees-of-separation concept: everybody is connected with everyone else in even fewer than six steps.
- If each person knows 100 people, then the number of people 1 step away from you is 100, the number 2 steps away is 100 x 100 = 10,000, and the number 5 steps away is 100^5 = 10 billion (more than the current world population).
- A way to pinpoint important people in a social network is closeness centrality: the average distance to everyone else in the network. Nodes with the shortest average distance are well connected, and the best-connected node is the one with the highest closeness centrality. However, this measure is not very useful in practice: it is costly to compute, and the centrality values in a social network are all very similar to each other.

Degree centrality
- Can we do better? Yes, the degree centrality. The degree is the number of nodes a node is connected to. Degree centrality is just the degree number.
- Hubs in the network play a really important role in the function of the network.
- He presents a very important graph in network science: the degree distribution. The x axis represents the degree, i.e. the number of connections a node has; the y axis represents the number of nodes that have that degree.
- They are well spaced values.

PageRank
- "Degree is like a score where you get one point per person you know. But not all people are equally important".
- How can we signal whether a node is connected to very important nodes? One algorithm is PageRank. The vector of ranks over all the nodes is an eigenvector. "Each node in the network has a score that is the sum of the scores of its neighbours".

 Transitivity
- What does a triangle mean in a social network? A friend of my friend is my friend; or, probably, two of my friends know each other.
- However if my friends don't know each other, I am more central, I have more influence in the network.
- Predicting future friendships: Look for pairs who have one or more mutual friends.

Homophily
- Probably not a big surprise but... 
- People tend to be friends with people in the same school grade.
- People tend to get married with people with a similar age.
- Liberal blogs like to be connected to liberal blogs.
- Conservative blogs like to be connected to conservative blogs.
- We can use homophily to make predictions.
- For example, on average about 70% of your friends vote like you do.
- Probably people change opinions to match their friends' and people change friends to match their opinions (both things happen).
- "83% of friends had the same ethnicity.
- However if those links are randomly chosen there is already a specific chance.
- Modularity helps us identifying when there is a lot of homophily in the network. The difference between the homophily case and the random case.
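The standard modularity formula behind that comparison (A is the adjacency matrix, k_i the degree of node i, m the number of edges and c_i the group of node i):

```latex
Q = \frac{1}{2m} \sum_{ij} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)
```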

Modules, groups or communities
- Are there communities in a network?
- This helps understanding how networks will split up.
- A computer can calculate the definition of modularity for all nodes in the network.
- Understanding these network characteristics would enable us to solve real problems.


Sunset networking

The connected world - A lecture (1 of 3) by Mark Newman

For those willing to get introduced to the world of complex networks, the three lectures given by Mark Newman, a British physicist, at the Santa Fe Institute on 14, 15 and 16 September 2010 are a great way to get to know a little bit about this field.

Let's start with the first lecture. You can find it here.

In this post I summarise (certainly in a very personal fashion, although some points are directly extracted from his slides) the learning points I extracted from the lecture.

- We find networks in many different fields.
- They can be used to explain very different things happening in real life.
- A network is a collection of edges and nodes or vertices (plural of vertex). These words are taken from Spatial Mathematics.
- Although the borders among them are blurred, he proposes four types of networks: technological, information, biological and social networks.

Technological networks
- One example: the Internet is a complex network. Even though we human beings built it, we do not know its structure. However, we can run a scientific experiment and try to identify its structure, for example using the program traceroute.
- When we see the result, we start to understand how the study of complex networks (e.g. with billions of nodes) helps us make networks more efficient and more robust.
- Some human-made networks are the Internet and the air transportation network. These are technological networks.
- In some networks we are interested in the static structure. In others, for example the airline network, we are also interested in the dynamics on the nodes (cities connected by flights) and the edges (the flights themselves). The dynamic study of complex networks is actually the cutting edge of network science these days.

Information networks
- An example of an information network is the World Wide Web, where the nodes are web pages and the edges are the hyperlinks you can find in those pages.
- A hyperlink has a direction, so the WWW is a directed network. In 1990 there were around 20 pages; in 2010 Google listed more than 25 billion. Actually, the number of pages is now effectively infinite: some pages don't exist until you ask for them.
- A recommendation network (e.g. books in Amazon that could be of your interest) is also an information network.

Biological networks
- An example is the metabolic network or the neural network.
- A food web (which species eats which species) is also an example of a complex network.
- A self-edge in the food web represents cannibalism.

Social networks
- His favourite. Jacob Moreno already talked about sociograms in 1934; he observed kids playing in the playground.
- Actually, even Newman's grandfather, in a scientific paper, mentioned the idea of a social network.

Measuring social networks
- This is a complex endeavour. How can we measure? By observation, interviews, questionnaires, online data, archival records and message passing.
- Social networks govern the way diseases spread.
- Political connections, business board connections, dating connections are only some examples of complex networks.

The understanding of complex networks is still basic. 

In the second lecture, once he has explained how networks are described, Mark Newman will talk about how these network diagrams can be used.

Foggy networks?