# An Advance in Mesoscale Graph Theory with Implications for Whole Brain Emulation

*[The following article is being reworked into a longer piece on WBE and dynamical systems, but I thought I would share this as is. Enjoy 🙂 — Peter]*

## Mesoscale Network Dynamics

Understanding the dynamics of large networks of interacting systems, such as computers or neurons, is fundamental to many transhumanist projects and undertakings. In particular, the prospect of Whole Brain Emulation (WBE) requires producing a system which preserves mesoscale dynamics. Other applications of mesoscale graph theory include materials science, understanding food webs, and more.

Patterns of activity in the brain are often described at three levels. The dynamics of individual neurons are considered the microscale level; cooperative rhythms of specific neuronal populations or subnetworks are the mesoscale level; and large-scale patterns of activity, such as mean-field dynamics, synchronization, “brain waves”, etc., correspond to the macroscale dynamics of the brain.

Anne-Ly Do, Johannes Höfener and Thilo Gross recently demonstrated the remarkable result that examining a single subgraph allows conclusions to be drawn about the dynamical properties of the network as a whole. The authors show that certain mesoscale subgraphs have precise and distinct consequences for the system-level dynamics. In particular, they induce characteristic dynamical instabilities that are independent of the structure of the larger embedding network. This suggests that a reduced network model might be possible in certain cases.
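To get a feel for what “inducing a dynamical instability” means, here is a toy linear-stability computation (my own illustration, not the authors’ construction): a stable feed-forward chain is destabilized by closing it into a strong feedback cycle, and the instability shows up in the leading eigenvalue of the Jacobian.

```python
import numpy as np

def leading_eigenvalue(J):
    """Largest real part among the eigenvalues of a Jacobian J.

    A steady state of dx/dt = f(x) is linearly stable iff this
    value is negative."""
    return max(np.linalg.eigvals(J).real)

# A three-node feed-forward chain: every eigenvalue is -1, so the
# steady state is stable.
J_chain = np.array([[-1.0,  0.0,  0.0],
                    [ 0.5, -1.0,  0.0],
                    [ 0.0,  0.5, -1.0]])

# Closing the chain into a strong feedback cycle (a small "motif")
# pushes an eigenvalue across the imaginary axis -- an instability
# induced by the subgraph, regardless of the diagonal damping.
J_cycle = J_chain.copy()
J_cycle[0, 2] = 8.0

print(leading_eigenvalue(J_chain))  # negative: stable
print(leading_eigenvalue(J_cycle))  # positive: unstable
```

The point of the toy example is only that a small structural change can flip the sign of the leading eigenvalue; the paper’s contribution is that certain subgraphs do this *independently* of the embedding network.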

The ability to analyze the dynamical implications of small mesoscale subgraphs and apply the results to larger networks will significantly advance our understanding of mesoscale network dynamics. It also helps explain the functional importance of the “network motifs” that dominate the structure of biological networks. Moreover, these results will facilitate the assembly of networks with defined dynamical implications from simpler, Tinker-Toy-like network components and might therefore facilitate the design of WBEs.

Various network architectures can produce identical dynamics. For example, Helge Aufderheide, Lars Rudolf and Thilo Gross examined food-web graphs and defined an equivalence relation over them based on their bifurcation diagrams.

The pairs of food webs depicted below have identical bifurcation diagrams, which defines an equivalence relation over such networks. The rightmost of the “competitive-exclusion motifs” depicts two top-level predators (populations 1 and 2) feeding on the same prey (population 3); this web can be shown to be dynamically equivalent to a single-predator model.
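Here is a rough numerical sketch of that equivalence (using plain textbook Lotka–Volterra equations with made-up parameter values, not the generalized-model machinery of the paper): when two identical predators feed on the same prey, the *total* predator population obeys exactly the single-predator equations, so both webs trace out the same dynamics.

```python
def step_two_predators(n, p1, p2, r=1.0, a=0.5, b=0.5, d=0.3, dt=1e-3):
    """One Euler step of prey n with two identical predators p1, p2."""
    dn  = r*n - a*n*(p1 + p2)      # prey growth minus total predation
    dp1 = b*a*n*p1 - d*p1          # predator 1: conversion minus mortality
    dp2 = b*a*n*p2 - d*p2          # predator 2: same functional form
    return n + dt*dn, p1 + dt*dp1, p2 + dt*dp2

def step_one_predator(n, p, r=1.0, a=0.5, b=0.5, d=0.3, dt=1e-3):
    """One Euler step of the equivalent single-predator model."""
    dn = r*n - a*n*p
    dp = b*a*n*p - d*p
    return n + dt*dn, p + dt*dp

# Start the lumped predator at p1 + p2 and integrate both webs.
n2, p1, p2 = 1.0, 0.4, 0.6
n1, p = 1.0, 1.0
for _ in range(5000):
    n2, p1, p2 = step_two_predators(n2, p1, p2)
    n1, p = step_one_predator(n1, p)

# Both differences are ~0 (machine precision): identical dynamics.
print(abs(n1 - n2), abs(p - (p1 + p2)))
```

The exact agreement here follows from symmetry between the two predators; the result in the paper is more general, covering webs related by mesoscale symmetries rather than literal identical copies.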

## Mesoscale Dynamical Systems Models and WBE

The brain is well modeled as a dynamical system. However, the present state of knowledge about the brain does not allow the kind of rigorous mathematical analysis of all its functions that one can apply to other physical systems, such as electronic circuits. We may nevertheless make some statements about models of brain function, and some quite large models now exist.

Dynamical equivalence is the basis for any whole brain emulation. We expect, for example, an emulated brain to respond similarly to the original when presented with identical stimuli. Since brain function corresponds not only to static brain structure but also to dynamic brain activity, a WBE must accurately model not only the connectome but also the resulting system dynamics. A WBE must therefore be *dynamically equivalent* to the original brain in some sense.

Understanding how mesoscale dynamics influences brain function will significantly advance WBE development, since it allows rigorous analysis and comparison of non-identical networks of components that nevertheless give rise to identical dynamics. Defining an equivalence relation over networks provides a framework within which we can state precisely what we mean by a given system being an emulation of a brain (see also http://cbt.beckman.uiuc.edu/papers_spring06/izhikevich04whichmodel.pdf ). Igor Belykh and Martin Hasler, for example, discuss clusters of synchrony in networks of bursting Hindmarsh–Rose (HR) model neurons, and Gilani and Hövel’s “Dynamical Systems in Neuroscience” (2012) includes MATLAB source code for some popular neural models. Moreover, this suggests a modular, Tinker-Toy-like approach to designing and building novel neural systems with predictable dynamics.
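The HR model itself is compact enough to sketch in a few lines. Below is a minimal Euler integration in Python (the references above use MATLAB) with standard textbook parameters, showing the model’s characteristic bursting regime:

```python
import numpy as np

def hindmarsh_rose(T=200.0, dt=0.005, I=3.0,
                   a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, x_rest=-1.6):
    """Euler-integrate the Hindmarsh-Rose neuron; returns the membrane trace x(t)."""
    steps = int(T / dt)
    x, y, z = -1.6, -10.0, 2.0  # arbitrary initial condition near rest
    xs = np.empty(steps)
    for i in range(steps):
        dx = y - a*x**3 + b*x**2 - z + I  # membrane potential
        dy = c - d*x**2 - y               # fast recovery current
        dz = r*(s*(x - x_rest) - z)       # slow adaptation current
        x, y, z = x + dt*dx, y + dt*dy, z + dt*dz
        xs[i] = x
    return xs

trace = hindmarsh_rose()
# With I = 3 the model bursts: x repeatedly spikes well above the
# resting value and falls back into quiescent phases near x_rest.
```

The slow variable z (note the small r) is what groups the fast spikes into bursts; sweeping the injected current I moves the model between quiescence, bursting and tonic spiking.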

Understanding the dynamical properties of networks will also let us explain a variety of phenomena that have previously resisted explanation, such as psychedelic experiences and hallucinations, and, perhaps more importantly, the ability of neural systems to produce information-rich output from information-poor input:

*Time-dependent visual hallucinations are one example of information produced by neural systems, in this case the visual cortex, themselves. Such hallucinations consist in seeing something that is not in the visual field. There are interesting models, beginning from the pioneering paper of Ermentrout and Cowan (1979), that explain how the intrinsic circuitry of the brain’s visual cortex can generate the patterns of activity that underlie hallucinations. These hallucination patterns usually take the form of checkerboards, honeycombs, tunnels, spirals, and cobwebs (see two examples in Fig. 23). Because the visual cortex is an excitable medium, it is possible to use spatiotemporal amplitude equations to describe the dynamics of these patterns (see the next section). These models are based on advances in brain anatomy and physiology that have revealed strong short-range connections and weaker long-range connections between neurons in the visual cortex. Hallucination patterns can be quasistatic, periodically repeatable, or chaotically repeatable, as in low-dimensional convective turbulence (see Rabinovich et al., 2000, for a review). Unpredictability of the specific pattern in the hallucination sequences means the generation of information that in principle can be characterized by the value of the Kolmogorov–Sinai entropy (Scott, 2004).*

## Want to learn more?

See Gilani, Taravat Saeb, and Philipp Hövel, “Dynamical Systems in Neuroscience” (2012), which includes source code for some popular models and describes how to compute bifurcation diagrams, and watch the videos below.
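The recipe behind a bifurcation diagram is simple: sweep a parameter, discard transients, and record the attractor at each parameter value. A minimal Python sketch using the logistic map as a stand-in (the classic pedagogical example, not one of the neural models in that paper):

```python
def attractor(r, x0=0.5, transient=1000, keep=64):
    """Settled points of the logistic map x -> r*x*(1-x) at parameter r."""
    x = x0
    for _ in range(transient):   # discard transient behaviour
        x = r * x * (1.0 - x)
    points = set()
    for _ in range(keep):        # record the attractor
        x = r * x * (1.0 - x)
        points.add(round(x, 6))  # round so a periodic orbit dedupes
    return sorted(points)

# The period-doubling route to chaos, read off the attractor sizes:
print(len(attractor(2.5)))  # 1  (fixed point)
print(len(attractor(3.2)))  # 2  (period-2 cycle)
print(len(attractor(3.5)))  # 4  (period-4 cycle)
```

Plotting `attractor(r)` against `r` over a fine sweep gives the familiar bifurcation diagram; for continuous-time neural models the same idea applies with an ODE integrator and, say, interspike intervals in place of the map iterates.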

## References

1. Engineering mesoscale structures with distinct dynamical implications, Anne-Ly Do *et al* 2012 *New J. Phys.* **14** 115022

2. Mesoscale symmetries explain dynamical equivalence of food webs, Helge Aufderheide *et al* 2012 *New J. Phys.* **14** 105014

3. Mesoscale and clusters of synchrony in networks of bursting neurons, Igor Belykh and Martin Hasler, Chaos **21**, 016106 (2011); http://dx.doi.org/10.1063/1.3563581

4. Izhikevich, Eugene M. “Which model to use for cortical spiking neurons?” *IEEE Transactions on Neural Networks* 15.5 (2004): 1063–1070.

5. Izhikevich, Eugene M. *Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting*. MIT Press, 2007.

6. Bickle, John, and Valerie Gray Hardcastle. *Philosophy of neuroscience*. John Wiley & Sons, Ltd, 2003.

7. Hasselblatt, Boris, and Anatole Katok, eds. *Handbook of dynamical systems*. Vol. 1. North Holland, 2002.

8. Gilani, Taravat Saeb, and Philipp Hövel. “Dynamical Systems in Neuroscience.” (2012).

9. Rabinovich, Mikhail I., et al. “Dynamical principles in neuroscience.” *Reviews of modern physics* 78.4 (2006): 1213.