In a variety of settings, our only glimpse at a network’s structure is through observations of a corresponding dynamical system. For instance, in a social network, we may observe a time series of members’ activities, such as posts on social media. In electrical systems, cascading chains of power failures reveal critical information about the underlying power distribution network. In biological neural networks, firing neurons can trigger or inhibit the firing of their neighbors, so that information about the network structure is embedded within spike train observations. These processes are “self-exciting” in that the likelihood of future events depends on past events. In these and other settings, a network’s structure corresponds to the extent to which one node’s activity stimulates or inhibits activity in another node. Understanding the interactions between nodes is thus critical both to recovering the underlying functional network structure and to accurately predicting likely future events.
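To make the “self-exciting” idea concrete, here is a minimal sketch of a discrete-time self-exciting process on a small network, using a Bernoulli autoregressive model as one common formalization. The three-node influence matrix A, the bias, and all other parameters are hypothetical and chosen purely for illustration; positive entries of A are excitatory and negative entries inhibitory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-node influence matrix: A[i, j] is the effect of an
# event at node j on node i's firing probability at the next time step.
# Positive entries are excitatory, negative entries inhibitory.
A = np.array([[0.0, 1.5, 0.0],
              [0.0, 0.0, -1.0],
              [2.0, 0.0, 0.0]])
bias = -1.0  # baseline log-odds of firing when no neighbors fired recently

def simulate(A, bias, T):
    """Simulate a discrete-time self-exciting (Bernoulli autoregressive) process."""
    n = A.shape[0]
    X = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        # Each node's firing probability depends on the previous step's events,
        # so the likelihood of future events depends on past events.
        p = 1.0 / (1.0 + np.exp(-(bias + A @ X[t - 1])))
        X[t] = rng.random(n) < p
    return X

events = simulate(A, bias, T=1000)
print(events.sum(axis=0))  # total event count per node
```

In this sketch, observing only the binary event matrix `events` while A is unknown is exactly the inference setting described above: the network structure must be recovered from the event stream alone.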
Relatively little is known about how to conduct accurate inference in these settings, or about how many events must be recorded before we can reliably infer the underlying network. Recent literature on high-dimensional statistics provides an initial toehold for investigators, but observations of dynamical systems exhibit strong temporal dependencies and other characteristics that preclude straightforward adoption of existing methodology. In this talk, I will describe sparsity-regularized inference methods and theoretical guarantees that reflect the role of the network’s degree distribution and other network properties in determining the complexity of the inference problem for large-scale networks. In addition, we will see how these techniques can be used in applications ranging from criminology to predicting adverse drug reactions.
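As a rough illustration of what “sparsity-regularized inference” can mean here, the sketch below fits one node’s incoming influence weights by L1-penalized logistic regression, solved with a simple proximal gradient (ISTA) loop. This is a generic sketch, not the talk’s actual method: the synthetic data, the true weight vector `true_w`, and all tuning parameters (`lam`, `step`, `iters`) are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synthetic data: X[t] holds binary events at time t for n nodes.
# We regress one node's next-step events y on all nodes' current events X,
# with an L1 penalty to encourage a sparse estimated influence vector.
n, T = 3, 2000
true_w = np.array([0.0, 2.0, 0.0])  # this node is excited only by node 1
bias = -1.0
X = (rng.random((T, n)) < 0.3).astype(float)   # past events (covariates)
p = 1.0 / (1.0 + np.exp(-(bias + X @ true_w)))
y = (rng.random(T) < p).astype(float)          # the node's next-step events

def l1_logistic(X, y, lam=0.02, step=0.1, iters=2000):
    """Proximal gradient (ISTA) for L1-regularized logistic regression."""
    T, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(iters):
        z = 1.0 / (1.0 + np.exp(-(b + X @ w)))
        w -= step * (X.T @ (z - y) / T)   # gradient step on the weights
        b -= step * np.mean(z - y)        # gradient step on the intercept
        # Soft-thresholding: the proximal operator of the L1 penalty,
        # which zeroes out small weights and yields a sparse estimate.
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w, b

w_hat, b_hat = l1_logistic(X, y)
print(np.round(w_hat, 2))  # sparse estimate of the influence vector
```

Repeating this fit once per node recovers a sparse estimate of the whole influence matrix; the theory mentioned above concerns how many events T such a procedure needs before those estimates become reliable.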
This is joint work with Eric Hall, Ben Mark, and Garvesh Raskutti.