Learning from a single network: How new mathematics reveals the hidden geometry of graphs
Giulia Livieri sets out remarkable new research that clarifies how learning works on complex graphs and how quickly any method, including Graph Convolutional Networks, can learn from them, moving us beyond empirical intuition towards a precise mathematical understanding of what makes graph learning succeed or fail.

When you browse a social platform, look up a research paper, or explore a map of the internet, you’re looking at data that naturally forms a network, a web of connections linking people, ideas, computers, or biological systems. Networks are everywhere, and understanding them is one of the most powerful ways to make sense of the modern world. Whether it’s identifying diverse communities on X, predicting how a virus spreads, or analysing the structure of scientific knowledge, the real insight often comes not from the individual pieces of data, but from how those data connect.
This view of data has inspired enormous progress in machine learning. Graphs themselves have long been used as mathematical and practical tools to represent relationships in such settings. In 2017, Thomas Kipf and Max Welling introduced what became the foundation of modern learning on networks: the Graph Convolutional Network, or GCN. Their breakthrough was deceptively simple. Convolutional neural networks — the workhorses behind image recognition — succeed because they exploit structure. Images are arranged on a grid, and nearby pixels tend to be related. By sliding small filters across the image, these networks detect patterns like edges, textures, and shapes, enabling everything from object detection to medical imaging.
Kipf and Welling asked: What if we could design a neural network model that directly exploits graph topology? Graphs are messier than images — there’s no neat grid, no orderly arrangement — but they carry something equally valuable: relationships. A graph may connect friends in a social network, link scientific papers through citations, or map how atoms bind inside a molecule. GCNs provide a way to turn these relationships into a learnable architecture, using the graph’s adjacency structure to guide how information is propagated. Instead of learning from local pixel neighbourhoods, they learn from each node’s neighbours, aggregating information across the network to make predictions.
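To make this concrete, here is a minimal, illustrative sketch of a single graph-convolution layer in the spirit of Kipf and Welling’s formulation: each node averages features over itself and its neighbours (with a degree-based normalisation) before a learned linear map. This is not code from our paper; the names (gcn_layer, adjacency, features, weights) and the toy graph are invented for the example, and practical implementations use sparse matrices and dedicated libraries.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One illustrative graph-convolution layer: normalised neighbourhood
    averaging followed by a learned linear transform and a ReLU."""
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)                 # add self-loops so each node keeps its own features
    deg = a_hat.sum(axis=1)                       # node degrees (including self-loops)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))      # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalisation of the adjacency
    return np.maximum(a_norm @ features @ weights, 0.0)  # aggregate, transform, ReLU

# Toy example: a 4-node path graph with 3 features per node
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)          # node features
W = np.random.rand(3, 2)          # weights (random here; learned in practice)
print(gcn_layer(A, X, W).shape)   # (4, 2): new 2-dimensional node embeddings
```

Stacking two or three such layers lets information flow from a node’s neighbours, its neighbours’ neighbours, and so on, which is exactly how the graph structure guides the predictions.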
This approach quickly transformed tasks like identifying communities, suggesting new links, predicting molecular properties, and forecasting traffic flows. Yet behind this success sat a quiet puzzle that researchers rarely discussed: How do you learn from a graph when you only have one graph?
In most areas of statistics and machine learning, generalisation comes from repetition. We flip a coin many times, run multiple experiments, or collect large datasets of similar objects. But with networks, we typically have just a single, enormous structure — one social network, one citation network, one molecular interaction network. There’s no opportunity to ‘sample another graph’. So where, exactly, does the learning signal come from?
This is the challenge my collaborators — Nils Detering, Luca Galimberti, Anastasis Kratsios, A. Martina Neuman — and I take up in our recent paper (Detering et al. 2025). Our work introduces new mathematical tools that uncover something remarkable: many large networks exhibit structural regularities and geometric patterns that emerge at scale, making learning possible, even when data appears sparse and unstructured.
These geometric patterns are not shapes in the usual sense, like triangles or spheres, but deeper regularities that emerge when networks grow large. They appear in both random, simulated networks and fixed, real-world ones. That is, even if a network seems chaotic, its large-scale structure often follows predictable principles that learning algorithms can exploit.
Building on this perspective, we establish new results that clarify how learning works on complex graphs and how quickly any method, including GCNs, can learn from them. This lets us move beyond empirical intuition and understand, in a precise mathematical way, what makes graph learning succeed or fail. By graph learning we mean using machine learning on data structured as graphs to find patterns, make predictions, and understand complex relationships in networks such as social media, molecules, or transportation systems.
Perhaps most strikingly, we show that GCNs can learn effectively even when only a tiny fraction of nodes are labelled. In many real applications, labels are scarce: maybe only a few papers in a citation network are categorised, or only a handful of molecules have been experimentally measured. Our results demonstrate that this is not merely a lucky accident or a quirk of a particular dataset. Rather, it stems from the fundamental geometry of large networks and the way information propagates smoothly over their structure. Even with sparse labels, the structure of the graph carries enough information for learning to proceed — up to a natural limit we characterise precisely.
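In practice, sparse labels enter training through a simple mechanism: the loss is computed only on the handful of labelled nodes, while predictions for every other node are shaped by how information propagates over the graph. The sketch below is purely illustrative (the names masked_cross_entropy, scores, labels, and labelled_mask are invented for the example and do not come from the paper), but it shows the idea.

```python
import numpy as np

def masked_cross_entropy(scores, labels, labelled_mask):
    """Cross-entropy loss averaged over the labelled nodes only."""
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)              # softmax per node
    node_loss = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return node_loss[labelled_mask].mean()                    # unlabelled nodes contribute nothing

scores = np.random.rand(4, 2)                         # e.g. GCN output: 4 nodes, 2 classes
labels = np.array([0, 1, 0, 1])                       # true classes (mostly unknown in practice)
labelled_mask = np.array([True, False, False, True])  # only 2 of the 4 nodes are labelled
print(masked_cross_entropy(scores, labels, labelled_mask))
```

Because the learning signal touches only the labelled nodes, it is the geometry of the graph that must carry that signal to everything else, which is precisely the regime our results analyse.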
These limits tell us how accurately models can learn, how fast they can converge, and how performance scales as networks grow. In other words, we provide a theoretical map of what’s achievable when learning from a single, fixed network — and where the boundaries lie.
In short, our research uncovers the hidden geometry of complex networks and explains why modern graph-based learning methods work as well as they do. By understanding the principles that govern learning on graphs, we can design more effective algorithms and better interpret the predictions they make. As networks continue to shape our digital, social, and biological worlds, these insights help illuminate how machines — and humans — can learn from the vast webs of connection that surround us.
Read our full paper here: https://arxiv.org/pdf/2509.06894
By Dr Giulia Livieri, Associate Professor, LSE Department of Statistics.