The Internet has changed the way we make decisions, but the way executives make decisions hasn’t changed at all. Board members focus on internal data while, every day, competitors leave behind online breadcrumbs filled with valuable external data: a job advert, a new patent filing, a product launch, a social media post and more. Insights gleaned from this data can help companies look ahead and make more informed decisions.
In this lecture, Jorn Lyseggen will talk about his new book Outside Insight, which includes case studies of the successes and failures of international companies including Nike, Volvo, L’Oréal, Manchester United and the World Wide Fund for Nature, as well as the Obama 2012 campaign.
Jorn Lyseggen (@jorn_lyseggen) is the CEO of Meltwater, a company that develops and markets media monitoring and business intelligence software. The company was founded in 2002 in a shack in Norway with just $15,000 of start-up capital. Now, Meltwater employs more than 1,000 people in 60 offices across six continents, and has over 23,000 clients worldwide. The company has won various awards, and Jorn also founded the Meltwater Entrepreneurial School of Technology, a training programme and seed fund for African entrepreneurs.
This event is strictly on a first-come, first-served basis with no tickets available, and is open to the general public. We strongly advise that you arrive at the venue no later than 6:30pm as late admittance may not be granted.
Speaker: Dr Andreas Jungherr, Junior Professor at the University of Konstanz
Over the last ten years, social scientists have found themselves confronting a massive increase in available data sources. In the debates on how to use these new data, the research potential of “digital trace data” has featured prominently. While various commentators expect digital trace data to create a “measurement revolution”, empirical work has fallen somewhat short of these grand expectations. In fact, empirical research based on digital trace data is largely limited by the prevalence of two central fallacies: First, the n=all fallacy; second, the mirror fallacy.
As Professor Jungherr will argue, these fallacies can be addressed by developing a measurement theory for the use of digital trace data. For this, researchers will have to test the consequences of variations in research designs, account for sample problems arising from digital trace data, and explicitly link signals identified in digital trace data to sophisticated conceptualizations of social phenomena. He will outline the two fallacies in greater detail, then discuss their consequences for three general areas of work with digital trace data in the social sciences: digital ethnography, proxies, and hybrids. For each, Professor Jungherr will present selected prominent studies, predominantly from political communication research. He will close with a short assessment of the road ahead and of how these fallacies might be constructively addressed by the systematic development of a measurement theory for work with digital trace data in the social sciences.
Elicitation is the study of statistics or properties which are computable via empirical risk minimization. This has applications in understanding which loss function to use in a regression for a particular statistic or finding a surrogate loss function which is easier to optimize.
While several recent papers have approached the general question of which properties are elicitable, we suggest that this is the wrong question—all properties are elicitable by first eliciting the entire distribution or data set, and thus the important question is how elicitable. Specifically, what is the minimum number of regression parameters needed to compute the property?
Building on previous work, we introduce a new notion of elicitation complexity and lay the foundations for a calculus of elicitation. We establish several general results and techniques for proving upper and lower bounds on elicitation complexity. These results provide tight bounds for eliciting the Bayes risk of any loss, a large class of properties which includes spectral risk measures and several new properties of interest.
Joint work with Rafael Frongillo.
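As a minimal illustration of the idea behind elicitation (an invented sketch, not code from the talk): the choice of loss in empirical risk minimization determines which statistic is recovered. Under squared loss the minimizer is the sample mean, while under absolute loss it is the median; the data set and grid search below are purely illustrative.

```python
import numpy as np

def erm(data, loss, grid):
    """Return the grid point minimizing the average loss over the data."""
    risks = [np.mean([loss(t, x) for x in data]) for t in grid]
    return grid[int(np.argmin(risks))]

data = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
grid = np.linspace(0.0, 12.0, 1201)  # candidate reports, step 0.01

# Squared loss elicits the mean; absolute loss elicits the median.
mean_hat = erm(data, lambda t, x: (t - x) ** 2, grid)
median_hat = erm(data, lambda t, x: abs(t - x), grid)
```

Note how the outlier (10.0) pulls the squared-loss minimizer towards 3.6 while the absolute-loss minimizer stays at the median, 2.0.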
The traditional econometric approach for inferring properties of strategic interactions that are not fully observable in the data relies heavily on the assumption that the observed strategic behavior has settled at an equilibrium. This assumption is not robust in complex economic environments such as online markets, where players are typically unaware of all the parameters of the game in which they are participating and instead only learn their utility after taking an action. Behavioral models from online learning theory have recently emerged as an attractive alternative to the equilibrium assumption and have been extensively analyzed from a theoretical standpoint in the algorithmic game theory literature over the past decade. In this talk, Vasilis will present recent work in which he takes a learning-agent approach to econometrics, i.e. inferring properties of the game, such as private valuations or efficiency of allocation, by assuming only that the observed repeated behavior is the outcome of a no-regret learning algorithm rather than a static equilibrium. He will also present empirical results from applying these methods to datasets from Microsoft’s sponsored search auction system.
Joint work with Denis Nekipelov, Eva Tardos and Yichen Wang.
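The no-regret behavioral assumption can be made concrete with the standard exponential-weights (Hedge) algorithm — a generic sketch of a no-regret learner, not the authors’ estimation method; the loss matrix here stands in, purely for illustration, for a bidder’s losses over a discrete grid of bids.

```python
import numpy as np

def hedge_regret(losses, eta):
    """Run exponential weights over a T x K loss matrix (losses in [0, 1]);
    return the learner's regret against the best fixed action in hindsight."""
    T, K = losses.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        p = w / w.sum()             # current mixed strategy
        total += p @ losses[t]      # expected loss this round
        w *= np.exp(-eta * losses[t])
    return total - losses.sum(axis=0).min()

rng = np.random.default_rng(0)
T, K = 1000, 5                      # rounds, candidate bids (illustrative)
losses = rng.random((T, K))         # stand-in for per-round auction losses
eta = np.sqrt(8 * np.log(K) / T)    # tuned learning rate
regret = hedge_regret(losses, eta)
# Standard analysis guarantees regret <= sqrt(T * ln(K) / 2)
# for any loss sequence in [0, 1], so average regret vanishes as T grows.
```

The econometric move in the talk is the converse: rather than running such an algorithm, one assumes the observed bid sequence has low regret and infers which private valuations make that assumption consistent.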
Vasilis Syrgkanis is a Researcher at Microsoft Research, New England. He received his Ph.D. in Computer Science from Cornell University in 2014, under the supervision of Prof. Eva Tardos, and subsequently spent two years at Microsoft Research, New York as a postdoctoral researcher in the Machine Learning and Algorithmic Economics groups. His research addresses problems at the intersection of theoretical computer science, machine learning and economics. His work received best paper awards at the 2015 ACM Conference on Economics and Computation (EC’15) and at the 2015 Annual Conference on Neural Information Processing Systems (NIPS’15), and he was the recipient of the Simons Fellowship for graduate students in theoretical computer science (2012–2014).
Bias is an increasingly observed phenomenon in the world of artificial intelligence (AI) and machine learning: from gender bias in online search to racial bias in court bail pleas to biases in worldviews depicted in personalized newsfeeds. How are societal biases creeping into the seemingly “objective” world of computers and programs? At the core, what powers today’s AI are algorithms for fundamental computational problems such as classification, data summarization, and online learning. Such algorithms have traditionally been designed with the goal of maximizing some notion of “utility”, and identifying or controlling bias in their output has not been a consideration. In this talk, Nisheeth and Elisa will explain the emergence of bias in algorithmic decision making and present the first steps towards developing a systematic framework to control biases in several of the aforementioned problems. This leads to new algorithms that have the ability to control and alleviate bias, often without a significant compromise to the utility that the current algorithms obtain.
The fields of computer science and game theory both trace their roots to the first half of the 20th century, with the work of Turing, von Neumann, Nash, and others. Fast forwarding to the present, there are now many fruitful points of contact between these two fields. Game theory plays an important role in 21st-century computer science applications, ranging from social networks to routing in the Internet. The flow of ideas also travels in the other direction, with computer science offering a number of tools to reason about economic problems in novel ways. For example, computational complexity theory sheds new light on the “bounded rationality” of decision-makers. Approximation guarantees, originally developed to analyse fast heuristic algorithms, can be usefully applied to Nash equilibria. Computationally efficient algorithms are an essential ingredient to modern, large-scale auction designs. In this lecture, Tim Roughgarden will survey the key ideas behind these connections and their implications.
Tim Roughgarden is a Professor in the Computer Science and (by courtesy) Management Science and Engineering Departments, Stanford University, as well as a Visiting Professor in the Department of Mathematics at LSE.
Martin Anthony (@MartinHGAnthony) is Professor of Mathematics and Head of Department of Mathematics at LSE.
The Department of Mathematics (@LSEMaths) is internationally recognised for its teaching and research in the fields of discrete mathematics, game theory, financial mathematics and operations research.
Twitter Hashtag for this event: #LSEmaths
Jeff will evaluate the effects of different survey modes on respondents’ patterns of answers using an entropy measure of variability. While measures of central tendency show little difference between face-to-face and Internet surveys, he will show strong patterns of distributional differences between these modes: Internet responses tend towards more diffuse positions, due to the lack of personal contact during the process and of the social forces that contact provides. The results provide clear evidence that mode matters in modern survey research, and he will make recommendations for interpreting results from different modes.
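A minimal sketch of the kind of comparison described (the response counts below are invented for illustration): two 5-point-scale response distributions with identical means, where the more diffuse distribution has higher Shannon entropy even though a measure of central tendency cannot tell them apart.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a response distribution given raw counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

scale = np.arange(1, 6)                        # 5-point response scale
face_to_face = np.array([5, 10, 70, 10, 5])    # concentrated responses
internet     = np.array([15, 20, 30, 20, 15])  # more diffuse responses

mean_f2f = (scale * face_to_face).sum() / face_to_face.sum()
mean_web = (scale * internet).sum() / internet.sum()
# Identical central tendency (both means equal 3.0), but the entropies differ,
# which is exactly what a variability measure is needed to detect.
h_f2f = shannon_entropy(face_to_face)
h_web = shannon_entropy(internet)
```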
Innovation in most large companies these days is fairly incremental. There is nothing inherently wrong with this, as much of our progress as a society has resulted from such innovation. In recent years, however, we have seen a radical departure from incremental innovation: organizations that intentionally set extremely ambitious innovation objectives, where incremental innovation cannot get the job done.
The focus of this talk is to discuss the ways in which organizations mobilize resources to go after bold objectives which can move the needle: Moonshots. These are not incremental innovation activities, but instead multi-year missions that mobilize extensive scientific and technological resources to expand the horizons for both organizations and societies, and transform both in the process.
From the original Apollo mission and the IBM 360 mainframe computer to NASA, DARPA, Google X and Telefónica’s new spinoff company, Alpha, more and more organizations are trying to inductively develop a coherent approach to creating and executing organizational moonshots.
A major driving force to tackle Moonshots is the incredible advances in Data Science and Artificial Intelligence. It is widely believed that global human progress depends on the collection and analysis of data to fuel our increasingly digital world. There is tangible benefit including economic opportunity to be gained. But arguably most important, is data as a force for global and impactful social good and, here, the possibilities are endless.
We will examine the process by which Moonshot organizations transform new science into new progress: why companies should pursue Moonshots, the resources a company must bring to the initiative, the partnerships required, and the talent and values involved. We will discuss how Data Science, Artificial Intelligence and Machine Learning can be applied across many disciplines to solve long-standing problems with extraordinary results, and will soon enable the vast majority of humanity to experience many things that today are restricted to the very few. We will also consider both successful and unsuccessful Moonshot cases, and will discuss the internal organization of these initiatives as well as their external objectives.
Dr. Pablo Rodriguez is the CEO of Alpha. Prior to Alpha, Pablo led Telefónica’s corporate research lab and incubator. He has worked in several Silicon Valley startups and corporations including Inktomi, Microsoft Research and Bell Labs. His current interests are privacy and personal data, re-thinking the Internet ecosystem, and network economics. He co-founded the Data Transparency Lab, an NGO to drive data privacy and transparency. He is on the advisory boards of Akamai, EPFL, and IMDEA Networks. He has worked with chef Ferran Adrià (El Bulli) on computational gastronomy and with F.C. Barcelona on applying data science to soccer. He received his Ph.D. from the Swiss Federal Institute of Technology. He is an IEEE Fellow and an ACM Fellow. For further information on Pablo, please see www.rodriguezrodriguez.com
Alpha is an innovation facility established by Telefónica to define and solve big societal problems and democratise access to new opportunities. Alpha supports the creation of moonshots – multi-year development projects that address these big societal problems. Alpha aims to conceive and deliver radical solutions and breakthrough technology by collaborating with the right talent and the people impacted by the problems we are trying to solve.
We constantly generate digital traces in our online and offline lives, for example by using our smartphones, by interacting with everyday devices and the technological infrastructure of our cities or simply by posting content on online social media platforms. This information can be used to model and possibly predict human behaviour in real-time, at a scale and granularity that were unthinkable just a few years ago.
In this talk, Mirco will present his recent work in modelling human behaviour using these “digital traces”, with a specific focus on mobile data. He will provide an overview of the methodological, algorithmic, and systems issues related to the development of solutions that rely on the online analysis and modelling of this type of data. As a case study, he will show how mobile phones can be used to collect and analyse mobility patterns of individuals in order to quantitatively understand how mental health problems affect their daily routines and behaviour, and how potential changes can be automatically detected. Mirco will demonstrate that it is possible to observe a non-trivial correlation between mobility patterns and depressive mood using data collected by means of smartphones. Finally, he will also introduce his group’s efforts in using cellular data to model mobility patterns of individuals at scale, and their applications in the area of data for development.
Mirco Musolesi is a Reader in Data Science at the Department of Geography at University College London and a Turing Fellow at the Alan Turing Institute. He received a PhD in Computer Science from University College London and a Master’s degree in Electronic Engineering from the University of Bologna. He held research and teaching positions at Dartmouth College, Cambridge, St Andrews and Birmingham. He is a computer scientist with a strong interest in sensing, modelling, understanding and predicting human behaviour and dynamics in space and time, at different scales, using the “digital traces” we generate daily in our online and offline lives. He is interested in developing mathematical and computational models as well as implementing real-world systems based on them. This work has applications in a variety of domains, such as intelligent systems design, digital health, security & privacy, and data science for social good. More details about his research profile can be found at: http://www.ucl.ac.uk/~ucfamus/
Determining policy priorities is a challenging task for any government. The interdependency between policies and the corruption of government officials create a rugged landscape that governments need to navigate in order to reach their goals. We develop a framework to model the evolution of development indicators as a public goods game on a network. Our approach accounts for the complex network of interactions among policy issues as well as the principal–agent problem arising from budget assignment. Using development indicator data from more than 100 countries over 11 years, our main results are as follows: (i) well-known empirical patterns involving aggregate corruption and income can be explained by the opaque relationship between policy outcomes and the contributions of public agencies; (ii) achieving a multidimensional target depends on a learning process during the allocation of resources; (iii) the network of spillover effects provides country-specific context that is critical to ordering policy priorities; and (iv) a country may reach different development targets, but how ‘easy’ each is and through which policies it can be achieved may vary considerably. Our framework provides an analytic tool to generate bespoke advice on development strategies.
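The modelling approach can be caricatured with a toy dynamic (entirely illustrative — the paper’s actual model, indicators, network, and parameters differ): each policy issue’s indicator level, kept in [0, 1], grows with the government’s direct budget allocation plus spillovers from linked policy issues.

```python
import numpy as np

def step(indicators, allocation, spillover, alpha=0.1, beta=0.05):
    """One update of development-indicator levels in [0, 1].
    Growth combines direct budget allocation with network spillovers;
    the (1 - x) factor keeps each indicator bounded above by 1."""
    growth = alpha * allocation + beta * spillover @ indicators
    return indicators + growth * (1.0 - indicators)

# Four policy issues linked in a simple ring of spillovers (illustrative).
spillover = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
indicators = np.full(4, 0.1)    # initial indicator levels
allocation = np.full(4, 0.25)   # uniform budget shares

for _ in range(50):
    indicators = step(indicators, allocation, spillover)
```

Even in this toy version, changing the spillover matrix or the allocation vector changes which indicators advance fastest, which is the sense in which the network provides country-specific context for ordering policy priorities.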