By-
Dr. Abhinanda Sarkar
(Senior Faculty & Director Academics, Great Learning)
Not every discipline has a Nobel Prize associated with it. But most disciplines have awards that recognize great achievements and are regarded as the most prestigious by those in the field. The Nobel-equivalents in Computer Science, Mathematics, and Statistics have all been announced recently. It therefore seems like a good opportunity to talk about an interesting strand connecting them, namely the idea of representations, and how it weaves through data and technology.
The Concept of Representations in Complex Systems
Imagine a complicated object, like the human brain or a quantum mechanical system. Such things are hard to understand and study. What scientists try to do is break them down into simpler conceptual parts and represent those parts by easier-to-study objects. In fact, doing analysis and engineering on the representations themselves can be innovative and extremely useful in its own right. Let's look a little deeper, through three recent recognitions.
The Turing Award in Computer Science: Reinforcement Learning
The Turing Award in Computer Science has been given this year to Andrew Barto and Richard Sutton for their many years of foundational work in Reinforcement Learning (RL) – a fundamental technology behind gaming, robotics and artificial intelligence. For instance, generative AI such as ChatGPT uses Reinforcement Learning from Human Feedback (RLHF) to learn from prompts what the best answer might be. The principle is that some answers (in general, some actions) are better than others and need to be positively reinforced by a scheme of rewards. The system then attempts to maximize rewards. In the 1980s, Barto and his PhD student Sutton represented such action spaces by Markov Decision Processes (MDPs). MDPs are rich enough to include many choices of states and actions, to study actions that depend on the results of other actions, and to introduce uncertainty or randomness in action and response. But MDPs are also simple enough to study comprehensively and to build RL algorithms on. AI continues to use RL in continually creative ways, and this was a good year to – as my colleague Pavan Gurazada says – reinforce reinforcement learning.
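To make the reward-maximization idea concrete, here is a minimal sketch of tabular Q-learning – one of the classic RL algorithms built on the MDP representation – applied to a hypothetical toy MDP invented for illustration: four states on a line, where moving right eventually reaches a rewarding terminal state.

```python
import random

# Hypothetical toy MDP for illustration: states 0..3 on a line; actions
# move left (-1) or right (+1); reaching state 3 ends the episode with reward 1.
N_STATES, ACTIONS = 4, [-1, +1]

def step(state, action):
    """Return (next_state, reward); state 3 is terminal with reward 1."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

# Tabular Q-learning: learn Q(s, a), the estimated long-run reward of
# taking action a in state s and acting well thereafter.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Reinforcement: nudge Q(s, a) toward reward plus discounted future value.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

The MDP representation is what makes this loop possible at all: once states, actions, transitions, and rewards are spelled out, "learning" reduces to repeatedly updating a table of values.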
The International Prize in Statistics: Grace Wahba and Smoothing Splines
The International Prize in Statistics is relatively recent, but has established itself as a major recognition in the field. Given every two years, this year the prize has gone to Grace Wahba. When Wahba started her career in the 1960s, Statistics was largely a mathematical field, and yet she became a leader in introducing radical (and radically useful) computational methods in the statistical analysis of data. Specifically, Wahba is recognized as the ‘mother of smoothing splines’ in honor of her most impactful contributions. Splines are smooth curves – almost drawable by hand – that can represent data by connecting points in an approximate way. They allow the data to dictate the complexity of the underlying model; they are flexible representations of random phenomena. Wahba’s Representer Theorem made this precise. In a way, splines lie at the heart of many machine learning algorithms today. The Rectified Linear Unit (ReLU) that connects layers in a neural network is a linear spline. Wahba has also been a superb mentor and champion of women in science. My wife – who was briefly at Grace Wahba’s department at the University of Wisconsin – still speaks fondly of her.
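The ReLU-as-linear-spline connection can be seen directly. A minimal sketch (the knot locations and weights below are made-up numbers for illustration): summing shifted ReLU "hinges" max(0, x − k) produces a piecewise-linear curve whose slope changes at each knot – exactly a linear spline, and exactly the kind of function a one-hidden-layer ReLU network computes.

```python
import numpy as np

def relu(z):
    """The Rectified Linear Unit: max(0, z), applied elementwise."""
    return np.maximum(0.0, z)

# Hypothetical knots and weights, chosen only to illustrate the structure.
knots = np.array([0.0, 1.0, 2.0])     # where the curve is allowed to bend
weights = np.array([1.0, -2.0, 1.5])  # how much the slope changes at each knot
intercept, base_slope = 0.0, 0.5

def linear_spline(x):
    """f(x) = intercept + base_slope*x + sum_i w_i * ReLU(x - k_i).

    Between consecutive knots f is exactly linear; at knot k_i the slope
    jumps by weights[i]. This is a linear spline written as a ReLU network.
    """
    x = np.asarray(x, dtype=float)
    return intercept + base_slope * x + relu(x[:, None] - knots[None, :]) @ weights

x = np.linspace(-1.0, 3.0, 9)
y = linear_spline(x)
```

Fitting the weights (and, in a neural network, the knots too) to data is then a regression problem – which is the sense in which spline ideas sit inside modern machine learning.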
At this point, you may want to think of representations of your own. How might you represent the thought pattern of facial recognition? What are the states and mental actions? Are they the same for everyone? Or how can the relationship between the price of a mobile phone brand and its sales volume be represented? Is it ‘smooth’ or are there jumps at particular price points?
The Abel Prize in Mathematics: Masaki Kashiwara and Representation Theory
In Mathematics, there is in fact a field of study called Representation Theory. This year’s Abel Prize in Mathematics has been awarded to one of its modern pioneers, Masaki Kashiwara. I do not know enough about Kashiwara’s specifically cited work in algebraic analysis and other areas to summarize it adequately. However, Representation Theory more generally is a central area of pure mathematics that is increasingly finding unexpected connections. For instance, Kashiwara and others have connected quantum phenomena to algebraic objects called groups by incorporating symmetry as an observable physical feature as well as a set of mathematical axioms. Quantum groups are a way to describe objects for which the rules of quantum mechanics dominate, and the representations of quantum groups allow for the postulation of new quantum mechanical principles. Given the practical importance of Quantum 2.0, such as in quantum computing, communication and cybersecurity, such analysis could become central to a crucial emerging field.
As we work with ever larger datasets and deal with increasingly complex decisions taken by technology, our ability to understand and devise complex technology relies on our ability to create and study good representations. Scholars such as Barto, Sutton, Wahba and Kashiwara have shown us how to think. Now, the next steps beckon for the generations that follow them to represent more and to represent better.