How will AI shape society and how will society shape AI? AI4Society Dialogues talks to leading researchers across the disciplines at the University of Alberta, a global leader in artificial intelligence research. The podcast explores how researchers are constructing and using AI in the course of their work and examines opportunities, challenges and concerns as AI becomes an increasingly prevalent aspect of our world.
AI4Society Dialogues Episode 10
Episode Ten: Rich Sutton: Reinforcement learning and the future of AI
Can we teach AI to learn from its own experiences? That’s the premise behind reinforcement learning. Dr. Rich Sutton is a University of Alberta professor of computing science and one of the leading figures in AI research today. We talk about the early days of his career, why he chose to make the University of Alberta his academic home, the benefits of not chasing fads and the importance of an interdisciplinary AI research community. We also explore reinforcement learning and how it shifts not only the ways in which we develop AI, but also how we might think about our relationship with AI.
“The data a reinforcement learning agent uses is the data it generates itself…you’re an (AI) agent, you’ve done some things, you’ve seen some things, it’s your data…that’s what you learn from (in reinforcement learning).” – Rich Sutton
Dr. Rich Sutton is a distinguished research scientist at DeepMind, a professor in the department of computing science at the University of Alberta and a Fellow of the Alberta Machine Intelligence Institute (Amii). He is one of the pioneers of reinforcement learning and co-authored the textbook Reinforcement Learning: An Introduction. His research interests center on the learning problems facing a decision-maker interacting with its environment, which he sees as central to artificial intelligence.
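Dr. Sutton’s point in the pull quote, that a reinforcement learning agent learns from data it generates by acting, can be made concrete with a small sketch. This is a generic tabular Q-learning toy, not anything from the episode; the chain environment and all parameter values are illustrative assumptions.

```python
import random

# Minimal tabular Q-learning on a toy 4-state chain (illustrative only):
# actions are 0 = left, 1 = right; reaching state 3 ends the episode
# with reward 1.
N_STATES, GOAL = 4, 3

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[GOAL] stays 0 (terminal)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: the agent's own actions generate its data
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2, r, done = step(s, a)
            # learn from the experienced transition (s, a, r, s2)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# After training, the learned values prefer "right" in every
# non-terminal state: the agent discovered the goal from its own data.
```

The key line is the update inside the loop: no external dataset is supplied; every `(s, a, r, s2)` tuple the agent learns from was produced by its own behaviour.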
AI4Society Dialogues Episode 9
Episode Nine: Deb Verhoeven: Networked Data: Gender, Power and Relationships
How can we use the tools of big data, networks and an understanding of our digital infrastructure to shed light on power relationships and inequities? In her work as a digital humanities scholar, Dr. Deb Verhoeven is enlisting machine learning to redress the persistent domination of power elites. Her unique approaches include using criminal network analysis to identify “gender offenders” and expose “Daversity”. She’s also pioneering a new type of intervention she calls “digital infrapuncture”. We talk about big data, AI, networks, digital infrastructure, search and why Richard Davidson just might be the best name to have if you’re looking for funding.
“People think of big data as having something to do with size, but size is relative. Big data stretches us to the limits of what we’re capable of…if you’re not having an existential crisis…it’s not big. Big data implicates us.” – Deb Verhoeven
Dr. Deb Verhoeven is a Professor at the University of Alberta and holds the Canada150 Research Chair in Gender and Cultural Informatics. She’s also the Director of the Humanities Networked Infrastructure (HuNI) Project and was previously Associate Dean of Engagement and Innovation in the Faculty of Arts and Social Sciences (FASS), University of Technology Sydney (UTS). Her research interest lies in extending the limits of conventional film studies; exploring the intersection between cinema studies and other disciplines such as history, information management, geo-spatial science, statistics, urban studies and economics.
AI4Society Dialogues Episode 8
Episode Eight: Jennifer Raso: Encoding the law: technology’s impact on institutional decision making
Increasingly, automated systems are being used to aid in the delivery of important social service programs. Dr. Jennifer Raso’s work in administrative law focuses on how humans and non-human systems collaborate in institutional decision making, and on the resulting legal and ethical outcomes. We talk about the challenges in delivering government programs when “people’s lives are not drop down menus”. We explore how procuring and building these systems is an area for innovation and how layering AI onto the mix introduces further complexity. We also take a critical look at efficiency and cost savings and examine whether technology really delivers on these promises.
“AI adds an additional layer onto what is already a situation that is hard to entangle…the focus on the AI piece obscures what is already happening.” – Jennifer Raso
Dr. Jennifer Raso is an Assistant Professor at the University of Alberta’s Faculty of Law, where she studies the relationship between discretion, data-driven technologies, and administrative law. She is particularly interested in how humans/non-humans collaborate and diverge as they produce institutional decisions, and the consequences of this arrangement for procedural fairness and substantive justice.
AI4Society Dialogues Episode 7
Episode Seven: Noah Castelo: Relating to robots
How people respond to artificial intelligence depends on their perceptions of whether AI is to be trusted, the context in which they encounter AI and, in the case of robots, how the AI looks. Dr. Noah Castelo is a behavioural scientist whose expertise and research in psychology and consumer behaviour provide deeper insights into how humans will relate to AI. We talk about the uncanny valley of “creepy” human-like robots, how robots are being used in customer service and how perceptions around the future use of AI in the workplace are shaping career choices for students.
“There are two big attitudes driving perception of (AI) technology…usefulness and comfort and on both we’re not quite there yet…there are major obstacles.” – Noah Castelo
Dr. Noah Castelo is an Assistant Professor in the Faculty of Business at the University of Alberta. As a behavioural scientist, he studies the human side of AI, focusing on questions that relate to human interactions with AI systems and how growing awareness of AI in general changes how humans treat each other.
AI4Society Dialogues Episode 6
Episode Six: Nidhi Hegde: Privacy, Ethics and AI
The use of artificial intelligence across many high-risk domains raises ethical questions, including issues around privacy and bias in data as well as questions about equity and inclusion. Dr. Nidhi Hegde is an Associate Professor at the University of Alberta whose work is helping to drive new approaches in differential privacy. We talk about the challenges in managing data for privacy, as well as times when we might want to securely share our data. We also dive into the topic of gender and equity with personal insights on being a female AI researcher and how to foster greater participation and inclusion.
“The general public might think it’s a far off, futuristic idea, but in fact it’s all around us…and I think there needs to be a greater data literacy… It’s not Terminator, but it is that search algorithm…there’s this sense of a sort of mystical idea about AI…and I wish that people would understand… it’s a technology and it can be used for many purposes just like any technology. ” – Nidhi Hegde
Dr. Nidhi Hegde is an Associate Professor in the Department of Computing Science at the University of Alberta. Previously, she was a research team lead at Borealis AI. Her research interests lie in probabilistic modelling and the algorithmic design of machine learning for networks and systems, with a particular interest in the privacy and ethics of AI.
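Differential privacy, the area mentioned above, has a simple core idea: add carefully calibrated noise so that any one person’s record barely affects a released statistic. Below is a minimal sketch of the standard Laplace mechanism for a counting query; the records, names and parameter values are hypothetical illustrations, not anything from the episode or Dr. Hegde’s work.

```python
import math
import random

def private_count(records, predicate, epsilon, rng):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1: adding or removing one record
    changes the true count by at most 1, so Laplace noise with scale
    1/epsilon is enough to mask any individual's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) via the inverse-CDF transform.
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

rng = random.Random(42)
ages = [23, 35, 41, 29, 52, 38, 61, 27]  # hypothetical records
noisy = private_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
# noisy is close to the true count (3) but randomized for privacy
```

Smaller values of `epsilon` mean more noise and stronger privacy; the trade-off between accuracy and privacy is exactly the kind of design question this research area studies.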
AI4Society Dialogues Episode 5
Episode Five: Michael Frishkopf: Advancing ethnomusicology with AI
Music and sound are fundamental aspects of how we interact with the environment and play an important role in shaping culture. Dr. Michael Frishkopf is an ethnomusicologist who is deploying the power of AI to formulate and test new hypotheses about the relationship between music and culture, speech and individuality. In this wide-ranging conversation, we cover the relationship between math and music, the tough challenges of applying machine learning to sound, how the evolution of music technologies impacts culture, and what we can learn from sound in cyberworlds.
“Everybody goes around recording these days….the problem is how do you categorize these things?…If you could develop good artificial intelligence machine learning tools, then you could do something like Alan Lomax was trying to do (build an ethnomusicology recording archive).” – Michael Frishkopf
Dr. Michael Frishkopf is a Professor of Music at the University of Alberta, an ethnomusicologist, performer, and composer. Dr. Frishkopf’s ethnomusicological research interests include music of the Arab world; Sufi music; sound in Islamic ritual performance; music and religion; comparative music theory; the sociology of musical taste; social network analysis; (virtual [world) music]; digital music repositories; deep learning for sound recognition and music information retrieval; music in West Africa; participatory action research; psychoacoustics and music cognition; music and global health; indigenous medicine and music as medicine for integrative health; and music for global human development and social change.
AI4Society Dialogues Episode 4
Episode Four: Francois Bolduc: Genetics, chatbots and the future of medicine
Artificial intelligence is having a significant impact on medical research. Dr. Francois Bolduc is a pediatric neurologist with deep expertise in the genetic basis of memory and cognition. As both a clinician and a researcher, Dr. Bolduc is using artificial intelligence to advance his research as well as serve patients. We talk about the role of big data and AI in genetic research and the challenges and opportunities of working as part of an interdisciplinary team. We also discuss how a chatbot developed by Dr. Bolduc’s team is helping autistic children and their parents by providing a medically sound, ongoing support system.
“One thing that AI brought is the ability to manage very complex data. We always had to reduce the information (about a patient), to some degree, to keep it within our working memory…what AI allows is these “external brains”….people are very complex and with AI, we are trying to capture all their information and not be biased in our selection criteria.” – Francois Bolduc
Dr. Francois Bolduc is an Associate Professor of pediatrics in the Faculty of Medicine and Dentistry at the University of Alberta. His work focuses on identifying the molecular mechanisms underlying cognitive, behavioral, and social defects in patients with neurodevelopmental disorders (NDD). He leads a lab that has developed several paradigms in Drosophila to model and quantitatively study the various defects seen in neurodevelopmental and neurodegenerative disorders. His team is using artificial intelligence (AI) to better understand the complex interactions modulating behavioral outcomes in animal models and humans, as well as developing a chatbot to better provide information to parents, health professionals, and educators involved with individuals with NDD.
AI4Society Dialogues Episode 3
Episode Three: Cathy Adams: Digital technology: the next killer app for education
When we think about technology and education, we often think about the ways in which we learn through technology, but we may not see how technologies shape education itself. Dr. Cathy Adams has been working at the intersection of education and technology, investigating how digital technologies shape knowledge practices. We reflect on the past, and how the now ubiquitous PowerPoint presentation has shaped learning. Looking to the future, we discuss the opportunities and challenges for AI technologies in education and their profound impacts that extend far beyond the classroom.
“As AI and machine learning are able to make more and more smart decisions for us we will increasingly be willing to off-load our own decision making to these technologies.” – Cathy Adams
Dr. Cathy Adams is a Professor and Vargo Teaching Chair in the Department of Secondary Education. Her research addresses digital technology integration across K-12 educational environments, with a focus on ethical, pedagogical and sociocultural issues. She teaches graduate and undergraduate courses on pedagogy of technology, educational technology integration, and computational thinking for teachers.
AI4Society Dialogues Episode 2
Episode Two: Michael Bowling: Game-changing AI research
As constructed environments with clear parameters, games are an ideal proving ground for artificial intelligence research. Dr. Michael Bowling is equally passionate about games and AI. From his early days of robot soccer, to building an AI program that won the World Championship of Poker (twice!), to his current research into theory of mind influenced by the cooperative card game Hanabi, Dr. Bowling takes a playful approach to using games to solve big, complex challenges.
“We (as humans) can communicate a great deal by letting the background information or the inferences communicate the information…we do not have any computer algorithms that do that….(but) could a game help us get there?….We’re actually starting to make progress which is really exciting.” – Michael Bowling
Dr. Michael Bowling is a professor of computing science, a Fellow of the Alberta Machine Intelligence Institute and a senior scientist at DeepMind. His research is driven by his fascination with the problem of how computers can learn to play games through experience. He led the Computer Poker Research Group, which has built some of the best poker-playing programs in the world.
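Part of why games make such a good proving ground is that their rules fully specify the environment, so search and learning can be studied cleanly. As a toy illustration (unrelated to the poker and Hanabi research discussed above), here is exhaustive game-tree evaluation of single-pile Nim, a standard textbook example:

```python
from functools import lru_cache

# Toy game-tree search: Nim with one pile. Players alternate removing
# 1-3 stones; whoever takes the last stone wins.

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # A position is winning iff some move leaves the opponent
    # in a losing position.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Classic result this search rediscovers: multiples of 4 are
# losing positions for the player to move.
```

Because the rules are complete and explicit, the program can verify a known mathematical fact about the game by search alone; scaling this idea up to imperfect-information games like poker is what makes that research hard and interesting.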
AI4Society Dialogues Episode 1
Episode One: Jonathan Schaeffer: Building an AI research powerhouse at the University of Alberta
How did the University of Alberta become a global leader in AI research? Dr. Jonathan Schaeffer shares his personal account of how a community of like-minded individuals, early government support and world-record-setting research set the stage for Alberta to play a leading role in AI research today. We talk about his “ta da” moment in solving the game of checkers with perfect play, why he thinks AI ethics is such an important topic and his latest project: a new book aimed at non-technical audiences.
“No discipline will be untouched by AI. It’s AI and X…choose your X.” – Jonathan Schaeffer
Dr. Jonathan Schaeffer is a Distinguished University Professor of Computing Science at the University of Alberta, and the former Dean of Science. His checkers-playing program Chinook was the first computer to win a human world championship (1994), a feat recognized in the Guinness Book of World Records.
AI4Society Dialogues TRAILER
AI4Society Dialogues is a new podcast that takes you behind the scenes to meet some of the talented researchers who are constructing and using AI in ways that will shape our world. This trailer provides a quick overview of our first three episodes. Dr. Jonathan Schaeffer kicks off the series by taking us back to the early days of the University of Alberta’s computing science department, Dr. Michael Bowling shares how he has applied his love of AI and passion for games in his world-class research into poker, and Dr. Cathy Adams takes us into the realm of education, where technologies, including AI, are fundamentally reshaping what we teach and how we learn.
AI4Society Dialogues on Podcast platforms
About AI4Society Dialogues
AI4Society Dialogues is a co-production between AI4Society, a signature research area at the University of Alberta, and the Kule Institute for Advanced Research (KIAS), an endowed institute at the University of Alberta that supports research in the social sciences, humanities and fine arts.
Host: Katrina Ingram, Founder and CEO, Ethically Aligned AI
Technical Producer: Corey Stroeder
Special thanks to Dr. Scott Smallwood and the Sound Studies Institute at the University of Alberta for providing recording space.
Theme music: “Seeing the Future” by Dexter Britain
Dr. Eleni Stroulia, Professor, Computer Science and Director, AI4Society
Dr. Geoffrey Rockwell, Professor, Philosophy and Digital Humanities, Director, Kule Institute and co-Director, AI4Society
Copyright 2020 University of Alberta. All rights reserved.