Notes on the Artificial Intelligence for Information Accessibility (AI4IA) conference.
In observance of the International Day for Universal Access to Information (IDUAI), AI4Society helped organize an online conference on Artificial Intelligence for Information Accessibility (AI4IA) on September 28th, 2020. This conference was hosted by the UNESCO Information For All Programme (IFAP) Working Group on Information Accessibility (WGIA). 

The goal of the event was to examine the opportunities and barriers that Artificial Intelligence (AI) poses to information accessibility. To that end, the conference brought together academics, policy experts, and representatives from the private sector and government.

IDUAI 2020 focused on the right to information in times of crisis and on the advantages of having constitutional, statutory, and/or policy guarantees for public access to information to save lives, build trust, and support the formulation of sustainable policies through and beyond the COVID-19 crisis. Speakers at AI4IA talked about how vital access to accurate information is during the pandemic and the role that AI could play as we prepare for future crises. Tied to this was a discussion of the important role of international policy initiatives and shared regulation in ensuring that smaller countries, especially in the Global South, benefit from developments in AI. The worry is that some countries won't have the digital literacy, or cadre of experts, to critically guide the introduction of AI.

A number of key issues were discussed during the conference. 

  • A number of speakers spoke to the importance of access to accurate information for all during the pandemic. AI has the potential to help filter out misinformation and foreground accurate information. Access to information is a fundamental right; what sorts of regulations and policies are needed to ensure that AI facilitates this right?
  • Saadia Sánchez-Vegas, the opening speaker and Director of the UNESCO Cluster Office for the Caribbean, discussed issues around the ownership of the data needed to train models that can be used to support or automate decision making. Making information accessible through social media and the web has also meant that our data is used for training purposes. She also talked about the diverse impacts that AI may have on different geographies and countries. For example, AI may have a dramatic impact on the economy of a small island state like Jamaica, which is largely dependent on tourism: positive, by helping improve the efficiency of renewable resources, or negative, by causing disruptions in the employment market.
  • Dorothy Gordon, Chair of the UNESCO Information For All Programme, talked about Zuboff’s idea of “surveillance capitalism” and how our information is being harvested in order to predict our behaviour, sell us products, and, in some cases, even manipulate us. Alas, there is a feeling of helplessness in the face of scandals like Cambridge Analytica. She warned that the digital divide will only get worse if we don’t overcome our helplessness and develop evidence-based policies and systems.
  • Coetzee Bestes, Chair for IFAP South Africa, started by asking, “Can we trust AI, or is it already corrupted by human beings?” He talked about how information ethics and AI ethics need to converge. We need to think about the ethics of the data, the algorithms, and the human entities developing and using AI.
  • Jill Clayton, the Privacy Commissioner of Alberta, talked about transparency and how Canadian law is falling behind. She mentioned how the Province of Quebec is currently discussing a law about regulating automated decisions. She also discussed the importance of synthetic data, as opposed to de-identified data, as a means of protecting information privacy while at the same time enabling research and innovation through data, including, for example, health data. How can we use these rich datasets appropriately without endangering the privacy of citizens?
  • Nidhi Hegde, a professor of Computing Science, an Amii Fellow, and an AI4Society member, also discussed techniques for protecting data in data sets through generalization, where one hides individuals in bigger cohorts and suppresses information that can be used to identify individuals. She also mentioned the potential of synthetic data, but warned that we don’t yet know whether valuable attributes might be lost in the process. She closed by raising a question particularly relevant to the pandemic era of virtual care, or telecare: What happens to the recordings of virtual care meetings?

This post is a summary of the detailed conference notes on AI4IA 2020, kept by the AI4Society Associate Director, Geoffrey Rockwell.