Practising Ethos | Practical Ethics

It’s just over two weeks since I launched a practical toolkit for assessing the ethics and governance of Artificial Intelligence / Machine Learning systems and automated decision making systems – UnBias AI For Decision Makers (AI4DM). It has initiated a series of discussions with friends, colleagues and strangers, as well as valuable feedback. This has prompted me to explain a bit more about its genesis and the benefits I believe it offers to the people and organisations who use it.

UnBias AI4DM is a critical thinking toolkit for bringing together people from across an organisation or group to anticipate and assess – through a rigorous process of questioning and a whole systems approach – the potential harms and consequences that could arise from developing and deploying automated decision making systems (especially those utilising AI or ML). It can also be used to evaluate existing systems and make sure that they are in alignment with the organisation’s core mission, vision, values and ethics.

It is an engagement tool (rather than software) that fosters participation and communication across diverse disciplines – the cards and prompts provide a framework for discussions; the worksheet provides a structured method for capturing insights, observations and resulting actions; the handbook provides ideas and suggestions for running workshops etc. I am also creating and sharing instructional animations and videos throughout the crowdfunding campaign – raising funds to make a widely affordable production version.

AI4DM builds on a previous toolkit I created – the UnBias Fairness Toolkit, released in 2018 – which aims to build awareness (among young people and non-experts) of bias, unfairness and untrustworthy effects in algorithmic systems. Used together, the two toolkits enable such systems to be explored both from the perspective of those creating them and from that of those on the receiving end of their outcomes. AI4DM is available in a free downloadable DIY print-at-home version, with a physical production set soon to be manufactured.

Benefits of Using AI4DM

The toolkit has two key strands of benefits for groups and organisations who adopt and use it:

Addressing Issues:

  • a structured process for identifying problems and devising solutions
  • emphasises collective obligations and responsibilities
  • a means of addressing complex challenges
  • a way to anticipate future impacts (e.g. new legislation & regulations or shifts in public opinion)

Staff Development:

  • develops communication skills across different disciplines, fields and sectors
  • supports team building and cohesion
  • develops understanding of roles in teams and group work
  • evolves a culture of shared practices across different disciplines and fields within an organisation

AI4DM as a Teaching Aid

I have also been having discussions with academic colleagues who teach in various disciplines at a number of universities about using the toolkit in teaching and lecturing. As with the UnBias Fairness Toolkit, AI4DM provides a structured framework for looking at a wide range of issues from multiple perspectives and exploring how they align or misalign with an organisation’s core ethos and values. It can be used practically to look at a real world example or as a conceptual exercise to explore the potential for harmful consequences to emerge from a development process that doesn’t integrate ethical analysis alongside other key considerations, such as legal or health and safety regulations.
I’ll soon be adding a short animation to our YouTube Playlist demonstrating how to use the toolkit in teaching settings.

Using the Toolkit Online

The pandemic has forced everyone to rapidly shift from in-person meetings, workshops, classes and lectures to distributed engagement via online platforms. So I have been experimenting with ways to use the toolkit in such spaces – from making a test interactive version using the MURAL online whiteboard collaboration tool, to tests using Zoom and a hybrid combination of physical cards and online annotation of the Worksheet. I’ll be adding a short animation soon to the Playlist to demonstrate using the toolkit online.

In summary, I believe that the best way to use the toolkit online is in a hybrid fashion:

  • Make sure ALL the participants have their own physical set of cards (either a production set or the DIY print-at-home version). In the long gaps between speaking and direct participation (a familiar feature of online situations), this gives participants something tangible and contextually relevant to contemplate and play with – valuable given the many distractions of working or studying from home that are usually absent (or less intrusive) in traditional meeting or teaching spaces. I remain convinced that combining manual activities with critical thinking has a positive cognitive impact: it grounds ideas in ways that seem to promote associative connections, and may be similar to the effect identified in recent studies of the differences in learning outcomes between writing notes and typing during lectures;
  • Assign specific roles or topics to participants so that they can focus on their contribution to the session whilst listening to others;
  • Have a Facilitator or Moderator ‘host’ the session and manage who is participating or speaking at any one time;
  • The Issue Holder (who is first among equals) should keep the key issue or problem being addressed at the forefront of the conversation and make connections to the contributions of other participants;
  • The Scribe should annotate the Worksheet on a Shared Screen (perhaps using an online collaboration tool such as Miro or MURAL, or simply annotating the PDF) with observations and notes, and post photos of the evolving matrix of cards as they are added;
  • Use the Chat area to encourage participants to add their own annotations, ideas, links and other relevant observations to the session, assisting the Scribe in capturing as much of the richness of the conversation and discussions as possible.

Genesis

The UnBias Fairness Toolkit was an output of the UnBias project led by the Horizon Institute at the University of Nottingham with the Universities of Oxford and Edinburgh and Proboscis (and funded by the EPSRC). It was designed to accommodate future extensions and iterations so that it could be made relevant to specific groups of people (such as age groups or communities of interest) or targeted around particular issues and contexts (banking and finance; health data; education; transportation and tracking etc).

At the techUK Digital Ethics Summit in December 2019 I ran into Ansgar Koene, one of my former UnBias project colleagues who is a Senior Research Fellow at Horizon and also Global AI & Regulatory Leader at EY Global Services. Ansgar proposed developing an extension to the UnBias Fairness Toolkit aimed at helping people inside corporations and public organisations get to grips with AI ethics and governance issues in a practical and tangible way. Over the next few months this became a formal commission from EY to devise a prototype, which then evolved over the summer of 2020 into the full companion toolkit, UnBias AI For Decision Makers.

Support Our Crowdfunding Campaign

You can back our Indiegogo campaign to create a widely affordable production run of the toolkit – perks include the AI4DM toolkit itself at £25 (+ p&p), reduced from its retail price of £40, as well as the original UnBias Fairness Toolkit at £40 (+ p&p), reduced from its retail price of £60. Perks also include a Combo Pack of both toolkits at £60 (+ p&p), as well as multiples of each (with big savings). I am also offering a perk with 50% off a dedicated one-to-one Facilitator Training Package with myself (2 x 1.5 hour video meetings + toolkits + personalised facilitator guide) at £360 instead of £720.

The campaign ends on 15th October 2020 – back it now to ensure you get your set!

Civic Thinking for Civic Dialogue

Over the past six months or so I have been focused on my work for the UnBias project which is looking at the issues of algorithmic bias, online fairness and trust to provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with young people and other stakeholders. My role has been to lead the participatory design process on the Fairness Toolkit, which has involved devising and facilitating a series of workshops with young people in schools and a community group, as well as with stakeholders in the ICT industry, policy, civil society and research fields. My colleagues in the Human Centred Computing group at the University of Oxford and the Horizon Digital Economy Institute at the University of Nottingham, as well as Informatics at the University of Edinburgh, have been wonderful collaborators – providing a rich intellectual and pragmatic context for developing the tools.

The co-design workshops with two schools (in Harpenden and in Islington) and with a young women’s group in Oxfordshire explored what their levels of awareness of the issues were, how relevant to their own lives they perceived them to be, and what they thought should be done. In each workshop, and with each group, we consistently encountered quite different perceptions and experiences – often unexpected and surprising – whilst also observing certain commonalities, which were echoed in the findings of the Youth Juries which our colleagues at Nottingham have been running for UnBias since late 2016. Many of the young people expressed a certain fatalism and lack of agency regarding how they use technology which seems to foster a sense of isolation and inability to effect change. This was coupled with a very limited sense of their rights and how the law protects them in their interactions with service providers, institutions and big companies. Unsurprisingly, they often feel that their voice is not listened to, even when they are the targets of some of the most aggressive marketing techniques.

The tools have thus been informed and shaped by young people’s perceptions and their burgeoning understanding of the scale and depth of algorithmic processes affecting modern everyday life. The tools have also been designed to address the atomising effect that personalised technologies are increasingly understood to have – whereby the increasing personalisation of platforms and services isolates our experiences of media and the mediated world from each other. Where broadcast technologies were once understood to have a homogenising effect on societies, networked technologies, and the highly personalised software services running on them, are creating a sense of isolation from other people’s cultural and social experiences as they serve each of us something more bespoke to our own tastes and preferences. Recent controversies over the use of targeted advertising in US and UK elections have exposed the iniquitous consequences of such hyper-specific campaigning, and offered a new set of insights into the wider and deeper social and cultural impacts happening around us.

I have tried to design a toolkit that could build awareness of these issues, offer a means to articulate how we feel about them, and provide a mechanism for ‘stakeholders’ (in the ICT industry, policymakers, regulators, the public sector and civil society) to respond to them. What has emerged is something I call a ‘civic thinking tool’ for people to participate in a public civic dialogue. By this I mean a mode of critical engagement with the issues that goes beyond the purely personal dimension (“how does this affect me?”) and embraces a civic one (“how does this affect me in relation to everyone else?”). Then, when we participate in a public dialogue about these issues, it is not simply conducted in public: it embraces the co-construction of our society and acknowledges everyone as having a stake and a voice within it. It is about trying to find co-constructive and non-confrontational ways to engage people in critical reflection about what kind of world we want to have (and the roles algorithmic systems in particular should play in it).

On Monday we held a workshop to preview the first draft of the toolkit and seek feedback from a variety of stakeholders. Take a look at the presentation below to find out more:

The response has been very encouraging – highlighting the strengths and revealing weaknesses and areas that need additional development. The next stage is to start a testing phase with young people and with stakeholders to refine and polish the toolkit.

We are also developing relationships with “trusted intermediaries” – organisations and individuals who are willing to adopt and use the toolkit with their own communities. As the UnBias project concludes in August, our aim is to have the toolkit ready for deployment by whoever wants to use it this Autumn.