
Practising Ethos | Practical Ethics

It’s just over two weeks since I launched UnBias AI For Decision Makers (AI4DM), a practical toolkit for assessing the ethics and governance of Artificial Intelligence / Machine Learning systems and automated decision making systems. It has initiated a series of discussions with friends, colleagues and strangers, as well as feedback from others. This has prompted me to explain a bit more about its genesis and the benefits I believe it offers to the people and organisations who use it.

UnBias AI4DM is a critical thinking toolkit for bringing together people from across an organisation or group to anticipate and assess – through a rigorous process of questioning and a whole systems approach – the potential harms and consequences that could arise from developing and deploying automated decision making systems (especially those utilising AI or ML). It can also be used to evaluate existing systems and ensure they remain aligned with the organisation’s core mission, vision, values and ethics.

It is an engagement tool (rather than software) that fosters participation and communication across diverse disciplines – the cards and prompts provide a framework for discussions; the worksheet provides a structured method for capturing insights, observations and resulting actions; and the handbook provides ideas and suggestions for running workshops. I am also creating and sharing instructional animations and videos throughout the crowdfunding campaign, which is raising funds to make a widely affordable production version.

AI4DM builds on a previous toolkit I created, the UnBias Fairness Toolkit, released in 2018, which is aimed at building awareness (in young people and non-experts) of bias, unfairness and untrustworthiness in algorithmic systems. Used together, the two toolkits enable such systems to be explored both from the perspective of those creating them and from that of those on the receiving end of their outcomes. AI4DM is available both as a free downloadable DIY print-at-home version and as a soon-to-be-manufactured physical production set.

Benefits of Using AI4DM

The toolkit has two key strands of benefits for groups and organisations who adopt and use it:

Addressing Issues:

  • provides a structured process for identifying problems and devising solutions
  • emphasises collective obligations and responsibilities
  • offers a means of addressing complex challenges
  • offers a way to anticipate future impacts (e.g. new legislation and regulations, or shifts in public opinion)

Staff Development:

  • develops communication skills across different disciplines, fields and sectors
  • supports team building and cohesion
  • develops understanding of roles in teams and group work
  • fosters a culture of shared practices across different disciplines and fields within an organisation

AI4DM as a Teaching Aid

I have also been discussing with academic colleagues, who teach various disciplines at a number of universities, how the toolkit might be used in teaching and lecturing. As with the UnBias Fairness Toolkit, AI4DM provides a structured framework for looking at a wide range of issues from multiple perspectives and exploring how they align or misalign with an organisation’s core ethos and values. It can be used practically, to look at a real world example, or as a conceptual exercise, to explore the potential for harmful consequences to emerge from a development process that doesn’t integrate ethical analysis alongside other key considerations, such as legal or health and safety regulations.
I’ll soon be adding a short animation to our YouTube playlist demonstrating how to use the toolkit in teaching settings.

Using the Toolkit Online

The pandemic has forced everyone to shift rapidly from in-person meetings, workshops, classes and lectures to distributed engagement via online platforms. So I have been experimenting with ways to use the toolkit in such spaces – from making a test interactive version using the MURAL online whiteboard collaboration tool, to tests using Zoom and a hybrid combination of physical cards and online annotation of the Worksheet. I’ll soon be adding a short animation to the playlist demonstrating how to use the toolkit online.

In summary, I believe that the best way to use the toolkit online is in a hybrid fashion:

  • Make sure ALL the participants have their own physical set of cards (either a production set or the DIY print-at-home version). In the long gaps between speaking and direct participation (a familiar feature of online situations), this gives participants something tangible and contextually relevant to contemplate and play with – important given the many distractions of working or studying from home that are usually absent (or less intrusive) in traditional meeting or teaching spaces. I remain convinced that combining manual activities with critical thinking has a positive cognitive impact – it grounds ideas in ways that seem to promote associative connections, and may be similar to the effect identified in recent studies on the differences in learning outcomes between writing and typing notes during lectures;
  • Assign specific roles or topics to participants so that they can focus on their contribution to the session whilst listening to others;
  • Have a Facilitator or Moderator ‘host’ the session and manage who is participating or speaking at any one time;
  • The Issue Holder (who is first among equals) should keep the key issue or problem being addressed at the forefront of the conversation and make connections to the contributions of other participants;
  • The Scribe should annotate the Worksheet on a shared screen (perhaps using an online collaboration tool such as Miro or MURAL, or simply annotating the PDF) with observations and notes, and post photos of the evolving matrix of cards as they are added;
  • Use the Chat area to encourage participants to add their own annotations, ideas, links and other relevant observations to the session, assisting the Scribe in capturing as much of the richness of the conversation and discussions as possible.

Genesis

The UnBias Fairness Toolkit was an output of the UnBias project led by the Horizon Institute at the University of Nottingham with the Universities of Oxford and Edinburgh and Proboscis (and funded by the EPSRC). It was designed to accommodate future extensions and iterations so that it could be made relevant to specific groups of people (such as age groups or communities of interest) or targeted around particular issues and contexts (banking and finance; health data; education; transportation and tracking etc).

At the techUK Digital Ethics Summit in December 2019 I ran into Ansgar Koene, one of my former UnBias project colleagues who is a Senior Research Fellow at Horizon and also Global AI & Regulatory Leader at EY Global Services. Ansgar proposed developing an extension to the UnBias Fairness Toolkit aimed at helping people inside corporations and public organisations get to grips with AI ethics and governance issues in a practical and tangible way. Over the next few months this became a formal commission from EY to devise a prototype, which then evolved over the summer of 2020 into the full companion toolkit, UnBias AI For Decision Makers.

Support Our Crowdfunding Campaign

You can back our Indiegogo campaign to create a widely affordable production run of the toolkit. Perks include the AI4DM toolkit itself at £25 (+ p&p), reduced from its retail price of £40, and the original UnBias Fairness Toolkit at £40 (+ p&p), reduced from its retail price of £60. There is also a Combo Pack of both toolkits at £60 (+ p&p), as well as multiples of each (with big savings). Finally, I am offering a perk of 50% off a dedicated one-to-one Facilitator Training Package with myself (2 x 1.5 hour video meetings + toolkits + personalised facilitator guide) at £360 instead of £720.

The campaign ends on 15th October 2020 – back it now to ensure you get your set!

Civic Agency: a vision & plan

Civic Agency is an initiative aimed at encouraging people, at grassroots level, to engage with the social, cultural and political issues at the heart of our increasingly automated and divisive digital world. Social media and the hyper-personalisation of digital experiences are becoming ever more prevalent as the interface between us and society. As we come to rely ever more on digital systems and technologies to run everyday life, we are realising that society needs new ways to face the issues these present, and new strategies for people to navigate their implications successfully.

Headline issues: AI, Machine Learning, Personalisation, Algorithm Bias, Automated Decision Making, Big Data.

Proposed solutions: Ethics, Regulation, Responsible Innovation, Rights, Information/Media/Digital Literacy.

Beyond these headline issues and proposed solutions, however, we believe that more needs to be done to engage ordinary people in developing their own critical and civic thinking skills: to identify potential harms and to make better informed choices about what they do online, which services they use and how their data is protected from exploitation.

Our aim is to help people feel that they have agency and are empowered to make good decisions and choices, and to feel that their voice is being listened to and heeded in the corridors and places of power where laws, rights and regulations are determined: Practical Ethics at grassroots level, meeting in the middle with top-down regulation and codes of practice in industry and public institutions.

Enabling Literacy

Awareness and literacy are crucial for people to be able to navigate our increasingly mediated world – Stéphane Goldstein has recently written an excellent argument for why this matters so much now.

“We cannot act wisely without making sense of the world and making sense of the world is in itself a profoundly practical action that informs how we experience reality, how we act, and the relationships we form. Without questioning our worldview and the narrative that has shaped our culture, are we not likely to repeat the same mistakes over and over again?”
Daniel Christian Wahl, Designing Regenerative Cultures

In the workshops I ran with young people that informed the creation of the UnBias Fairness Toolkit, it was clear that they had only the vaguest understanding of what their rights as children were (and would soon be as adults), and what laws already existed to protect them. Their general sense of disempowerment when using online services (to buy clothes, shoes or other products, for example) went as far as statements to the effect that they were powerless and unprotected whenever they interacted with the big internet companies (GAFA) or even small online retailers – almost as if all digital services were a gift of the companies involved and could not be challenged even if they were doing wrong or questionable things. The young people had almost no conception of the scale at which they are being tracked online, across multiple sites and services, no matter what devices they use. When we created mappings of what they did online and how their personal data was being distributed across a huge range of platforms and services, they were shocked and, to some degree, incensed – feeling they had been duped in some way into giving up their data so freely every time they went online.

On the positive side, at least in one school, the young people felt it was their duty to challenge this and to call for a safer internet. I think this was an early indication that this generation are more empowered to speak up and demand to be listened to, as the recent SchoolStrike4Climate / FridaysForFuture protests have demonstrated even more palpably. It is possible that the seeds already exist for a society which expects ‘responsible’, sustainable innovation and development to be the default for designers and developers, whether they work for a public institution, a non-profit organisation or a profit-making corporation. We have seen the consequences of unbridled, irresponsible innovation play out and cause tremendous damage to democracy and to the societies we live in.

Public dialogue and deliberation now need to be stimulated so as to bring ordinary people’s concerns and desires to the same level of consideration as the privileged influence of gatekeepers, corporate lobbyists and policy makers. We are all stakeholders in this society, and we must not let lobbyists capture the agenda and subvert democratic principles. Concepts such as duty of care and the precautionary principle – pro-active and a priori approaches – could be baked into the culture of innovation and development, not tacked on as afterthoughts or funded through marketing and corporate social responsibility budgets. Digital safety, not digital security – social justice, not breaking things because they get in your way.

A Plan for Grassroots Engagement

Our proposal is simple: using the UnBias Fairness Toolkit as our building block, we aim to stimulate civic agency through:

  • Access: place copies of the toolkit in every school and public library, and make them available to any community that wants to get to grips with these issues for themselves;
  • Literacy: create an organic train-the-trainer programme and additional facilitation tools that lay the foundation for a participatory and grassroots-based approach to de-mystifying the issues – making the abstract tangible and actionable;
  • Engagement: train teachers, youth and community workers and public librarians in using the toolkit to engage people in developing their critical and civic thinking skills;
  • Collaboration: establish an organic network of people who can guide others to learn more and devise their own strategies – to have agency.

Expanding the Frame

Alongside this, it is important that the toolkit can be adapted for a variety of different contexts and situations, age groups and experiences – for instance, to discuss very specific topics such as security, online banking and finance, or medical ethics and patient data. It is also important that the training materials be templates that people can build on themselves, not just rely on us to define and deliver.

We propose to collaborate with other key participants in these spaces to develop additional materials – Expansion sets – that make the toolkit modular and useful to more people (for example, new Example Cards for specific issues; a much expanded set of Glossary Cards etc). We may create additional worksheets and materials for teams to use as practical ‘responsible innovation’ tools. There may also be other tools and toolkits we can introduce and share.

How?

The tricky part is funding something like this – rather amorphous, profoundly unbusinesslike and with a Return On Investment that will definitely not be financial. I’ve been finding fellow travellers and talking with a variety of public and private organisations whose interests align with some of the above. But what this needs is resources to make it a reality. We have the basic toolkit, we just need funds to roll out the rest of the process, bit by bit.

Get in touch if you can help [giles at proboscis dot org dot uk].

Stimulating and Inspiring Civic Agency

Over the past couple of weeks – at the V&A Digital Design Weekend and the UnBias Showcase at Digital Catapult – I’ve been sharing and demonstrating the UnBias Fairness Toolkit to people from all walks of life. The response has been enormously enthusiastic: people have immediately imagined using it in the contexts of their own working lives and interests. They have instantly grasped its power to stimulate critical thinking, to find and share people’s voices on these issues (bias, trust and fairness in algorithmic systems), and to contribute to a public civic dialogue that involves industry, government, the public sector and civil society too.

What the Toolkit Offers

  • It offers a pragmatic and practical way to raise awareness and stimulate dialogue about bias, trust and fairness in algorithms and digital technologies.
  • It is designed to make complex and often abstract ideas tangible and accessible to young people and to non-experts across society.
  • It supports critical thinking skills that can help people feel empowered to make better informed choices and decisions about how they interact with algorithmic systems.
  • It helps collect evidence of how people feel about the issues and what motivates them to share their concerns by contributing to a public civic dialogue.
  • It provides a communication channel for stakeholders in industry, policy, regulation and civil society to respond to public concerns about these issues.
  • It can also be used by developers of algorithms and digital systems to reflect on ethical issues and as a practical method for implementing Responsible Research and Innovation.

Where Next?

The next stage is slowly becoming clear: what I believe we need is a national programme to train people, especially those working with young people, in using the toolkit, and to inspire people working in industry, regulation and policy to use it as an applied responsible research and innovation tool. We want to get the toolkit into as many schools, libraries and other places as possible, where young people (and others of all ages) can enhance their awareness and critical thinking skills, their digital literacy, and their understanding of the profound effects that digital technologies are having on our society and democracy.

Over the coming months I will be sounding out potential partners and sponsors/funders to make this possible.

This would be the first step in a more expansive programme on enabling agency, building on this and much of my and Proboscis’s previous work. It’s not something I expect to achieve alone, so I am hoping to bring like-minded collaborators together under the umbrella of this concept of civic agency, to grow our capabilities and capacities for engaging people in new forms of critical thinking and in autonomous and collective action to address the challenges we face as communities and as a society, today and in the future.

Civic Thinking for Civic Dialogue

Over the past six months or so I have been focused on my work for the UnBias project which is looking at the issues of algorithmic bias, online fairness and trust to provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with young people and other stakeholders. My role has been to lead the participatory design process on the Fairness Toolkit, which has involved devising and facilitating a series of workshops with young people in schools and a community group, as well as with stakeholders in the ICT industry, policy, civil society and research fields. My colleagues in the Human Centred Computing group at the University of Oxford and the Horizon Digital Economy Institute at the University of Nottingham, as well as Informatics at the University of Edinburgh, have been wonderful collaborators – providing a rich intellectual and pragmatic context for developing the tools.

The co-design workshops with two schools (in Harpenden and in Islington) and with a young women’s group in Oxfordshire explored what their levels of awareness of the issues were, how relevant to their own lives they perceived them to be, and what they thought should be done. In each workshop, and with each group, we consistently encountered quite different perceptions and experiences – often unexpected and surprising – whilst also observing certain commonalities, which were echoed in the findings of the Youth Juries which our colleagues at Nottingham have been running for UnBias since late 2016. Many of the young people expressed a certain fatalism and lack of agency regarding how they use technology which seems to foster a sense of isolation and inability to effect change. This was coupled with a very limited sense of their rights and how the law protects them in their interactions with service providers, institutions and big companies. Unsurprisingly, they often feel that their voice is not listened to, even when they are the targets of some of the most aggressive marketing techniques.

The tools have thus been informed and shaped by young people’s perceptions and their burgeoning understanding of the scale and depth of algorithmic processes affecting modern everyday life. They have also been designed to address the atomising effect that personalised technologies are increasingly understood to have, whereby the increasing personalisation of platforms and services isolates our experiences of media and the mediated world from each other. Where broadcast technologies used to be understood to have a homogenising effect on societies, networked technologies, and the highly personalised software services running on them, are creating a sense of isolation from other people’s cultural and social experiences as they serve each of us something more bespoke to our own tastes and preferences. Recent controversies over the use of targeted advertising in US and UK elections have exposed the iniquitous consequences of such hyper-specific campaigning, and offered new insights into the wider and deeper social and cultural impacts happening around us.

I have tried to design a toolkit that could build awareness of these issues, offer a means to articulate how we feel about them, and provide a mechanism for ‘stakeholders’ (in the ICT industry, policymakers, regulators, public sector and civil society) to respond to them. What has emerged is something I call a ‘civic thinking tool’ for people to participate in a public civic dialogue. By this I mean a mode of critical engagement with the issues that goes beyond just a personal dimension (“how does this affect me?”) and embraces a civic one (“how does this affect me in relation to everyone else?”). And when we participate in a public dialogue about these issues, it is not simply conducted in public: it embraces the co-construction of our society and acknowledges everyone as having a stake and a voice within it. It is about trying to find co-constructive and non-confrontational means to engage people in critical reflection about what kind of world we want to have (and the roles algorithmic systems in particular should play in it).

On Monday we held a workshop to preview the first draft of the toolkit and seek feedback from a variety of stakeholders.

The response has been very encouraging – highlighting the strengths and revealing weaknesses and areas that need additional development. The next stage is to start a testing phase with young people and with stakeholders to refine and polish the toolkit.

We are also developing relationships with “trusted intermediaries” – organisations and individuals who are willing to adopt and use the toolkit with their own communities. As the UnBias project concludes in August, our aim is to have the toolkit ready for deployment by whoever wants to use it this autumn.

Fairness and Bias in an Algorithmic Age


Last month a new research project of which I am part got underway – UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy. It’s a collaboration between the Universities of Nottingham (Horizon Digital Economy Institute), Edinburgh (Informatics) and Oxford (Human Centred Computing), funded by the EPSRC through its Trust, Identity, Privacy and Security in the Digital Economy strand. Over the next two years it will look at the complex relationships between people and systems increasingly driven by personalisation algorithms, and explore whether, and to what degree, citizens can judge their trustworthiness.

My role will be to lead a co-design process that will create a ‘fairness toolkit’: raising awareness about the impact of algorithms on everyday behaviours; devising pragmatic strategies to adapt around them; and engaging policymakers and online providers. We will be working with schools and young people to co-develop the toolkit – following in the wake of previous projects exploring young people and social media, such as Digital Wildfire.

For me this project cuts to the heart of concerns central to today’s society: empathy, agency, transparency and control. I will be bringing to the project ideas and practices I have been exploring along a number of different trajectories over the past few years, from my work on the Pallion project to data manifestation and reciprocal entanglements. I am particularly excited as this marks my first formal collaboration with Oxford’s Human Centred Computing research group, with whom I’ve been in dialogue for a couple of years.