
People Centric Practices: a field guide

Back in May this year I published a small booklet – A Field Guide for People Centric Practices.

This contains my personal reflections on what a set of principles for working from a people-centric perspective might be. For me, people centric practice implies not just a human centred approach, but one which encompasses the whole context in which we live and work, and its impacts on the other creatures and lifeforms that are part of such environments – the more-than-human world. It addresses the whole ecologies of which we are part, and upon which we depend for our very existence. People does not have to mean exclusively human – we might consider other species (trees, birds, mammals etc) as peoples, as some indigenous humans have done, since they constitute their own societies and ways of being in the world. All have as much right to life as each other; it is only human hubris which champions our right to own and exploit everything else as paramount.

The booklet brings together, in a simple way, a set of principles and guides for working based on empathy, common sense, trust and agency. It is centred on establishing and following an ethos – through listening and responding, trusting and being trusted; anticipating consequences and reflecting on what you do. It adds into the mix principles for building trust borrowed from Baroness Onora O’Neill’s 2002 Reith Lectures, as well as the Precautionary Principle, Duty of Care and the Nolan Principles of Public Life. It also includes my own personal values: passion, intensity, intimacy, pleasure, obligation, responsibility, culpability.

The booklet is free to download on bookleteer, or read the online version.

A Calculation is Not a Judgement

When human judgement is drained from a system and reductionist rules are applied to complex situations, the results can lead to terrible injustices and harms. If we privilege the procedural outcomes of artificial systems over the importance of humanity, life, experience and the more-than-human world, we will likely face a self-reinforcing feedback loop of such effects, not unlike the existential threat of runaway climate change.

I wish to advance a proposition… a distinction. Namely, that there is a significant gulf between the mathematical operation by which a calculation can be arrived at, and the emergent process of evaluation by which a judgement is made. I think this is an important distinction for our times, because it describes the difference between a procedure of abstraction and a process of conscious deliberation. A calculation can be determined by a non-sentient entity following a series of steps to accomplish an end (such as an algorithm). Humans have created machines that can do this at scales and speeds far beyond our own individual capabilities. A judgement, however, requires a sentient being, imbued with consciousness and the capacity to exercise discernment and perception, to arrive at an authoritative opinion.
And time.
It takes time to absorb and reflect, to ruminate and pass judgement. What I hope to argue here is that consciousness itself is an irreducible constant fundamental to fair and trustworthy judgement.

I am, perhaps, re-treading old ground: the argument between quantitative and qualitative methods has rolled on for at least two centuries – rooted in the slow rise to dominance of a kind of scientism as the prevailing order of knowledge and worldview. Both methods have merits, and their integration or synthesis can lead to remarkable achievements. Both are rooted in very human beliefs and traditions of how knowledge comes about. Wielded together, they stimulate extraordinary benefits but, when asymmetrical in influence and power, the drawbacks are considerable.

We now live in a world where the quantitative has achieved ascendancy in almost all areas of life, where computations and automated decision-making affect the everyday lives of billions of people. Tremendous advantages in speed, efficiency and technical capabilities across the panoply of human activity have resulted. But they also amplify injustices and inequalities, or compound environmental and ecological over-exploitation and destruction. In doing so their scale and speed disempower and degrade the intrinsic agency of human beings in favour of inflexible and unfeeling systems. It is crucial to see that it is a deliberate choice to quantify and sort the world in this way, not an impartial effect of some immutable logic that cannot be challenged.

As Oscar Wilde might have framed the distinction, the difference is one of knowing “the price of everything and the value of nothing” (from Lady Windermere’s Fan). It is trivial to calculate the price of something according to a formula of tangible inputs and costs – yet far more elusive to judge its value. That demands a broader spectrum of parameters, such as context, emotion, culture and other intangibles. Our human fallibilities lead us to both extremes. Judgements, too, can be unsound. Intention and ethos determine how and why we adopt a particular trajectory – as much as our adherence to one method or another, one disciplinary process or another.

The predicaments outlined above are, I believe, at the very root of the proliferating existential dilemmas which humans, indeed all life, now face. The stabilities of our ways of living are being challenged everywhere by changes in natural forces we have clearly, recklessly, contributed to – possibly beyond our capability to re-balance, to say nothing of an irrevocable and devastating loss of biodiversity. I believe that the over-exploitation of the natural world, of other creatures and lifeforms has been facilitated by precisely the unfeeling calculation of systems based on abstracting life into discrete parts that can be separated from a complex whole and used indiscriminately without repercussion. It is a brutal and destructive alienation that does not factor into its calculation of profit and loss the consequences and costs of its atomistic unravelling of mutual interdependence. We see the results across the planet in the systematic extraction of specific resources causing catastrophic loss of entire environments and ecologies surrounding them. There is no doubt that this can only persist for so long.

We also see it in the human sphere when bureaucratic systems over-emphasise adherence to rules above consideration of individual, or even collective, circumstances. One of the most appalling examples in recent years has been the terrible injustices and harms inflicted on the Windrush Generation by Theresa May and the UK Home Office’s “Hostile environment” policy. And these are just the most visible examples of intentional applications of the technology of bureaucracy, and its component methods and tools, to harm the vulnerable. They are almost certainly intended more as a distraction, or sleight of hand, whilst other yet more egregious activities are kept in the shadows. It seems to me that much of this is being done as a climactic frenzy of industrial capitalism – to squeeze every last drop of advantage from a system that is so weighty with its own entropy that it cannot possibly endure indefinitely. Banking the last pennies to hedge against an uncertain future where, it is assumed, the wealthiest will command the most safety, luxury and authority.

But, I doubt it will go the way anyone currently anticipates – the speed of environmental and ecological transformation will most likely confound our best models and projections, since none can reliably forecast the full range of interdependent, interwoven forces and factors we have interrupted.

I doubt the wisdom of focusing our civilisation’s faith too closely on systems that use automated, statistical calculation of probabilities to make future-facing decisions on our behalf, let alone in the here-and-now. It would itself be a further profound disconnection from our very humanity to hope that such technologies will ‘save’ us from the profound disconnection of the human from the more-than-human natural world. This has been gathering pace for hundreds of years, since at least the European discovery of the Americas and the growth of modern industry and global capitalism. Our technologies are reflections of our cultures and societies, not simply neutral, inevitable outcomes of rational enquiry and engineering. They arise out of our cultures, beliefs, behaviours – they are value-driven… the products of choices, intentional or unconscious.

Evidence is growing (as documented by ProPublica among others) that demonstrates how algorithmic decision-making has a tendency to amplify existing biases, leading to exacerbated injustices and inequalities, as well as other pernicious effects. Instead of the promise of impartiality that has justified an increasing reliance on both bureaucracy and algorithmic systems, we have come to realise that they have all of our human fallibilities coded in, but with the additional twin enhancements of speed and scale – rippling the effects out further and faster. Now would be an apposite time to check the headlong rush to automate how we manage our societies and everyday lives, especially as we must shift our economies and industries from extractive and destructive activities to ones which preserve and maintain life and ecologies. The two are inextricably linked.
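A toy sketch may make the mechanism concrete (the dataset and decision rule here are invented for illustration, not drawn from any system mentioned above): a rule ‘learned’ naively from historically biased outcomes simply re-enacts the old bias, now at machine speed and scale.

```python
# Toy illustration of bias amplification: a "learned" decision rule
# that merely reproduces a historical prejudice.
# Hypothetical past loan decisions, biased against group B regardless of merit.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def learn_rule(records):
    """Approve a group iff most past applicants from that group were approved."""
    rates = {}
    for group, approved in records:
        ok, total = rates.get(group, (0, 0))
        rates[group] = (ok + approved, total + 1)
    return {g: ok / total >= 0.5 for g, (ok, total) in rates.items()}

rule = learn_rule(history)
print(rule)  # {'A': True, 'B': False} -- the old bias, now automated
```

Nothing in the procedure is malicious; the injustice is inherited silently from the data, and the calculation scales it up without ever judging it.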

“…some are already engaged in experiments that try to make the possibility of a future that isn’t barbaric, now. Those who have chosen to desert, to flee this “dirty” economic war, but who, in “fleeing, seek a weapon,” as Deleuze said. And seeking, here, means, in the first place, creating, creating a life “after economic growth,” a life that explores connections with new powers of acting, feeling, imagining, and thinking.”
Isabelle Stengers, In Catastrophic Times (2015)

Knowledge, Skill Acquisition & Competence

Stuart and Hubert Dreyfus’ model of skill acquisition is a useful guide in discerning the distinction between a calculation and a judgement, through tracing the path from novice via advanced beginner, competent, proficient through to expert. It describes how, in the early stages, the novice must learn the rules and understand how to use them. As their experience grows (and presumably confidence in their ability to apply the ‘right’ skills), they rely less on formal analytical application of the rules and more on their intuitive knowledge of what will work best in the given situation.

“Dreyfus and Dreyfus’ essential point is to assert that analytical thinking and intuition are not two mutually conflicting ways of understanding or of making judgements. Rather they are seen to be complementary factors which work together but with growing importance centred on intuition when the skilled performer becomes more experienced. Highly experienced people seem to be able to recognise whole scenarios without decomposing them into elements or separate features.”
Mike Cooley, Architect or Bee? The Human Price of Technology (1980)

This model complements the four stages of competence (often attributed to Abraham Maslow), which describes the path from Unconscious incompetence via Conscious incompetence, then Conscious competence to Unconscious competence. Again, from a baseline of lack of ability, and even a lack of awareness of inability, there is a trajectory towards competency becoming innate. It becomes embodied not just in the mind, but absorbed into a whole sense of self such that the delivery of expertise is often described as the expert having an intuitive feeling for the right thing to do.

Experience then becomes the key to transcending the application of rigid rules-based approaches and developing craft, skills and expertise. It is also the domain of art and creative practices. What this amounts to is another order of knowledge that Michael Polanyi called “tacit knowledge”. It is not the procedural, codifiable, step by step, “explicit knowledge” approach that calculation and computation are so excellent at, but something transmitted through experience itself so that the learner eventually acquires the ability to judge what is right to do. Not simply a linear problem-solving trajectory, but a holistic awareness of the whole problem or task. It is committed and informed, acquired by desire and often with passion and with care – a praxis established through dialogue and reciprocal exchange. Being relational, it is a foundation for cooperation and collaboration.

“While tacit knowledge can be possessed by itself, explicit knowledge must rely on being tacitly understood and applied. Hence all knowledge is either tacit or rooted in tacit knowledge. A wholly explicit knowledge is unthinkable.”
Michael Polanyi, Knowing and Being (1969)

Irrational Logics

The Judgement of Solomon (Hebrew Bible/Old Testament, 1 Kings 3:16-28) offers a classic example of wisdom in a judgement that realises justice not through a direct procedure, but through what could be described as an irrational logical path. The story tells of King Solomon being called to make a ruling between two women, both claiming a baby as their own, as to who is the actual mother and should keep the infant. With no other way to tell which woman was the true mother, his perverse solution was to propose cutting the baby in half, dividing it equally between them. His wisdom is reflected in the story when one woman gives up her claim to save the life of the child, thus revealing her as the (most likely) true mother.

The story is of a classic type that has parallels in the literatures and storytelling traditions of other cultures. Such stories illustrate how, sometimes, there is no rational path to truth or a just decision but, instead, an irrational, counter-intuitive approach can reveal it in unexpected ways. It is imaginative and transgressive, employing techniques familiar in creative, artistic practices – excessive, surreal and disturbing. These are not quantities but qualities of imagination. It may be perfectly possible to compose a fiction or a piece of music or an artwork to order, by following rules and formulae (for instance the ‘police procedural’ novel or many a three minute pop song). Yet something else is needed for it to become art or literature that transcends the skeleton of its construction and rises above hackneyed cliché and routine prosaicness. Our entire mode of existence and civilisation now hinges on dilemmas as knotty and seemingly irreconcilable as the problem faced by Solomon, or even more so. We are going to need the wisdom of irrational logics and unfettered imaginations to devise visionary, engaging and realistic ways to resolve them.

“Hard times are coming, when we’ll be wanting the voices of writers who can see alternatives to how we live now, can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine real grounds for hope. We’ll need writers who can remember freedom – poets, visionaries – realists of a larger reality.”
Ursula K. Le Guin, “Freedom” in Words Are My Matter (2016)

Beyond Measurement: the incalculable heart of humanity

Fairness and trust are both qualities or conditions of human experience rather than fixed rules that can be applied indiscriminately. Neither are particularly amenable to formulaic measurement, indeed they are often critiqued precisely because they are almost impossible to quantify. In the context of automated algorithmic decision-making systems (e.g. in Artificial Intelligence and Machine Learning) this lack of fixity and highly subjective nature is frequently alluded to. The lack of stable frames of reference for what is at any one time fair, is a feature of its contingent nature. Likewise with trust – what constitutes the nature of trust in any given situation is highly contingent and almost impossible to codify into a stable matrix of elements and factors.

Yet we instinctively know what feels fair or unfair, and what trust feels like, as equally, when it switches to distrust. Thus it appears that consciousness is also a necessary factor in experiencing fairness and trust, just as I reckon it is for arriving at a judgement. And, since feeling is such an important aspect of both fairness and trust, it could be that these two conditions, like our human intelligence, are bound up not only in the mind and thinking, but are co-located and co-created in our embodied experience of knowing. Perhaps neither are at all suitable for programmatic calculation.

What, then, drives some to persist in trying to automate trust and fairness in an effort to remove the human from the loop in deciding what is fair or trustworthy? It seems perverse to me to be using such technologies to replace the human, instead of defining alternatives that could enhance our understanding and judgement by doing what computers and systems do best – classify, sort and order huge quantities of information to reveal patterns that are not immediately obvious. The analysis and calculation of data could then inform human-derived judgements that encompass broader contexts and situations including mitigating factors and contradictory states not suited to binary classifications. Better together, one might say.
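A minimal sketch of that division of labour (the complaint records and categories here are entirely hypothetical): the machine classifies, sorts and counts to surface a pattern, while the judgement about what the pattern means, and what to do about it, remains with people who know the context.

```python
from collections import Counter

# Hypothetical records: (region, category) of citizen complaints.
complaints = [
    ("north", "housing"), ("north", "housing"), ("south", "transport"),
    ("north", "housing"), ("south", "housing"), ("north", "transport"),
]

# The machine's job: classify, sort and count to reveal a pattern
# that is not obvious from the raw stream of records.
counts = Counter(complaints)
ranked = counts.most_common()
print(ranked[0])  # (('north', 'housing'), 3) -- flagged for human review

# The judgement -- why housing complaints cluster in the north, and how
# to respond -- stays with the people who understand the situation.
```

The calculation narrows attention; it does not, and cannot, supply the deliberation that follows.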

The upshot of the successes of Deep Blue against Garry Kasparov in 1997 and AlphaGo against Lee Sedol in 2016 has been to invigorate both chess and go with new approaches and strategies, enhancing the potential and pleasure of the process of playing. The successes of these systems have not diminished either game, but suggested new possibilities. And here there may be a lesson in determining the difference between a sentient player with consciousness for whom the playing itself may be the point, and a procedural system wholly focused on achieving a finite goal: winning. By focussing on the objective of an end as the goal, those seeking to train “artificial intelligences” might be missing the fundamental point – and value – of playing; that is, the sensations it provides a sentient being of being alive and of existing in relation to something other than themself. A continuity of consciousness.

A deeper question to be addressed is cui bono? Who ultimately benefits from the increasing automation of aspects of our society? Just as the Industrial Revolution and factory production reduced the independence and skills of many craftspeople, so too the automation of everyday life is removing ordinary people from participating in decision-making. It places the definition of how parameters are set higher and higher within a social hierarchy increasingly isolated and removed from the experience of living among ordinary people. Such a rarefied extraction of authority without direct connection to context and situation also shrugs off responsibility, and provides an effective insulation against culpability. Witness the degeneration of our politicians and political system – how lies, deceit and incompetence have become normalised, even venerated, without meaningful consequence.

I perceive there to be a parallel between the political imposition of strict rules and the mechanistic fallacy of atomising everything into discrete parts without perceiving the crucial balance of relations between them. They both ignore the basic truth of life that, while everything is indeed made up of the same elementary particles, their unique composition into the infinite variety of matter and life is absolutely particular. Local specificity is a feature of life’s mutability – how everything is in constant flux and adaptation in relation to its local context and environment. Scale seems to be a crucial issue here – universal laws function well at the atomic level and at the cosmic, but clearly not so unambiguously at the scales we inhabit as lived reality. There, any effective equilibrium is contingent on diversity and locality.

Complex living systems just don’t seem to obey laws and rules that are based on reductionist concepts. Perhaps at the extremes it is possible for static rules to operate seamlessly, but in the elastic middle we need flow and dynamism. As atoms themselves are held together by the forces, or relations, between electrons, protons and neutrons, so all of matter and reality are bound by the multifarious forces and relations that govern the natures of different entities. To overlook the reality of our relational existence and to reduce everything down to inert and unconnected bits is, inevitably, to be missing a key part of a whole equation – a series of calculations that will never completely add up. Connection, interdependence, reciprocities are the fundamental forces that bind and make whole the matter of life.

We will need all of humanity’s diverse knowledges and skills, from poetry, art and music together with mathematics, physics and many others, to find the necessary paths to a fair future for all life on the planet. We shall need the ability to calculate and build machines that can help us sort and make sense of vast amounts of data, whilst simultaneously we must retain our independent, fluent, human capability to judge – soundly – what decisions are most appropriate for each circumstance we encounter and must respond to.

The Uses of Not
Thirty spokes
meet in the hub.
Where the wheel isn’t
is where it’s useful.

Hollowed out,
clay makes a pot.
Where the pot’s not,
is where it’s useful.

Cut doors and windows
to make a room.
Where the room isn’t,
there’s room for you.

So the profit in what is
is in the use of what isn’t.
Lao Tzu, from Tao Te Ching: a book about the way and the power of the way
(a new English version by Ursula K. Le Guin, 1997)

London, July 2019

Stimulating and Inspiring Civic Agency

Over the past couple of weeks – at the V&A Digital Design Weekend and the UnBias Showcase at Digital Catapult – I’ve been sharing and demonstrating the UnBias Fairness Toolkit to people from all walks of life. The response has been enormously enthusiastic as people have immediately imagined using it in the contexts of their own working lives and interests. They have instantly grasped its power to stimulate critical thinking, find and share people’s voices on these issues (bias, trust and fairness in algorithmic systems) and see how this can contribute to a public civic dialogue that involves industry, government, the public sector and civil society too.

What the Toolkit Offers

  • A pragmatic and practical way to raise awareness and stimulate dialogue about bias, trust and fairness in algorithms and digital technologies.
  • It is designed to make complex and often abstract ideas tangible and accessible to young people and to non-experts across society.
  • It supports critical thinking skills that can help people feel empowered to make better informed choices and decisions about how they interact with algorithmic systems.
  • It helps collect evidence of how people feel about the issues and what motivates them to share their concerns by contributing to a public civic dialogue.
  • It provides a communication channel for stakeholders in industry, policy, regulation and civil society to respond to public concerns about these issues.
  • It can also be used by developers of algorithms and digital systems to reflect on ethical issues and as a practical method for implementing Responsible Research and Innovation.

Where Next?

The next stage is slowly becoming clear – what I believe we need is a national programme to train people, especially those working with young people, in using the toolkit, and to inspire people working in industry, regulation and policy to understand how to use it as an applied responsible research and innovation tool. We want to get the toolkit into as many schools, libraries and other places as possible, where young people, and others of all ages, can enhance their awareness, their critical thinking skills and their understanding of the issues we face around digital literacy, and of the profound effects that digital technologies are having on our society and democracy.

Over the coming months I will be sounding out potential partners and sponsors/funders to make this possible.

This would be the first step in a more expansive programme on enabling agency, building on this and much of my and Proboscis’s previous work. It’s not something I expect to achieve alone, so I am hoping to bring like-minded collaborators together under the umbrella of this concept of civic agency to grow our capabilities and capacities for engaging people in new forms of critical thinking and autonomous and collective action to address the challenges we face as communities and as a society, today and for the future.

Civic Thinking for Civic Dialogue

Over the past six months or so I have been focused on my work for the UnBias project which is looking at the issues of algorithmic bias, online fairness and trust to provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with young people and other stakeholders. My role has been to lead the participatory design process on the Fairness Toolkit, which has involved devising and facilitating a series of workshops with young people in schools and a community group, as well as with stakeholders in the ICT industry, policy, civil society and research fields. My colleagues in the Human Centred Computing group at the University of Oxford and the Horizon Digital Economy Institute at the University of Nottingham, as well as Informatics at the University of Edinburgh, have been wonderful collaborators – providing a rich intellectual and pragmatic context for developing the tools.

The co-design workshops with two schools (in Harpenden and in Islington) and with a young women’s group in Oxfordshire explored what their levels of awareness of the issues were, how relevant to their own lives they perceived them to be, and what they thought should be done. In each workshop, and with each group, we consistently encountered quite different perceptions and experiences – often unexpected and surprising – whilst also observing certain commonalities, which were echoed in the findings of the Youth Juries which our colleagues at Nottingham have been running for UnBias since late 2016. Many of the young people expressed a certain fatalism and lack of agency regarding how they use technology which seems to foster a sense of isolation and inability to effect change. This was coupled with a very limited sense of their rights and how the law protects them in their interactions with service providers, institutions and big companies. Unsurprisingly, they often feel that their voice is not listened to, even when they are the targets of some of the most aggressive marketing techniques.

The tools have thus been informed and shaped by young people’s perceptions and their burgeoning understanding of the scale and depth of algorithmic processes affecting modern everyday life. The tools have also been designed to address the atomising effect that personalised technologies are increasingly understood to have – whereby the increasing personalisation of platforms and services isolates our experiences of media and the mediated world from each other. Where broadcast technologies used to be understood to have a homogenising effect on societies, networked technologies, and the highly personalised software services running on them, are creating a sense of isolation from other people’s cultural and social experiences as they serve each of us something more bespoke to our own tastes and preferences. Recent controversies over the use of targeted advertising in US and UK elections have exposed the iniquitous consequences of such hyper-specific campaigning, and offered a new set of insights into the wider, and deeper, social and cultural impacts happening around us.

I have tried to design a toolkit that could build awareness of these issues, offer a means to articulate how we feel about them, and provide a mechanism for ‘stakeholders’ (in the ICT industry, policymakers, regulators, public sector and civil society) to respond to them. What has emerged is something I call a ‘civic thinking tool’ for people to participate in a public civic dialogue. By this I mean a mode of critical engagement with the issues that goes beyond just a personal dimension (“how does this affect me?”) and embraces a civic one (“how does this affect me in relation to everyone else?”). And then, when we participate in a public dialogue about these issues, it is not simply conducted in public, but it embraces the co-construction of our society and acknowledges everyone as having a stake and a voice within it. It is about trying to find co-constructive and non-confrontational means to engage people in critical reflection about what kind of world we want to have (and the roles algorithmic systems in particular should play in it).

On Monday we held a workshop to preview the first draft of the toolkit and seek feedback from a variety of stakeholders. Take a look at the presentation below to find out more:

The response has been very encouraging – highlighting the strengths and revealing weaknesses and areas that need additional development. The next stage is to start a testing phase with young people and with stakeholders to refine and polish the toolkit.

We are also developing relationships with “trusted intermediaries” – organisations and individuals who are willing to adopt and use the toolkit with their own communities. As the UnBias project concludes in August, our aim is to have the toolkit ready for deployment by whoever wants to use it this Autumn.

Obligation. Responsibility. Culpability

A conversation with a friend a few days ago has made me think about another frame in which to consider action and effect. One which acknowledges the complex, entangled nature of our social relationships and the delicate balance of forces that hold that web of relationships together.

Obligation
To whom do I have (mutual or reciprocal) obligations, and what is the nature of those obligations? Will my actions demonstrate my enacting those obligations or an avoidance of them?

Responsibility
What are my responsibilities to those with whom I have relationships? How do I balance my needs and any impact my actions may have on them?

Culpability
Am I willing to accept the consequences of my actions? Am I willing to amend my behaviour in future to reflect any detrimental impact of my actions on others?

In these muddled times, full of fear and anxieties, being able to think clearly, to make informed choices and decisions seems ever more urgent and necessary. Understanding that we are all connected in multi-lateral and multi-dimensional ways, and to have the critical means to make assessments that acknowledge this instinctively, is like a ray of sunlight cutting through the gloom.

Reciprocities of Trust

From Informed Consent to Reciprocal Exchange

At the heart of working with communities, indeed anyone, is the issue of trust. In this piece I am setting down some of my thoughts about how collaborative practice can be built upon reciprocities of trust and why that is different to some of the established models I have encountered in research and practice stemming from the ethics of informed consent and how they are often applied.

At the root of how someone behaves is their ethos – not a checklist of ‘ethical guidelines’ to be adhered to, but the fundamental characteristics of who they are as well as why, and how, they do what they do. A person’s ethos is not merely a set of changeable beliefs but the core that directs their actions on both conscious and subconscious levels. If a person’s own ethos is not in alignment with the ‘ethical’ guidelines they are required to follow for a project, then it seems to me that their adherence to an ethical framework and use of informed consent is unlikely to be more than an instrumental procedure that does not necessarily guarantee the reciprocity of trust that is implied.

In my experience, the mechanisms by which trusted relationships can be established between participants in a project have to be tailored for each specific instance. Sometimes it is possible to use well-honed mechanisms and processes whereby each party can validate the others to establish a basis of trust on which to proceed. In other situations individuals have to build up trust through demonstrations and actions that directly establish their trustworthiness to each other. How we create these kinds of reciprocal relationships is critical to our ability to realise cooperative and collaborative creations of value between people.

Below are some notes on formulations that trace my own path from informed consent to reciprocal exchange. My reason for writing this is not to suggest that all uses of informed consent are wrong or badly applied, but to articulate and explore my own approaches to formulating something analogous to the rigour of well-applied informed consent that works within the particular contexts of independent engagement practices.

Informed Consent

Since first entering the world of academic research in the late 1990s I have become increasingly uncomfortable with the mechanisms and procedures through which I have often seen ‘informed consent’ applied. My feeling has always been that the language of ‘informed consent’ tends to presuppose a top-down hierarchy, in which research ‘subjects’ are studied, acted upon or have information or knowledge extracted from their situation, not always with a concomitant return. The principle of engaging people in research on an informed and consensual basis is clearly fundamental to any practice that strives to work on an equitable basis. Too often, however, the processes and application of ‘ethics’ appear instrumental and unsound – focused more on limiting institutional liability than on truly safeguarding those whose consent has been sought. To me this smacks of an acceptance of, even comfort with, unbalanced power relations that extract value in one direction only.

As an artist working independently of (although often in partnership with) institutions, I have rarely been required to seek approval for my work from an ethics committee or to justify my methods and approach in this way. These structures simply do not exist. In their absence, artists and arts organisations have to develop their own idiosyncratic ways of working that reflect ‘ethical’ positions and best practices. These are often hard to evaluate precisely because they are not codified, or even designed to be measured, in the ways institutions are obliged to evidence in order to comply with policies, laws and other conventions. Both approaches have much to offer: standards can be agreed upon and evaluated against, whilst more individual, ethos-driven processes can be more fluid and adaptable to specific situations and contexts.

In 2008 a four-year national programme, Beacons for Public Engagement, involving over twenty UK universities, began with the aim of addressing historic imbalances between research culture and the public (often those being studied) in order to “change the culture in universities, assisting staff and students to engage with the public”. The programme defined engagement as “a two-way process, involving interaction and listening between all parties, with the goal of generating mutual benefit.” Changing attitudes and behaviours within well-established systems can only be achieved over long timeframes, so it will be interesting to see whether any sustained evaluation of this programme is undertaken, for instance at five- or ten-year intervals, demonstrating not only the value to universities of engaging with the public in mutually beneficial ways, but also the extent to which communities have been able to identify and articulate value from such engagement.

The emergence of the Free, Prior & Informed Consent (FPIC) model of working with indigenous peoples (and enshrined in 2007’s UN Declaration on the Rights of Indigenous Peoples) is also a hugely significant development along these lines, applicable in the development of trusted relations between any parties where there is a discernible imbalance of power.

Informed Disclosure

This concept emerged during the early stages of a collaboration with Dr Lizzie Coles-Kemp on what became the Pallion Ideas Exchange. This project was sited in a deprived ward of the city of Sunderland where, in the face of massive changes to benefits and the erosion of the welfare infrastructure on which this largely deindustrialised area depends, an intergenerational group of locals had expressed a desire to create their own online ‘knowledge network’ to fill the growing gaps left by the State. As we explored with the group what they meant by this, it became apparent that we were looking less at a database of fixed information points and much more at a process of documenting and sharing what people knew and had experienced, which could then be shared with others who needed help with specific problems.

Growing out of the VOME research project led by Lizzie, it was obvious that this process would need to engage with the use of social media, as well as the web and mobile technologies, at various levels. Beyond the general privacy issues that affect any use of social media and the internet, there are often specific situations in which the identification of individuals within localities and communities presents proximate dangers. In Pallion this was an issue that required the close attention of all participants as, whether through forgetfulness, ignorance of the potential consequences or a kind of carelessness, there was clearly potential for individuals to experience harm (such as benefits sanctions, among other issues) as a result of participating or sharing what they knew. Lizzie and I attempted to address this in the project by reiterating a rubric of ‘informed disclosure’ at each workshop and meeting, and by building prompts and reminders into the tools we co-designed and created with the group, encouraging participants to consider potential consequences before sharing. As part of a generic toolkit developed out of this project we also created a visual guide to choosing online services, which works as a perception axis between private/public and open/restricted. This was a practical attempt to help someone think through the consequences of choosing and using different online services and social media before adopting them, embedding the concept of informed disclosure at the starting point as well as along their online journey.

For me, the principle of informed disclosure went much further than this pragmatic example; it encompassed how my colleagues at Proboscis and I collaborated with Lizzie and her co-researchers as well as with the community in Pallion. It meant being open about what we each hoped to achieve from the project, and about what we were each getting from it as well as putting into it. It meant being open about our roles as paid professionals engaging with unpaid volunteers in a community that has experienced multiple deprivations since the UK Government agreed to the closure of North East England’s key industry – shipbuilding – back in the early 1980s, and the consequent disappearance of the subsidiary industries that relied on it, resulting in cross-generational long-term unemployment and the loss of skills, hope and aspiration for the future. This kind of openness was crucial to establishing trust and maintaining it, not only during the project but long afterwards too.

Engaged Consent

Whilst in Reite village in Papua New Guinea during November 2012, James Leach and I wanted to push the concept of informed consent further as we co-designed some initial TEK Notebooks with several community members. This experiment was intended as a proof of concept for the discussions we had had with James’ long-term Reite collaborator, Porer Nombo, both before and after our participation in the Saem Majnep Memorial Symposium on Traditional Environmental Knowledge at the University of Goroka in PNG the previous month. A number of the delegates had discussed in their presentations their approaches to informed consent and to working with indigenous people in PNG, some of which were really admirable.

Once in Reite village itself, we felt that it was important to establish what was actually meant by consent in this context – and this then informed how the whole experiment was shaped. The notion of engaging the consent of the participants who filled in the TEK Notebooks was relevant not just to demonstrate our ‘ethical’ approach to outsiders (such as academic and NGO colleagues), but was woven into the nature of the prompts used in the notebooks themselves. The participants were asked not just to give their consent to their notebooks being digitised and shared online, but also to indicate what rights they had to share the knowledge they were including, essentially linking back to the community and heritages of which they are part. This seemed a crucial way to ensure that what was being done was relevant and respectful to the internal relationships of the communities of Reite and Sarangama (a nearby village, some of whose members also took part). It provided declarations that the kinds of knowledge being shared were not secret or privileged, were given freely, and came with a description of the author’s right to share them. The prompts themselves emerged out of discussions with a number of community members (in which I was mainly an observer), which gave a strong sense that what we were doing was truly co-created, emerging from a process of open collaboration.

As a time-limited experiment, our formulation of engaged consent was necessarily only partly developed but points, I think, to how we can do research with people, not just upon them.

Reciprocal Exchange

Since completing both the Pallion Ideas Exchange project and following on from my first Indigenous Public Authoring field trip to Reite I have found myself moving more and more towards articulating my personal aim of ‘reciprocal exchange’ with the people and communities with whom I work. My goal in entering into collaborations is to learn from others, experience things I cannot (or would not) make happen on my own – to stretch myself in a continuous process of becoming. It would be a selfish or at least self-centred process without the sense of obligation to reciprocate with others, to offer whatever knowledge, skills and experiences I have in a way that enables others to adopt and adapt them for themselves.

Perhaps this is why I have often felt uncomfortable with the use of ‘ethics’ and ‘informed consent’ as I have seen and encountered it applied in some research contexts. My research work is not based on creating objective studies so much as engaged directly in working with people to effect social and cultural transformation. For this I believe that more is needed than just consent – it requires active participation, mutual trust and reciprocal exchange.

This value of reciprocal exchange also underpins the work I have been doing with Oxford Brookes University on developing a process of engaged participatory design for a new kind of rehabilitation measurement tool, which survivors of traumatic brain injury (TBI) will be asked to use to share their rehab experiences. Previous methods and tools have primarily focused on what clinicians and researchers needed to know. We, however, have started from the point of also trying to understand what benefits the activity may hold for the TBI survivors themselves – as they see it – and how the process of contributing information to help clinicians better understand their experiences can be part of their own rehabilitation. This is a challenging step in developing tools within a medical context: embedding the patient’s perspective at the heart of designing a process intended to learn from their information and data is not as common as many might think.

* * * * *

James and I are now gearing up for the next stage of our Indigenous Public Authoring collaboration: a field trip in early 2015 (and another in 2016) back to Reite to work further with community members and explore methods and tools appropriate to their situation and context – ultimately aiming to put together a simple, adaptable toolkit and process for recording and sharing traditional environmental, cultural and ecological knowledge, co-designed and co-created in situ with the community.

At the heart of this project, for me, is this question of reciprocal exchange – what is each participant in the process bringing and taking away? How does it bind us into relationships of exchange and obligation to each other? The disparities of our ways of life and the worlds we inhabit mean that establishing an equitable relationship is unlikely to be based purely on material exchange – as it might be in the industrialised world of goods and money – although undoubtedly this will be involved. More likely, it seems to me, an equitable relationship will emerge out of shared acceptance of obligations to each other, and the articulation of these obligations through processes of collaboration and making things.

And the only material that these relationships can be forged with is trust.