Web3 and communities at risk
A recent report from the Minderoo Centre for Technology and Democracy at the University of Cambridge
Academic research is important because it articulates the issues within a given topic with surgical precision. It is generally not constrained by corporate influence, so it can sound alarms, raise concerns early, or uncover problems after the fact. I came across this valuable report this week, and if you work on Web3 projects (in any capacity) I think you should read it. There is both a PDF version and an accessible plain-text version at the link below:
Web3 and communities at risk: Myths and problems with current experiments
From the executive summary:
The report demonstrates 3 key areas where Web3 experimentation from start-ups, humanitarian and development aid organisations, and other non-traditional commercial partners is targeting marginalised groups: Payment, Currency, and Identification.
The author’s point is that these startup products are being tested where the stakes are high, on users who have far more to lose than users in other markets would.
The report features case studies and offers 10,000-foot-view recommendations that should give pause to anyone working in this space. The final recommendations are summarized this way:
Web3 technologies, especially untested cryptocurrencies, should not be imposed experimentally on marginalized communities.
Public institutions must coordinate around vetting private Web3 companies.
We need qualitative evidence-based research on the design, maintenance, and use of Web3 technologies.
The final recommendation is the one I latched on to. Though the author cannot provide insight into the design approach and practices of the firms developing the tools described, her findings point directly at the fallout some of these projects have left in their wake. It’s hard not to see design as an obvious part of the solution. Research and design are always necessary in developing digital tools, but ESPECIALLY when you are crossing borders and working in different cultural settings.
In the case of the Sarafu network, the nonprofit Grassroots Economics seems to have done extensive research; at the least, numerous papers and articles are listed to assert that “research” has been done. And while the report highlights this project as one that has been tested with pilot programs, the author also indicates the outcomes are still uncertain with regard to surveillance risk. (I’ve not had time to look up any of the other projects or the firms that built them.)
I would caution any designer against assuming that just because they are “designers doing design,” harm will be avoided. This is simply not true, and assuming it reflects a position of privilege and entitlement. Designers can cause harm, even unintended harm. It can come from not advocating for accessibility. It can stem from cognitive or cultural bias. It can come from a system lacking customer support or risk mitigation from the very beginning. Every day, people are abused, stalked, and robbed of their identities through digital tools ostensibly “designed by designers doing design.”
I had many questions while I read the report:
What kind of ethnographic research were they doing?
Did any of these teams test their projects in other markets before rolling them out to these at-risk users? Or did they all just “move into the neighborhood” to test their MVPs?
What kind of infrastructure was implemented or “designed” to support users, if any? Is there customer support if someone’s crypto gets lost? Or if they get locked out of their account or lose their passphrase?
If loss of currency is possible, and lack of knowledge of how the system works is common, how are these tools being designed to support the user when the system fails?
What was the cultural makeup of the design team?
Startups historically want to “move fast and break things.” Judging from this report, it is apparently not obvious to everyone that marginalized groups are not something anyone should be willing to break. New products do need to be tested within their intended market, but “move fast and break things” is not the position UX designers should start from. It is our job to protect users. In this era of digital extraction, that may mean objecting to how your project goals, or even your project team, are designed. It may mean questioning whether something should be designed at all. If you are unable to speak up for those who cannot speak for themselves, then you should question whether the project is ethically viable. I understand this is a black-and-white statement, and every project has its own nuances. But there can be no middle ground when “breaking” the security and safety of those we are designing for is a risk.
We need more research like this into blockchain technology, because things are moving so fast, and startups don’t often stop to ask who their “disruption” is hurting.
Recommended reading:
Design for Safety by Eva PenzeyMoog
“Eva PenzeyMoog explains how even the most well-intentioned design can be weaponized for interpersonal harm. Through poignant, all-too-common examples, Eva demonstrates how to identify a design’s potential for abuse, how to avoid and mitigate the damage, and how to bake safety into every step of the design process. We can’t build good digital products unless we recognize that our users’ safety, and lives, are at stake.”
Did you read the report? I’m curious what questions you came up with.