From Principles to Action: Implementing Tech Ethics
The tech ethics community is at an inflection point. The broad and pervasive applications of Artificial Intelligence and other technologies are no longer a future possibility; their potential for harm is immediate. As practitioners, we need to respect that urgency by moving beyond words to principled action. We can only do so by reflecting on our own values as individuals and as stewards of our organizations, and by applying those values consistently and unflinchingly. We also need to be empowered and enabled by the leaders who hired us. Our responsibility is to operate in the uncomfortable spaces between technology and society and to advocate for humanity.
The infectious techno-optimism of Silicon Valley has led the public to place an immense amount of trust in tech companies to respect our privacy, autonomy, and well-being as we reshape society to embrace new technologies. However, it is also that techno-optimism that leads to well-intentioned ‘unintended consequences’: the context-free application of technology in ways that do not respect the intricacies of the unique communities it impacts. These shortcomings are not evenly distributed. Unsurprisingly, it is the already marginalized and excluded who suffer the most, as we shift our pre-existing institutionalized cultural and social biases from the physical to the digital.
Leaders are listening, and the latest thing Silicon Valley seeks to reinvent, it seems, is democracy. We are embracing the need for democratic processes, whether through town halls, internal and external review boards, or other forms of discussion and debate. Governance is not new to companies. This iteration, however, reflects a wave of shareholder and stakeholder activism that urges all companies, not just Big Tech, to move beyond mere legal compliance to actively doing good, or at least not doing harm. Our technologies push the boundaries of law and call into question the fundamental role of corporations in society.
Recently, the MIT Technology Review published an article from a community of academic and civic-group experts with salient advice on how tech giants need to think about governance. As practitioners working inside the companies trying to implement these democratic processes, we share their concerns and express our solidarity with their words. We are the individuals tasked with executing leadership's ambitions to create fairer, more accountable, and more transparent systems and processes. We welcome the voices of impacted communities to the conversation and agree that we need to work together to determine the best path forward.
In order to continue the conversation framed by our colleagues, we as practitioners have our own thoughts we’d like to contribute in the hopes that we spark a collaborative effort in the spirit of collective action:
Rumman Chowdhury, Global Lead for Responsible AI, Accenture: Critical to good governance of Artificial Intelligence is a culture of constructive dissent at an organization. Do workers feel empowered to raise red flags on systems and processes? What explicit protections do they have against retaliation if they raise awareness of critical issues? Ultimately, organizations benefit in the long term from respecting and protecting employees who rightfully raise questions in the best interest of society and of the company itself.
Second, corporations cannot separate AI ethics from their core values and mission statements. Our support of social issues needs to be reflected, by design, in the products we build. Rather than introducing a board at the end of product development, a better approach is to use this resource as an early-stage education tool. No single board can encompass all of the skills necessary to provide such nuanced education on every technological implication. A better plan is to use this body as a sourcing group to identify experts from a wide range of backgrounds and empower them to shape design and development.
At Accenture, we are constantly refining our Responsible Business practice, an interdisciplinary leadership and practitioner community that spans technology, sustainability, legal, corporate social responsibility, and other disciplines. We evangelize and apply the principles developed by this group via our Responsible Innovation groups, which include Responsible AI but also cover other technologies, such as blockchain, AR/VR, and quantum computing.
Timnit Gebru, Research Scientist, Google: Many times, there are people inside corporations who have different types of expertise and have thought about things from angles that people in leadership positions have not considered. There are people with specific lived experiences and context that many do not have. And all of us have blind spots. Even among the activists inside and outside of US-based multinational corporations, I can see, for example, that many of our current discussions around ethics are conducted from a Western-centric viewpoint and an ableist viewpoint, because those are the voices that are most represented. And always being in some sort of bubble or another means that it will be very difficult for us to identify our blind spots. Many of us become defensive when people sound alarms or criticize a process or some part of our work. Just like any individual, those in leadership positions at corporations should welcome feedback that may not be aligned with the short-term goals of the corporation. Those from marginalized communities in corporations should not be tokenized or dismissed. They need to be heard, respected, and given a seat at the table. It is not enough to have “diverse” faces or “ethical” faces. It is important to give those people the power to effect change while at the same time not putting the entire burden on their shoulders.
Margaret Mitchell, Research Scientist: One thing that seems important at this point in history is to have an informed discussion about the different forms of governance for artificial intelligence. Within corporations, a direct democracy may not lead to decisions that are the most beneficial for society or for the corporation; other forms of governance bring with them the question of whose voices matter more than others. At the same time, there are regular and predictable biases in the AI community around whose voices matter more than others when it comes to gender, race, ability, religion, neuro(a)typicality status, ideology (this list is not exhaustive). Overcoming these ingrained human biases requires active work. We have the opportunity to go beyond the limits of our own biases as we work towards more intelligent AI by creating governance structures that prioritize inclusion from the start.
Hilary Mason, Founder and CEO, Fast Forward Labs, Author of Ethics and Data Science: My suggestion is twofold. First, an effective board ought to have the autonomy and authority to audit, set policy, and make recommendations for what the ethical standards look like. That board should be representative of the people who are impacted by the ethical decisions as well as the people making them. Second, most policy is currently so vague that a person actually building a product may not know how, or whether, it applies to their work. I would love to see companies focus on tools, processes, and resources so that the person writing the code and collecting the data is empowered to make the best decisions possible.
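As one illustration of the kind of tooling Mason describes, the sketch below imagines a data-ethics checklist that lives alongside the code and blocks a pipeline until each item has been explicitly signed off. The checklist items, class, and gate function are hypothetical assumptions made for this example; they do not describe an existing tool, standard, or any particular company's process.

```python
# Hypothetical sketch: a data-ethics checklist enforced in code rather than in
# a policy document. All item names and the gating behavior are illustrative.
from dataclasses import dataclass, field

@dataclass
class EthicsChecklist:
    items: dict = field(default_factory=lambda: {
        "consent_documented": False,        # do we have the right to use this data?
        "bias_audit_completed": False,      # were error rates checked across groups?
        "pii_minimized": False,             # is personal data limited to what is needed?
        "redress_mechanism_defined": False, # can affected people contest decisions?
    })

    def sign_off(self, item: str) -> None:
        """Mark a checklist item as reviewed and approved."""
        if item not in self.items:
            raise KeyError(f"Unknown checklist item: {item}")
        self.items[item] = True

    def require_complete(self) -> None:
        """Raise if any item is unresolved, blocking the pipeline from running."""
        missing = [name for name, done in self.items.items() if not done]
        if missing:
            raise RuntimeError(f"Blocked: unresolved checklist items: {missing}")

checklist = EthicsChecklist()
checklist.sign_off("consent_documented")
try:
    checklist.require_complete()   # the training pipeline would start here
except RuntimeError as blocked:
    print(blocked)                 # Blocked: unresolved checklist items: [...]
```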
Luke Stark, Postdoctoral Researcher, Fairness, Accountability, Transparency and Ethics (FATE) Group, Microsoft Research Montreal: AI companies often make promises to their employees and customers to honour and champion diversity, inclusion, and dignity. Inherent in those ideals is a commitment to treating all people equally and with respect. Those promises and commitments are empty when Black and Indigenous people/persons of color, women, and LGBT people suffer discrimination both inside companies as employees and outside them as people disproportionately targeted and harmed by these technologies.
There is an onus on those with structural power and privilege within our companies to actively ally with, and take their lead from, those whose lives and status as equals are threatened by AI systems and the business models driving their current popularity. Our industry leaders need to practice what they preach, not only by respecting all human rights but by actively speaking out, and acting forcefully, against fascism, racism, misogyny, xenophobia, and homo- and transphobia. Industry leaders claim to deplore these ills within their companies; we all need to stand together to make sure they also stand against them in the contexts where AI is deployed.
Anima Anandkumar, Director of Machine Learning Research, NVIDIA, Bren Professor of Computing at California Institute of Technology: I am happy to see increasing attention being paid to the ethical and fair use of AI. Unfortunately, it is harder to convert these good intentions into actionable change. It starts with understanding how the use of AI affects different stakeholders, especially marginalized communities, and with drawing on their experiences. The recent Amazon incident regarding the sale of its face recognition services to law enforcement demonstrates this. Instead of trying to discredit the academic researchers who demonstrated bias in AWS services, I wish there were a more productive discussion on how we can improve the fairness, transparency, and accountability of AI services.
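The kind of bias audit Anandkumar refers to often comes down to reporting a model's performance disaggregated by demographic group rather than as a single aggregate number. The sketch below is a minimal, hypothetical illustration of that idea; the group labels and toy predictions are invented for the example and are not drawn from any real audit, dataset, or service.

```python
# Hypothetical sketch: compare error rates across demographic groups instead of
# reporting one overall accuracy figure. All data below is invented.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy example: a model that looks fine in aggregate can still fail one group
# far more often than another.
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]
print(error_rates_by_group(predictions))
# {'group_a': 0.0, 'group_b': 0.5}  -> a disparity worth investigating before deployment
```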
Lisa Dyer, Director of Policy at the Partnership on AI: We, the people of the global AI/ML community, must be able to freely and easily come together to conduct research, develop best practices, and educate the public about AI/ML to create the world in which we want to live.
People are the essence of the Partnership on AI to Benefit People and Society (PAI), encompassing its staff, Partners, and colleagues in the global AI community. Our Partners consist of people from civil society, academic organizations, and for-profit companies who believe that our future depends on creating, developing, and operating AI/ML systems that benefit people and society. Those in the AI/ML community, including our Partners, are focused on creating a better future for all people around the globe in the AI/ML age. “To Benefit People and Society” is the most important aspect of the title of our organization.
This shared belief is vitally important, simply because the stakes are high. The promises and the perils of AI/ML systems affect people around our world and are not limited to geographic boundaries, nationalities, specific systems of government, or those people making up the AI community. Ultimately, the people of the global AI/ML community (researchers, members of civil society organizations, companies, academic institutions, public servants, neuroscientists, anthropologists, ethicists, and civil rights and community-based organizations) will determine whether the world realizes the potential of AI/ML while minimizing its peril.
Rachel Thomas, co-founder of fast.ai, Professor at the University of San Francisco: Talking about ethics is meaningless if not accompanied by substantial action. For those feeling overwhelmed about where to start in embodying AI ethics in their workplace, I recommend using this checklist for data projects from Ethics and Data Science, as well as implementing the processes from the Tech Ethics Toolkit.
While implementing more ethical processes within companies is important, government regulation will be a crucial component of how we protect human rights. As Yoshua Bengio said, self-regulation is as ineffective as voluntary taxation. In the process, we need to be careful that we don’t end up with meaningless regulations written by industry lobbyists. The actions of industry lobbyists are often in direct contradiction to the statements being made by leaders at the same companies.
Amanda Askell*, Ethicist and Research Scientist in Policy, OpenAI: Machine learning and AI are going to affect more and more people both within the US and around the world. And if people will be profoundly affected by a technology, it seems reasonable that they should have some say in how it affects them. So if technology companies want to take some responsibility for the impact of their products, they’re probably going to have to solicit the views and concerns of people they strongly disagree with and consider them on their merits. This is a difficult task in many respects. It’s often not feasible to consider the views of everyone who will be affected by a technology, and it’s not clear how to decide which views to include and exclude in a way that everyone will consider reasonable. It’s also not easy to maintain norms of civility and respect when dealing with deeply entrenched disagreements. Finally, it’s not clear how companies should respond when there is persistent disagreement about whether a technology should be developed or how it should be released. In light of these difficulties, perhaps the most constructive message is simply that we don’t have an ideal way of dealing with these issues yet, and we should all acknowledge that mistakes are likely to be made while we try to find one.
*Note from OpenAI: This comment should not be taken as an indication of full agreement with the letter above. For example, I believe that what technology companies are currently pursuing is not best described as democracy, because company-appointed review boards are at best an instance of non-electoral representation.
Source: Industry ethicists, "From Principles to Action: How Do We Implement Tech Ethics?"