We have publicly stated our Company Values — Trustworthy, Caring, Open, Inquisitive, and Inventive — and these inform our decision-making processes.
We seek to align our work with the OECD's AI Principles, which promote innovative and trustworthy AI that respects human rights and democratic values.
We recognize that while we are machine learning experts, we are not necessarily experts in identifying and avoiding all possible unintended consequences in the solutions we build. In order to build better solutions, we maintain an ever-improving set of ethics-related internal methodologies and tools, are increasingly active members of the AI ethics community, intentionally foster a diversity of opinions and perspectives on our teams, and consult subject matter experts who can help us understand potential ethical implications.
Our employees have the right to advocate for the company to turn down work that isn't aligned with our company values, and we do turn down such work. Our employees also have the right to choose to work only on projects that align with their personal values. We encourage thorough and open discussion of these issues during our weekly Ethics Discussion Group meeting. Through this process, we have decided as a company:
Any work that makes predictions about human beings, i.e., any model that takes in information about an individual and makes a judgment about them, will be approached with special consideration and vetted to ensure that subgroup populations of the dataset are fairly represented and/or addressed (a minimal sketch of such a subgroup check appears after this list). Our aim is to reduce, as far as possible, the algorithmic bias in our models that originates from systemic biases present throughout society.
We will not work on the following:
AI applications related to weapons, offensive military technology, and addictive products, including but not limited to gambling and nicotine products.
We are also broadly opposed to surveillance without informed consent, and we will not create models to identify individuals using biometric data for the purposes of surveillance. This includes, but is not limited to, facial recognition, gait recognition, and iris recognition.
We will also not create models using data that is overly personal for the business use case without proper disclosures or approvals, nor will we use data that has been harvested deceptively.
We respect, credit, and treat fairly the people we work with to build AI solutions, including but not limited to end users, data annotators, and data subjects.
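To make the subgroup vetting described earlier in this list more concrete, the following is a minimal sketch of what such a check might look like in practice. It is illustrative only: the toy dataset, the column names (age_group, label, prediction), and the 0.1 disparity threshold are assumptions made for this example, not a description of our internal tooling.

```python
# Minimal sketch: compare a model's behavior across subgroups of an evaluation set.
# All names and thresholds here are illustrative assumptions, not internal tooling.
import pandas as pd

def subgroup_report(df: pd.DataFrame, group_col: str,
                    label_col: str = "label",
                    pred_col: str = "prediction") -> pd.DataFrame:
    """Return per-subgroup sample counts, positive-prediction rates, and accuracy."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "share_of_data": len(sub) / len(df),
            "positive_rate": (sub[pred_col] == 1).mean(),
            "accuracy": (sub[pred_col] == sub[label_col]).mean(),
        })
    return pd.DataFrame(rows)

# Example usage with a toy dataset; in practice this would be a held-out evaluation set.
df = pd.DataFrame({
    "age_group":  ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "label":      [1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 1, 1, 0, 0],
})
report = subgroup_report(df, "age_group")
print(report)

# A simple flag: if accuracy differs between subgroups by more than an agreed
# threshold (here 0.1, an arbitrary illustrative value), the model is escalated
# for closer review before any release.
if report["accuracy"].max() - report["accuracy"].min() > 0.1:
    print("Subgroup accuracy gap exceeds threshold; flag for review.")
```

In practice, which metric to compare and what threshold to use depend on the specific use case; the point of the vetting step is that an explicit, reviewable comparison across subgroups happens before a model is put into use.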
As we enter the age of AI, we aspire to lead by example in building only ethical and responsible AI systems and to help our clients do the same. We also seek ways to help ensure that all companies developing AI understand their responsibility to build only ethical and responsible AI and to avoid unintended consequences. We do this because it is aligned with our values, because living our values makes us a better company, and because we want to help create a future we want to live in.