Ethical Artificial Intelligence

Timnit Gebru, an Ethiopian American researcher, is known for her AI ethics research. The co-lead of Google’s Ethical Artificial Intelligence team tweeted that she had been fired from Google over a research paper that highlighted bias in AI. While AI ethics researchers and sociologists are concerned about the move, it also exposes a clear gap in organizational approaches to AI ethics: the collaborative effort needed to create a culture that upholds AI ethics principles.

Ownership of and access to data and technology will always be unbalanced, with economics and politics on one side and rights and transparency on the other. Amid concerns about bias, political influence, hate speech, and discrimination, technology ethics is becoming a boardroom conversation for technology companies worldwide, whether the topic is platform and product ethics or responsible, ethical artificial intelligence.

Several frameworks and principles for AI ethics have been developed and shared by academic think tanks and technology behemoths. These frameworks and corporate principles typically attempt to cover common themes such as privacy, non-discrimination, safety and security, accountability, transparency and explainability, and the protection of human values. Companies such as Google and Facebook have made their ethics principles public and begun sharing insights on how they are dealing with critical ethical issues in technology today.

The Focus On Ethical Artificial Intelligence

Corporate efforts revolve around three major areas:

(a) Establishing principles, policies, guidelines, checklists, and focus teams to deal with AI ethics, including appointing responsible-AI leaders and product managers;

(b) Conducting research to understand and find solutions for critical ethics issues (leveraging or collaborating with academic and scholarly support where required) and periodically publishing or sharing updates on these efforts and research outcomes; and

(c) Aligning strategy and initiatives with the stated AI ethics principles.

These efforts, while commendable, are minimal. Many are led by a handful of individuals or small groups, definitions of fairness and responsibility remain fluid and evolving, and actions target only the most visible challenges. They do not address conflicting approaches within the organization. A significant gap remains: the collaborative creation of a culture that stands for the principles.

The Omission

In AI ethics, the challenges revealed so far are fewer than those that exist. When Joy Buolamwini gave her TED Talk in 2017, organizations that use facial recognition began to reconsider their products and the bias inherently embedded in them. Algorithmic bias does not end there; it continues as more models are developed, more data is annotated, and more use cases are identified. As a society, we are inherently biased and only partially attempting to correct for it in some areas.
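One reason the challenge persists is that bias has to be re-measured every time a model or dataset changes. As a minimal sketch, the Python snippet below computes a demographic parity gap, i.e. the difference in positive-decision rates between demographic groups, for a hypothetical model's outputs; the group labels, predictions, and the "flag for review" interpretation are illustrative assumptions, not drawn from any real system.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical outputs from a face-matching model, re-checked after each retraining.
preds = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = "match accepted"
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)          # {'A': 0.75, 'B': 0.25}
print("gap =", gap)   # 0.5 -> a large disparity worth flagging for review
```

A check like this is only meaningful if it is run again whenever the model, the training data, or the use case changes, which is exactly why one-off audits are not enough.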

Culture cannot be built with a set of tasks; organizations that have made their ethical artificial intelligence principles part of their purpose must consider how to move forward. While policies and principles are an excellent place to start, an enabling culture is longer-lasting, with people sharing a mission and voluntarily aligning to responsible behaviour. For stakeholders to work collaboratively on this, several factors need to be aligned: beliefs (an acceptance that something exists or is true without proof), perceptions (the way something is understood or regarded), identity (the characteristics determining who or what someone or something is), imagery (visual symbolism), judgement (a conclusion or opinion), and emotion (an instinctive or intuitive feeling).

A Culture-Based Approach To AI Ethics

None of these factors is based solely on facts. Each has an independent or interdependent impact on our thoughts and actions. Consider the following key elements and how they can be influenced to drive a cultural shift:

Develop A Broader Emotional Response To The Ethics Principles

Emotions are powerful messengers and influencers of the values that corporations uphold. At a macro level, they enable effective communication. At an individual level, ethical artificial intelligence inspires pride and a sense of purpose, making many people carriers of such emotion towards the principles. Emotion can be developed through structured communication and engagement with stories, life experiences, and efforts to help society evolve away from patriarchy. Emotion produces a positive neural response when actions and steps move toward the stated AI ethics principles, and a negative neural response when those principles are violated, which serves as a deterrent.

Instill Organizational Beliefs About The Importance Of Ethical Principles

Beliefs and perceptions are important because employees are not always aware of how deeply the organization believes in its ethical artificial intelligence principles. Instilling shared beliefs and perceptions requires a strategic approach: shaping the business model in alignment with the purpose, aligning leadership across levels, designing communications, and demonstrating expected behaviour. For example, if stakeholders believe that an organization’s efforts to address discrimination are limited and untrustworthy, the preceding steps will have little impact on them.

If an organization expects its employees and stakeholders to behave in a certain way, it must pay close attention to instilling those beliefs. This can be accomplished by incorporating ethical principles into employee and stakeholder goals and objectives, or by ensuring that ethical principles are a critical strategic discussion point among stakeholders. Martin Fishbein and Icek Ajzen note in their ‘Theory of Reasoned Action’ that the intention to perform a behaviour precedes the actual conduct, and that such intention results from the belief that performing the behaviour will lead to specific outcomes. These beliefs, in turn, help organizations create an environment in which people can raise their voices in support of the values and principles the organization should stand for, thereby steering the organization’s direction.

Maintain A Consistent Level Of Enrichment Of Beliefs And Emotions

Instilling beliefs and emotions will have little impact unless such efforts are consistent. Being consistent in this context entails not only doing the same thing over and over but also enriching the efforts with each attempt: for example, finding new ways to engage employees and stakeholders, or adapting storytelling to relate to real-world insights and events.

In his study ‘On the Origin of Shared Beliefs (and Corporate Culture),’ Eric Van den Steen asserts that people prefer to work with others who share their beliefs and assumptions because such others ‘will do the right thing,’ and that ideas evolve through shared learning. This approach must be extended to new hires, including lateral hires, to third parties and business partners, and, most importantly, to the board and senior management.

Conclusion

In some ways, remaining silent on ethical artificial intelligence issues may reflect poorly on an organization, as Gebru’s case shows. This is not about semantic differences in how people communicate within and outside organizations. Instilling shared beliefs, enabling positive perceptions, and creating a compelling emotion towards ethical artificial intelligence are imperative for the brand and the business to thrive, because the consequences of acting inconsistently with stated principles and values can be disastrous. It is therefore necessary to take a holistic perspective on AI ethics and to collaboratively build a responsible AI ethics culture by instilling these values among stakeholders across the spectrum.

If you’d like to learn more about how your company can use AI ethics to boost growth, please contact the ONPASSIVE team for more information.