As businesses and organizations pick up the pace of AI adoption, big questions are being asked about business accountability. In this article, we talk with Dr. Karina Alexanyan, Director of Strategy at All Tech is Human, about how businesses are implementing Responsible AI practices and what the path ahead might hold.

What does Responsible AI mean to you?


It’s pretty straightforward, actually. It’s really about ensuring that our tools are developed with societal impact in mind and in line with the public interest. That might sound very obvious, but it’s not what is happening today.

I think, for many people, Responsible AI is about safety and mitigating potential harms. But I would argue that it’s about taking a much more proactive and systemic approach to ensure we are developing technologies in a way that actually benefits people and aligns with social values, including civil liberties and notions of fairness and democracy.

Are you seeing more businesses starting to take responsible and ethical AI seriously?

I would say that businesses are starting to understand that creating responsible products is not just about image. It can also help businesses anticipate and mitigate risk. It can help them get ahead of regulation. It creates a deeper and more authentic relationship with customers and employees. And all of these things contribute to the bottom line.

Some leaders are starting to really understand the return on investment behind responsible and ethical AI. They are moving to empower their workers to innovate responsibly, giving them very clear guidelines and encouraging them to speak up when they spot potential issues. And I think that is good progress where it is happening.

That being said, I do think that Responsible AI is experiencing the same narrative hump that ESG faced a decade ago. At first, businesses thought it would slow down production and impede profits. Now they recognize that sustainability is vital for our survival and that sustainability and profitability are not mutually exclusive. You can do good and do well at the same time. And I think that Responsible AI is moving in that direction.

What are some of your concerns about AI risks?

My concerns about AI are less about hypothetical long-term risks and more about current harms. I believe that if we improve our systems for recognizing and addressing the harms we are aware of today, we can provide useful groundwork towards addressing potential future dangers as well.

The most common example is bias. People are using AI to guide financial decision-making, healthcare choices, legal decisions, employment decisions and so on. But they really need to understand that bias is baked into every step. It is in how the data is collected, how it is used, how it is coded, how the algorithms are built and how the results are interpreted and applied.

Moreover, it’s a continuous cycle because technology and society are interconnected. We influence the tools we create, they influence society, and society then turns around and influences the next generation of tools. Our values and biases appear at every stage. Yet I also think there is a lot of good work going on in that space to educate users and to drive accountability and transparency in the models.

I would argue there are a few issues that are also important but less frequently discussed, such as exploitative human labor practices in the AI industry. We also don’t spend much time understanding and mitigating the energy and environmental costs of our technologies.

Are you worried about the human tendency to anthropomorphize AI?

I am increasingly worried about our tendency to anthropomorphize AI. We need to remember that this is a machine that is generating outputs based on math and an algorithm. It has no idea what those letters mean. But when we anthropomorphize AI, we start to imagine it as a conscious thing that could potentially turn against us.

I do see a very interesting divergence in the focus and narrative between AI safety people and AI ethics people. AI ethics people – like me – tend to set aside speculative scenarios like AI domination and focus instead on the immediate harms and impacts that we can actually address today.

That being said, I do think many organizations now see AI – generative AI in particular – as a big race. And I worry that some of the safeguards are being ignored as a result. There are certainly risks there.

What can organizations do to better mitigate the risks and leverage responsible AI as a value and impact driver?

One of the really impactful ways to develop responsible technologies is to diversify the workforce. That means not only a diversity of gender, ethnicities and backgrounds but also a diversity of capabilities – experts from the social sciences, philosophy and the humanities who think about the implications of technology.

Culture is also important. You need to provide your employees with a safe space where they can raise and discuss concerns about ethics and responsibility. You need to bring your ecosystem of customers, stakeholders, investors, partners, vendors and others along with you.

Most importantly, companies need to recognize that ethical AI is not simply a nice thing to do, but rather a fundamental requirement for future business success.


About Dr. Karina Alexanyan 

Dr. Karina Alexanyan is a social scientist with 15+ years of experience at the intersection of technology, media, education and social impact. Dr. Alexanyan works closely with academia, civil society and industry to ensure our technologies are aligned with the public interest. Her work helps organizations leverage emerging technologies for societal benefit, with a focus on issues related to respectful technology behavior and diversity, equity, and ethics in AI education and talent pipelines. Dr. Alexanyan is the Founder/CEO of Humanication.io, a Responsible Innovation/Ethics consulting firm, serves as Director of Strategy at All Tech is Human, a non-profit building out the Responsible Technology ecosystem, and is an Advisor to the Emergence program on Impact Entrepreneurship at Stanford University. She holds a Ph.D. in Communication from Columbia University.