The rise (or not) of AI ethics officers


Four years after the World Economic Forum (WEF) called for chief AI ethics officers, 79% of executives agree that AI ethics is important to their enterprise-wide AI strategy.

Outside large technology vendors, the role is not well established. Is centralising this responsibility the best option, or should organisations look at other governance models? And what will the person in charge of AI ethics actually be doing?

For starters, many enterprises are reluctant to call it ethics. Forrester vice president Brandon Purcell says: “Ethics is a word that can connote morality or a set of norms, and multinational companies often deal with many different cultures.” In the US, the number of LinkedIn job postings for “responsible AI architects” was up 10% year on year by April 2025.

But the most important thing organisations are looking for, Purcell says, is AI governance. “Some companies create a role for an AI governance lead, while others see it as a shared responsibility among everyone who touches the AI value chain.” He sees roles like AI governance lead or risk officer as ensuring “accountability” for AI outputs and impact. Bola Rotibi, chief of enterprise research at CCS Insight, says that whatever the title, “the role is steeped in the latest regulations, insights, and trends – they are going to industry discussions, they’re the home of all this knowledge around AI ethics.”

Nor is AI the only topic: data ethics also matters, and the ethics of spatial computing could one day matter more than AI ethics. For a strategic hire, a broader umbrella – digital ethics – may be more appealing.

Protecting data is not enough

EfficientEther founder Ryan Mangan believes a dedicated AI ethics officer remains a unicorn. “Even cyber-security struggles to get a board-level position, so until an ethics lead lands in the C-suite, with real veto power, the title risks being just another mid-level badge, more myth than mandate.”

According to a recent survey by Dastra, many organisations (51%) believe that AI compliance is the responsibility of the data protection officer (DPO). However, Dastra co-founder Jerome de Mercey believes the role should be expanded. The most important question in AI is “What is the purpose, and how do I manage risks?”

Both roles involve technical and regulatory questions, communication across the organisation, and delivering strong governance. De Mercey believes that the General Data Protection Regulation (GDPR)’s concept of fundamental rights also plays a key role in AI ethics. “The economic and societal risks are always [pertinent] since there are people who have personal data, and DPOs know how to assess this type of risk,” he says. In most cases, though, ethical oversight falls under someone else’s responsibilities, usually in data governance or legal, with limited influence. That works up to a point, but the risks become harder to manage as AI deployments grow.

Purcell argues that DPOs are unlikely to possess enough expertise in AI or data science. “Of course, there is no AI without data. Today’s AI models have been pre-trained using vast amounts of data that are not contained within the four walls of a company. [They may not know the right questions to ask about] the data used to train these models, how they were evaluated, the intended uses, and the limitations and vulnerabilities of the model.”

He also notes that data science expertise is not enough. “If we define justice as ‘the best candidate gets the job,’ that’s great. But we also know there are all kinds of problems with the data that is used to determine who’s the most qualified. We could look at the distribution and acceptance rates of different types of applicants given an algorithm.”

The AI ethics lead wrestles with deeper questions: Is this decision fair? Is it explainable? Does it reinforce or reduce inequality?

Boinodiris asks further questions: “Does the application of AI align with our company values? Who could be adversely impacted? Do we understand the context in which the data is being used to create this AI? Was it collected with consent? Were we clear about how this AI is to be used? Are we transparent?”

Asking what human values AI should reflect is a reminder that the role requires legal, social science, data science, and ethics expertise.

“Responsible AI teams are made up of lawyers, researchers, or psychologists. The responsible AI team does not always have someone who is certified in AI ethics,” says Marisa Zalabak, co-founder of the Global Alliance for Digital Education and Sustainability.

There are more than 250 ethical AI standards, with another 750 in the works, so whoever takes on the role will need training. While organisations build their own resources, Zalabak recommends the Center for AI and Digital Policy, whose training covers more than just “the two things people consider when they think about AI ethics – data privacy and bias – because there are a wide range of things including multiple psychosocial effects”.

The ability to say no

Although they have access to decision-makers, neither architects nor DPOs are senior enough to have sufficient impact or to gain visibility of new projects early on. AI ethics must be considered at the design phase.

Polavarapu says: “The role should be with executive leadership, reporting to the CEO, risk committee, or the board, to pause or recalibrate models that threaten fairness or safety.” Purcell agrees that a responsible AI leader should be at vice-president level or higher. If data, analytics, and AI are owned by the CIO, then they would sit within that organisation.

They need visibility as well as authority. Purcell says that someone involved in the conception of an AI project can determine the appropriate level of responsibility for the use case, which may be high-risk and consequential.

They are responsible for bringing in additional stakeholders who will be affected, to identify potential harms; for ensuring that best practices are followed in the development of the systems, including monitoring and observability; and for having a voice in the final evaluation of the system – does it meet the requirements set at the beginning?

This will involve bringing additional stakeholders from diverse backgrounds and perspectives to test the AI system’s concept and identify where it might go wrong.

Rotibi says the role is similar to other newer officers, such as ESG officers and heads of sustainability, who are responsible for keeping up with the regulations surrounding their capability. “The AI ethics officer, like any other officer, should be part of a governing board that looks at the overall posture of the company, whether it’s around data privacy or AI, and asks, ‘What is the exposure? What are the vulnerabilities of the organisation?’” Rotibi believes this needs to be structural. “You give them governance and escalation-channel authority, and you give them the ability to do impact assessments so that there is substance behind what they say.”

Boinodiris also agrees that without a seat at the table, progress will be impossible. This is why Buytendijk believes a digital ethics board should not be a steering group, but rather a board chaired by a digital ethics officer. “There should be no barriers for line or project managers to submit their ethical dilemmas.” If it is a steering group, project managers and line managers lose control of their project, and that is a barrier.

Doing good by doing well

Despite this, AI ethics officers will not be able to stop widespread trends that have ethical implications, such as agentic AI, which automates workflows and may reduce the need for staff.

The tensions are evident in a recent NTT Data survey: 75% of respondents say their organisation’s AI ambitions conflict with its corporate sustainability goals. A third of executives believe responsibility is more important than innovation, another third place innovation above safety, and the final third weigh both equally.

The answer may be to view AI governance and ethics as a proactive way of generating long-term value, rather than as a cost incurred to avoid losses – of reputation, trust, customers, or even money, if fines result.

Boinodiris says that responsible AI is not a barrier to profits, but rather an accelerator of innovation. She compares it to the guardrails that let you drive fast, and safely, on a racetrack. “If you embed strong governance from the start, then you create a framework that allows you to scale responsibly and with confidence.” Companies that embrace AI ethics audits report an ROI more than twice as high as those that do not. The Center for Democracy & Technology report on assessing AI is a comprehensive look at how to evaluate projects in order to achieve these kinds of returns.

“By embedding strong governance at the beginning, you can create a framework that allows you to scale responsibly and confidently”
Phaedra Boinodiris, IBM Consulting

A recent ROI of AI ethics report from the Digital Economist builds on tools such as IBM and Notre Dame’s Holistic Return on Ethics Framework and Rolls-Royce’s Aletheia Framework AI ethics checklist. Instead of treating ethical AI as a cost, it presents it as a sophisticated financial strategy that can generate substantial economic returns.

Lead author Zalabak describes the report as “the right info for someone who doesn’t care about ethics, but is interested in the business case”. She describes AI ethics as a “huge opportunity for people to be amazed at the exponential potential for good”. Anush Newman, CEO of JMAN Group, says that a clear ethical AI framework can make a company more attractive and less risky to invest in: a robust data strategy that includes ethical considerations is no longer a “nice-to-have”, but a necessity. AI regulations are still patchy, but standards may help. Polavarapu says that many enterprise customers are now looking for verifiable controls, such as ISO/IEC 42001, which certifies the effectiveness of an artificial intelligence management system.

The conversation is moving away from staying on the right side of regulations such as the EU AI Act, and towards embedding AI governance throughout product lifecycles. Grandi says that UK firms are looking to the AI Opportunities Action Plan (also known as the AI Playbook) for guidance, but still need the internal clarity that an AI ethics officer can bring. Purcell recommends aligning AI systems to their intended outcomes and company values. “AI alignment is not just about doing the right thing. It’s also about determining if we are meeting our business objectives with AI. This has a material effect on a company’s profitability. A good AI ethics officer is someone who can demonstrate that alignment with business goals also means being responsible, doing what’s right, and putting appropriate guardrails in place.”

Effective AI governance requires principles, such as fairness and transparency; policies; and practices to ensure systems follow those policies and deliver on the principles. Many companies have not yet defined their principles. Purcell warns that AI will take care of articulating your company’s values if you don’t. “That’s the reason you need a chief AI ethics officer to codify your company’s values and principles.”

If you need an incentive to encourage the kind of cross-functional collaboration that Purcell admits is lacking in most large enterprises, he predicts that at least one organisation will experience a major negative outcome, such as increased costs from “an agentic system that has some level of autonomy that goes off the rails”, over the next year.

www.aiobserver.co
