
How to Avoid Ethical Red Flags when Working on AI Projects


I’ve been immersed in AI ethics issues for more than a decade and have seen first-hand how the field has evolved. A growing number of engineers now develop AI solutions while navigating complex ethical questions, and responsible AI deployment requires more than technical expertise: it also demands a nuanced grasp of the ethical implications.

As IBM’s AI Ethics global leader, I have observed a significant change in the way AI engineers work. They can no longer simply talk to other AI engineers about how to build the technology; they must also engage with people who understand how their creations will affect the communities that use these services. IBM recognized several years ago that AI engineers needed to add extra steps, both technical and administrative, to their development process, so we created a guide with the tools needed to test for issues such as bias and privacy. Using those tools correctly is essential: there are several different definitions of fairness in AI, and determining which one applies requires consultation with the affected community and with clients.
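
To make the point about competing fairness definitions concrete, here is a minimal, library-free Python sketch (the data and decision scenario are synthetic illustrations, not IBM examples) comparing two common definitions: statistical parity and equal opportunity. Because the two metrics can disagree on the same predictions, choosing which definition applies is a decision for the affected community and the client, not the engineer alone.

```python
import numpy as np

# Synthetic scenario: 1 = favorable decision (e.g., a loan approval).
# Group "A" is the privileged group, group "B" the unprivileged one.
rng = np.random.default_rng(0)

group  = np.array(["A"] * 100 + ["B"] * 100)
y_true = np.concatenate([rng.binomial(1, 0.6, 100), rng.binomial(1, 0.4, 100)])
y_pred = np.concatenate([rng.binomial(1, 0.5, 100), rng.binomial(1, 0.5, 100)])

a, b = group == "A", group == "B"

def selection_rate(pred, mask):
    """Share of people in a group who receive the favorable decision."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Share of qualified people in a group who receive the favorable decision."""
    qualified = mask & (true == 1)
    return pred[qualified].mean()

# Definition 1: statistical parity -- equal selection rates across groups.
stat_parity_diff = selection_rate(y_pred, a) - selection_rate(y_pred, b)

# Definition 2: equal opportunity -- equal true-positive rates across groups.
equal_opp_diff = (true_positive_rate(y_true, y_pred, a)
                  - true_positive_rate(y_true, y_pred, b))

print(f"statistical parity difference: {stat_parity_diff:+.3f}")
print(f"equal opportunity difference:  {equal_opp_diff:+.3f}")
# The two numbers generally differ, so a model can look fair under one
# definition and unfair under the other; picking the definition is a
# stakeholder decision, not a purely technical one.
```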

Francesca Rossi, in her role as co-chair of IBM’s AI ethics committee, helps determine the company’s core principles and internal processes.

Education is a key component of this process. One team believed their project was free of bias because it did not include protected variables such as race or gender; they did not realize that other features, such as zip code, could serve as proxies for protected variables. Engineers sometimes assume a problem can be solved with technology alone. Software tools are helpful, but they are not the end of the story. The real challenge is learning how to communicate and collaborate effectively with diverse stakeholders.
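
One way to surface this kind of hidden proxy is to measure how much information each remaining feature carries about the excluded protected attribute. The sketch below is illustrative only; the data and column names are synthetic, and scikit-learn’s mutual-information estimator is used here as one possible tool, not as the method IBM’s guide prescribes.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

# Synthetic data: the protected attribute is excluded from the model's
# features, but zip_code is strongly associated with it.
rng = np.random.default_rng(1)
n = 1000
protected = rng.integers(0, 2, n)                    # excluded protected group label
zip_code = np.where(protected == 1,
                    rng.choice([94110, 94124], n),   # zips where group 1 mostly lives
                    rng.choice([94301, 94040], n))   # zips where group 0 mostly lives
income = rng.normal(50_000, 10_000, n)               # feature unrelated to the group

features = pd.DataFrame({
    "zip_code": pd.factorize(zip_code)[0],           # integer-encode the categorical zips
    "income": income,
})

# How much information does each remaining feature carry about the
# protected attribute that was dropped from the inputs?
mi = mutual_info_classif(features, protected,
                         discrete_features=[True, False], random_state=0)
for name, score in zip(features.columns, mi):
    print(f"{name:>8}: mutual information with protected attribute = {score:.3f}")
# A high score for zip_code signals a proxy: dropping race or gender from
# the inputs does not, by itself, remove the bias risk.
```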

Pressure to release new AI tools and products quickly can create tension with thorough ethical evaluation. That is why we established centralized AI ethics governance at IBM through an AI ethics council. Individual project teams are often pressed for time and have quarterly results to meet, which makes it hard for them to consider the broader impact on client trust or the company’s reputation, so principles and internal processes need to be centralized. Our clients, which are other companies, increasingly demand solutions that respect certain values. In some regions, the law now requires ethical considerations, and even major AI conferences require papers to address the ethical implications of the work, which encourages AI researchers to think about the impact of their research.

IBM began by developing tools focused on key issues such as privacy, explainability, and fairness. For each concern, we created open-source toolkits with code guidelines and tutorials to help engineers implement them effectively. As the technology advances, so do the ethical issues: with generative AI, we are concerned about the creation of violent or offensive content as well as hallucinations. IBM’s Granite models include safeguarding models that evaluate both input prompts and outputs for issues such as factuality and harmful material. These models are designed to meet both our own internal needs and the needs of our clients.
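
The article does not show the safeguarding models’ interfaces, so the sketch below only illustrates the general pattern of screening both the input prompt and the generated output before anything reaches the user; `screen_prompt`, `screen_output`, and `generate` are hypothetical placeholders, not IBM or Granite APIs.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

# Placeholder checkers -- in practice these would call dedicated
# safeguarding models that score harmfulness, factuality, and similar issues.
def screen_prompt(prompt: str) -> Verdict:
    blocked_terms = ["build a weapon"]          # toy rule for illustration
    if any(t in prompt.lower() for t in blocked_terms):
        return Verdict(False, "harmful request")
    return Verdict(True)

def screen_output(text: str) -> Verdict:
    if "i made that up" in text.lower():        # toy stand-in for a factuality check
        return Verdict(False, "possible hallucination")
    return Verdict(True)

def generate(prompt: str) -> str:
    return f"(model response to: {prompt})"     # stand-in for the actual model call

def guarded_generate(prompt: str) -> str:
    pre = screen_prompt(prompt)                 # check the input before generating
    if not pre.allowed:
        return f"Request declined: {pre.reason}"
    output = generate(prompt)
    post = screen_output(output)                # check the output before returning it
    if not post.allowed:
        return f"Response withheld: {post.reason}"
    return output

print(guarded_generate("Summarize our AI ethics review process"))
```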

Software tools are helpful, but they’re only the beginning. The real challenge is to learn how to communicate and work together effectively.

The governance structures of companies must be flexible enough to adapt to technological evolution. We constantly assess how new developments, such as generative AI and agentic AI, could increase or reduce certain risks. We also evaluate whether releasing models as open source introduces new risks and, if so, what safeguards are required.

We have an internal review procedure that can lead to modifications of AI solutions that raise ethical red flags. Our assessment goes beyond the technology’s properties, such as fairness and explainability, to include how it is deployed, because deployment can respect or undermine human dignity and agency. We conduct risk analyses for each use case of a technology, recognizing that understanding risk requires knowing the context in which the technology will operate. This approach is aligned with the European AI Act’s framework: it is not that machine learning or generative AI is inherently risky, but that particular usage scenarios can be high or low risk, and high-risk scenarios require additional scrutiny.
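
As a rough illustration of use-case-level risk analysis (a hypothetical rubric, not IBM’s internal procedure or the legal text of the EU AI Act), the same underlying model can land in very different risk tiers depending on where it is deployed:

```python
# Hypothetical rubric: the risk tier depends on the use case, not the model.
HIGH_RISK_CONTEXTS = {"hiring", "credit scoring", "medical triage", "law enforcement"}
LIMITED_RISK_CONTEXTS = {"customer support chatbot", "content recommendation"}

def risk_tier(use_case: str, affects_individual_rights: bool) -> str:
    """Classify a deployment context into a coarse risk tier."""
    if use_case in HIGH_RISK_CONTEXTS or affects_individual_rights:
        return "high risk: extra review, documentation, and human oversight required"
    if use_case in LIMITED_RISK_CONTEXTS:
        return "limited risk: transparency obligations (e.g., disclose AI use)"
    return "minimal risk: standard development process"

# The same underlying model, two very different sets of obligations:
print(risk_tier("hiring", affects_individual_rights=True))
print(risk_tier("internal document search", affects_individual_rights=False))
```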

In a rapidly evolving landscape, responsible AI requires constant vigilance, adaptability, and a commitment to ethical principles that put human well-being at the center of technological innovation.
