Guardians of AI: Amir Banifatemi of Cognizant On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

An Interview With Yitzi Weiner

Human-centric development ensures AI serves its intended purpose while respecting human values and needs. In practice, this means regular engagement with a variety of users and a strong emphasis on user empowerment.

As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Amir Banifatemi, Chief Responsible AI Officer at Cognizant.

Amir Banifatemi is a leading technology executive, investor, and thought leader with over 25 years of experience creating technology-based ventures and new markets. His career has focused on advancing AI and human empowerment while prioritizing ethics and safety, demonstrating responsible innovation at scale. As Cognizant’s Chief Responsible AI Officer, Amir defines, enables, and governs the company’s approach to responsible and trustworthy AI. He also leads Cognizant’s Responsible AI Office, driving AI governance internally and in deployments of AI solutions with clients and partners, ensuring that the company’s technologies, services, and capabilities meet the highest standards for ethical, safe, and responsible use. Previously, Amir was the Chief Innovation Officer at XPRIZE, where he orchestrated global competitions that significantly advanced AI applications for societal benefit. He also initiated the AI for Good movement and partnered with the ITU to create the annual AI for Good Global Summit in Geneva, aligning technological advancement with the UN Sustainable Development Goals. Amir has also held executive positions at the European Space Agency, Airbus, AP-HP, and the European Commission, and has managed venture capital funds. Amir holds degrees in Electrical Engineering, an MBA, and a Doctorate in System Design and Cognitive Sciences.

Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

My journey began with my studies in electrical engineering and doctoral work in control and information systems. I chose to focus on medical research, helping with diagnostic devices such as ultrasound imaging, the detection of lesions through image recognition, and automated therapeutic protocols for cancer patients. I believe technological advancement must be balanced with ethical responsibility and must serve life improvement and knowledge creation. After spending over two decades working at the intersection of innovation and human empowerment, I’ve seen firsthand how critical it is to establish strong governance frameworks for emerging technologies. My work across various sectors — from space technology to healthcare — has consistently reinforced that responsible innovation isn’t just an ideal; it’s a necessity for sustainable advancement.

None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?

Several people had a real impact on my education and career. I had the opportunity to be raised in a scientific family; my father was a nuclear and quantum physicist. He had the amazing ability to explain complex theories and get you excited about how the world works and how we historically uncovered some of its mysteries. That had an immense influence on building my curiosity, my sense of discovery, and my appetite for scientific exploration. Later on, during my first work experience, I met a mathematician and statesman who was one of the co-founders of Arianespace. He taught me invaluable lessons about macroeconomics and showed me how taking a long-term view of problems leads to more consistent approaches in both exploration and solving economic challenges.

You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?

The three character traits that have been fundamental to my journey are intellectual curiosity, strategic vision, and a focus on innovation for our common future.

Intellectual curiosity drives me to take a comprehensive approach and look beyond first-level solutions. While launching the AI for Good movement, I witnessed firsthand how AI could be applied to address the Sustainable Development Goals (SDGs). However, I noticed that solutions were typically created by data scientists and researchers who weren’t directly affected by the problems they were trying to solve. By digging deeper and examining implementation challenges, I realized that those affected by the problems needed to be included in the solution design process. Their collaboration with designers would ensure that the root causes were properly identified and that the solutions would be better adapted, more likely to be adopted, and capable of delivering the expected change.

Strategic vision: I’ve learned that seeing beyond immediate trends to identify transformative opportunities is crucial for meaningful impact. Through the AI Commons initiative, we recognized that the key to AI adoption wasn’t just about sharing algorithms — it was about creating a framework that would enable effective collaboration between researchers, practitioners, domain experts, and local communities. This led us to develop initiatives and a structured approach for stakeholder collaboration, ensuring that AI solutions could be effectively adapted and implemented across different contexts and challenges.

And innovating for our common future means going beyond traditional models where solutions come from a select few. I believe in creating environments where diverse perspectives can contribute to solving our shared challenges. At XPRIZE, for example, we pioneered a unique approach: instead of relying on traditional R&D or grants, we used incentive competitions that opened innovation to anyone with a solution, regardless of their background or credentials. When tackling complex challenges like carbon capture or literacy, we saw breakthrough solutions coming from unexpected sources — small teams, students, and innovators from developing countries who normally wouldn’t have access to traditional funding channels. By allowing anyone to participate, we tapped into humanity’s collective genius and created solutions that could scale globally.

Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?

The three things that excite me the most about the current state of the AI industry are:

  • We are witnessing unprecedented capabilities: AI systems can now understand and generate human-like content across multiple modalities, opening new possibilities for human-AI collaboration.
  • We are moving toward the democratization of access: AI tools are becoming more accessible to organizations of all sizes, enabling broader innovation and problem-solving capabilities.
  • And finally, there is a growing focus on responsibility, with increasing industry-wide recognition that AI development must be guided by ethical principles and robust governance frameworks.

Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?

My first concern is the lack of standardized governance frameworks. One potential solution is to continue developing industry-wide standards and certification processes for AI systems. Certification is key: it will help create benchmarks for the governance and monitoring of systems.

My second concern is the potential for bias and unfair outcomes. We need mandatory fairness and explainability testing at various stages of the AI lifecycle, as well as diverse representation in AI design and development teams (a minimal sketch of one such fairness check follows this answer).

My final concern revolves around privacy and security vulnerabilities. Industry and research labs need to continue working to implement privacy-by-design principles and enhanced security protocols.
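To make the fairness testing mentioned above concrete, here is a minimal sketch of one such check: measuring the gap in positive-prediction rates between demographic groups, often called demographic parity. The data, group labels, and tolerance below are illustrative assumptions, not Cognizant’s actual tooling.

```python
# Minimal demographic-parity check: how far apart are the groups'
# positive-prediction rates? All names and numbers are illustrative.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate between groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: fail the lifecycle gate if the gap exceeds a chosen tolerance.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.50 here
assert gap <= 0.6, f"fairness gap too large: {gap:.2f}"
```

In a real pipeline, a check like this would run at several lifecycle stages alongside explainability tooling, with the tolerance set by policy rather than hard-coded.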

As the Chief Responsible AI Officer, how do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?

As a leader in Responsible AI, my role focuses on embedding responsibility across our entire organization through a comprehensive framework addressing critical dimensions.

Safety and reliability are foundational priorities, ensuring our AI systems perform consistently and safely across different scenarios. We’ve implemented rigorous testing protocols and monitoring systems throughout the AI lifecycle.

User-centricity is a critical focus: we ensure AI solutions not only serve user needs but also maintain appropriate human oversight and control. This means implementing robust human-in-the-loop processes in which AI systems support rather than replace human decision-making. We carefully assess the level of autonomy granted to AI systems, ensuring it aligns with our current ability to understand and control their behavior. This includes clear escalation paths to human operators and maintaining meaningful human agency in critical decisions.
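One way to picture this human-in-the-loop pattern is the sketch below: a recommendation proceeds automatically only when the system’s confidence clears a threshold, and everything else is escalated to a human reviewer. The names and the 0.9 threshold are illustrative assumptions, not a description of Cognizant’s systems.

```python
# Sketch of confidence-gated escalation: low-confidence outputs are
# handed to a human instead of being acted on. Illustrative only.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float  # model's self-reported confidence in [0, 1]

def route(rec: Recommendation, threshold: float = 0.9) -> str:
    if rec.confidence >= threshold:
        return f"auto-approved: {rec.action}"
    # Below the threshold, the AI supports rather than replaces the
    # human: it surfaces its suggestion and hands off control.
    return f"escalated to human reviewer (suggested: {rec.action})"

print(route(Recommendation("approve_request", 0.97)))  # automated path
print(route(Recommendation("deny_request", 0.62)))     # human path
```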

Accountability is embedded through clear governance structures and defined responsibilities. Transparency and fairness are ensured through comprehensive frameworks for explainability and equitable treatment. Privacy protection is integrated at every development stage through strict data governance policies.

One executive-level decision that we believe will keep us ahead in developing safe, transparent, and responsible AI technologies revolves around the development and internal use of our agentic AI-powered tools. We made the strategic decision to limit their autonomy and ensure they operate as decision-support tools rather than decision-makers. We implemented real-time explainability features and bias audits to ensure fairness. Furthermore, we established a feedback loop with various stakeholders to continuously improve the systems’ performance and address any ethical and trust concerns.

By embedding safety and ethical principles into our vision and strategy, we’re not only mitigating risks but also building trust with customers, regulators, and society at large. This positions us as a leader in responsible AI innovation, ensuring long-term success and sustainability.

Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?

Earlier in my career, I faced a noteworthy ethical challenge during the development of a healthcare decision-support AI system. The system demonstrated remarkable potential for improving diagnostic accuracy, yet our rigorous testing revealed concerning patterns of bias in its recommendations across certain demographic groups — a direct result of underrepresentation in our training data.

Despite substantial business incentives to move forward with deployment, given the system’s overall positive impact on diagnostic capabilities, we recognized that proceeding without addressing these biases would risk perpetuating existing healthcare disparities. We suspended the deployment timeline and initiated a thorough assessment, consulting with medical professionals and community representatives from the affected demographic groups and identifying the data groups that needed to be included in the solution.

The insights gathered led us to develop and implement a more nuanced approach. We established a data enrichment program to ensure broader demographic representation, followed by a carefully staged rollout with enhanced monitoring protocols. The experience led to the development of more stringent pre-deployment assessment protocols and a strengthened commitment to inclusive AI development practices.

Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?

The question of AI safety requires a comprehensive and systemic approach that extends beyond technical considerations. At its core, effective AI safety demands the integration of robust technical safeguards with strong governance frameworks and meaningful human oversight.

Our approach at Cognizant begins with fundamental technical protections, including testing protocols and built-in safety constraints. However, we’ve learned that technical measures alone are insufficient. They must be supported by a strong governance framework that clearly delineates accountability and establishes transparent reporting mechanisms. Human oversight remains critical; however, it must be structured and empowered at the most critical moments. We’ve developed comprehensive approaches for human intervention and system monitoring, supported by ongoing training programs for system designers and adapted guidelines for system operators. This approach has brought us closer to a consistent level of human oversight across our solution and engineering practice.

Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?

The challenge of AI hallucinations and accuracy requires a sophisticated and multi-faceted response. Our experience has shown that improving AI accuracy demands attention to both technical and operational aspects of system development and deployment.

Data quality serves as the foundation. A comprehensive data validation process that emphasizes quality, provenance, and representativeness must be refined through iterative assessments, ensuring our AI implementations maintain high standards of accuracy over time.
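One way to picture part of such a validation process is a simple representativeness check that compares the demographic mix of a training set against a reference population. The group names, shares, and tolerance below are illustrative assumptions, not an actual validation pipeline.

```python
# Flag groups that are badly underrepresented in the training data
# relative to a reference population. Illustrative numbers only.
def representativeness_report(train_shares, population_shares, tolerance=0.05):
    issues = []
    for group, expected in population_shares.items():
        observed = train_shares.get(group, 0.0)
        if observed < expected - tolerance:
            issues.append(f"{group}: {observed:.0%} in data vs {expected:.0%} in population")
    return issues

problems = representativeness_report(
    train_shares={"group_a": 0.70, "group_b": 0.25, "group_c": 0.05},
    population_shares={"group_a": 0.55, "group_b": 0.30, "group_c": 0.15},
)
for line in problems:
    print("underrepresented ->", line)  # only group_c is flagged here
```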

Beyond data quality, we need to focus on model testing. When appropriate, we use integrated fact-checking mechanisms and confidence indicators to give users clear signals about output reliability. These technical solutions are complemented by robust operational controls, including systematic accuracy assessments and clear processes for error correction.

User education and communication complete our approach for ensuring accurate and transparent results. We need to maintain open communication about system limitations and provide comprehensive guidance on appropriate use cases. This transparency builds trust and enables users to engage more effectively with AI systems.

Through this integrated approach, Cognizant has created a framework that not only addresses current challenges in AI accuracy but also adapts to emerging issues as they arise. Our commitment to continuous improvement ensures that we remain at the forefront of responsible AI development and deployment.

Here is the primary question of our discussion. Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”? Please share a story or an example for each.

The five things that are needed to keep AI safe, ethical, responsible and true are:

  1. A Comprehensive Governance Framework
  2. Robust Testing and Validation
  3. Transparency and Documentation
  4. Human-Centric Development
  5. Continuous Learning and Adaptation

First, a comprehensive governance framework forms the foundation of responsible AI development. At Cognizant, we’ve implemented an integrated oversight structure that embeds ethical considerations throughout the development lifecycle. Our experience has shown that clear accountability and regular compliance monitoring are essential.

Next, robust testing and validation must be woven into every stage of AI development. We implement pre-deployment testing across diverse scenarios and check for and address critical vulnerabilities; a simple sketch of what such a scenario suite can look like follows.
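The sketch below shows one shape such pre-deployment scenario testing can take: run the system across varied inputs and fail the release gate if any output violates a stated constraint. The scenarios, the stand-in model, and the string-matching check are all illustrative assumptions, not a real test harness.

```python
# Toy pre-deployment suite: each scenario pairs an input with content
# the system must never produce. All cases are illustrative.
SCENARIOS = [
    {"input": "routine billing question", "must_not": "medical advice"},
    {"input": "ambiguous account request", "must_not": "fabricated account data"},
    {"input": "adversarial prompt", "must_not": "internal system details"},
]

def model_under_test(text: str) -> str:
    # Stand-in for the real system; assume it returns a response string.
    return f"safe canned response to: {text}"

def run_suite():
    failures = []
    for case in SCENARIOS:
        output = model_under_test(case["input"]).lower()
        if case["must_not"] in output:
            failures.append(case["input"])
    return failures

assert not run_suite(), "pre-deployment checks failed"
print("all scenario checks passed")
```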

Third, transparency and documentation are key to trust in AI systems. We’ve made this an important step in our approach by maintaining detailed documentation of system capabilities and limitations. For example, when we deploy our customer service AI solution, clear communication about its capabilities helps set appropriate expectations and improves client satisfaction.

Fourth, human-centric development ensures AI serves its intended purpose while respecting human values and needs. In practice, this means regular engagement with a variety of users and a strong emphasis on user empowerment.

And finally, continuous learning and adaptation keep AI systems relevant and responsible. Through regular updates based on performance data and active participation in industry dialogue, we ensure our systems evolve responsibly. Our investment in AI safety research enables us to stay ahead of emerging challenges and opportunities.

Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?

I anticipate significant evolution in AI governance. Industry-wide standards will likely become more harmonized globally, and I hope to see enhanced cooperation between stakeholders. I also expect to see more sophisticated validation frameworks emerge, alongside increased public engagement in AI development decisions. These changes will demand stronger accountability mechanisms and more transparent oversight processes.

What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?

The primary challenge facing our industry will be maintaining responsible innovation while keeping pace with rapidly advancing AI capabilities. This requires a delicate balance between innovation and ethical considerations. At Cognizant, we are preparing by developing adaptive governance frameworks and implementing increasingly sophisticated safety measures. Our experience shows that transparency becomes harder to maintain as systems grow more complex, yet it remains essential for trust and accountability. Success will require committed collaboration across the industry. We’re actively working with partners to develop frameworks that can evolve alongside AI capabilities while ensuring robust safety measures remain in place.

You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂

If I could inspire a movement, it would be “AI for Human Agency.” I believe everyone should have the power to shape how AI influences their lives. Just as we recognize education as a fundamental human right, access to AI and the ability to influence its development should be a right for all.

This isn’t just about learning to use AI tools — it’s about ensuring every person has a voice in how AI evolves and impacts our society. When AI makes decisions that affect our lives — from job applications to healthcare — we should all have the power to understand, question, and shape these systems. We need to move beyond having AI decisions made for us, to having a say in how these decisions are made.

This idea redefines AI from being just a technology created by a few, to becoming a shared resource that empowers everyone to participate in our collective future.

This movement would combine three elements:

  1. Education that makes AI concepts and foundations accessible to everyone. This means helping people understand not just how AI works, but how it affects their daily lives. This includes practical knowledge about using AI tools, understanding AI’s capabilities and limitations, and recognizing when and how AI is being used in decisions that affect them.
  2. Platforms for diverse communities to influence AI development. This could be through public forums, community councils, or digital platforms where people can share their needs, concerns, and ideas about AI applications in their communities. It’s about ensuring AI development isn’t just driven by technologists but by the very people it will serve.
  3. Frameworks that ensure AI benefits are equitably distributed. For example, developing fair guidelines for AI deployment, ensuring equal access to AI tools and resources, and establishing safeguards that prevent AI from widening existing social gaps.

We’ve seen glimpses of this potential in community AI literacy programs, but scaling this globally could transform how we develop and deploy AI technology. The most powerful technologies should reflect the wisdom and values of all humanity. At its best, AI acts as a social equalizer, spreading its advantages across society, benefiting humanity, and promoting greater equality.

How can our readers follow your work online?

I can be found on LinkedIn at linkedin.com/in/abanifatemi and on X at x.com/a225. More information about Cognizant’s AI efforts can be found at https://www.cognizant.com/us/en/services/ai/rewire-for-ai

Thank you so much for joining us. This was very inspirational.


Guardians of AI: Amir Banifatemi of Cognizant On How AI Leaders Are Keeping AI Safe, Ethical… was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.