Guardians of AI: Prathiba Krishna Of SAS UK & Ireland On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Prathiba Krishna.
Prathiba is an AI & Ethics Lead at SAS and a seasoned Data Scientist with a background in the finance industry. She holds a Master’s degree in Operational Research with Applied Statistics and Risk, and her passion lies in the varied applications of Machine Learning and AI techniques, and in how they help data scientists build better models and solutions. Skilled in data analysis and modelling, she uses SAS software and open-source tools to assess and address problems within enterprise organisations.
Prathiba is a dedicated advocate for Trustworthy AI, recognising its critical importance in ensuring the responsible development and deployment of AI systems. She works on initiatives that promote ethical AI practices, focusing on transparency, fairness, accountability and inclusivity.
Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to get to know you a bit better. Can you tell us a bit about your backstory and how you got started?
I completed a Master’s degree in Operational Research with Applied Statistics and Risk at Cardiff University, and my background is in the finance industry. My passion is seeing the varied applications of Machine Learning and AI techniques, and how they help data scientists build better models and solutions. I put that into practice at SAS by using our software to assess and address problems within enterprise organisations.
None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?
I’ve had the privilege of being supported and challenged by several managers throughout my career, and I’m truly grateful for each of them. Some pushed me in ways that were tough, but ultimately helped me grow, while others presented challenges that took me to the edge of my limits. I’m thankful to all of them for giving me the freedom and opportunities to develop as a Data Scientist.
Early in my career, I had the unique experience of presenting my Gradient Boosting Machine (GBM) models to a room full of stakeholders who took the time to evaluate my work. I greatly respect that they made the effort to listen and provide feedback — it was an important milestone for me.
However, if I were to single out one person I’m especially grateful to, it would be my current boss, Dr. Iain Brown. Iain has provided me with invaluable opportunities that have opened doors to the wider world. Through his guidance I’ve been able to engage with senior stakeholders and the C-Suite, participate in public speaking events, write informative articles around AI, lead several AI and ethics initiatives, and promote myself as a thought leader in the industry. All of these experiences have been pivotal in shaping my career and I wouldn’t be where I am today without Iain’s support.
Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
I believe there are three key character traits that have been instrumental to my success as a Data Scientist:
- Getting the Basics Right: My background in Statistics and Computer Science has been essential in shaping my approach to Data Science. When I transitioned from an analyst to a Data Scientist, I found it much easier to grasp the core concepts of Machine Learning. With a solid foundation, I was able to quickly understand algorithms and iterate on them, applying both theory and practice. This strong base gave me the confidence to experiment, make adjustments and troubleshoot effectively.
- Perseverance and Determination: In the early days of my career, I spent countless long hours fine-tuning models and working through complex challenges. There were many instances where I had to strike a balance between model accuracy and its practical validity. It wasn’t always straightforward, but I learned to push through the tough times, iterating and adapting until I found the right solution. This determination helped me overcome obstacles and keep progressing, even when the results weren’t immediate.
- The Art of Storytelling: I quickly realised that being a Data Scientist isn’t just about building advanced AI models — it’s also about communicating the results effectively to a wide range of audiences. In my early days, I understood that my ability to articulate complex models in a clear, engaging way was just as important as the technical work itself. Whether I was presenting to stakeholders, leadership or peers, storytelling allowed me to convey the value of my work and make it accessible to everyone, no matter their background. This skill has been essential in building trust and driving decisions based on the data I’ve worked so hard to develop.
Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?
One of the most exciting aspects of AI is its transformative potential across industries. In sectors such as healthcare, education and finance, AI is revolutionising operations by automating processes, improving decision-making and enhancing customer interactions.
In financial services specifically, AI enables real-time fraud detection, personalised customer service and data-driven strategies, fostering safer and more efficient financial ecosystems. These advancements not only optimise operations but also create tailored, secure experiences for customers.
Another exciting aspect is AI’s real-time problem-solving capability. AI systems can process massive amounts of data almost instantaneously, enabling quick detection of anomalies and suspicious patterns that traditional methods may overlook.
For instance, AI can detect fraudulent transactions by identifying unusual user behaviours or deviations from established trends. This capability allows financial institutions to respond swiftly, protecting both customers and themselves from significant financial harm while maintaining trust.
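The kind of behavioural anomaly detection described above can be sketched very simply. The following is an illustrative toy example, not a production fraud system or SAS tooling: it flags transactions whose z-score against a customer's spending history exceeds a threshold (the function name and the 3-sigma cutoff are my own choices).

```python
import statistics

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Flag transactions that deviate sharply from a user's spending history.

    A transaction is flagged when its z-score against the historical
    mean exceeds `threshold` standard deviations.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [amt for amt in new_amounts
            if stdev > 0 and abs(amt - mean) / stdev > threshold]

# A customer's typical transactions cluster around £40, so £900 stands out.
history = [35.0, 42.0, 38.0, 45.0, 40.0, 37.0]
print(flag_anomalies(history, [41.0, 900.0, 39.0]))  # [900.0]
```

Real fraud models use far richer features (merchant, location, velocity) and learned rather than fixed thresholds, but the principle is the same: score each event against an established behavioural baseline.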
Finally, the growing collaboration between regulators, governments, and businesses to create frameworks for AI development is promising. This cooperative approach ensures that innovation in AI is guided by societal values and remains aligned with standards. Such partnerships help safeguard against misuse while fostering a balanced environment where technology can thrive responsibly.
Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?
One major concern about the AI industry is the risk of bias in AI models. When AI systems are trained on biased or incomplete datasets, they can produce discriminatory outcomes, particularly in sensitive areas like credit scoring or hiring decisions.
To mitigate this risk, institutions must adopt fairness-aware machine learning techniques, conduct regular audits, and ensure diverse representation in training datasets. These efforts help minimise bias and promote equitable decision-making.
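One concrete starting point for the audits mentioned above is a group fairness metric. The sketch below, a minimal illustration of my own (not a SAS or standard-library API), computes per-group approval rates and the demographic parity gap between them; fairness-aware techniques then aim to shrink that gap.

```python
def demographic_parity_gap(outcomes, groups):
    """Compute the gap in positive-outcome rates between groups.

    outcomes: list of 0/1 model decisions (1 = approved)
    groups:   parallel list of group labels
    Returns per-group approval rates and the max-min gap.
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Group A is approved 75% of the time, group B only 25%: a 0.5 gap
# that a fairness audit would flag for investigation.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates, gap = demographic_parity_gap(outcomes, groups)
print(rates, gap)
```

A large gap does not by itself prove discrimination, but it is the kind of signal a regular audit surfaces so that humans can investigate the underlying data and features.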
Human intervention is equally important in reducing bias in AI models and supporting adaptation to the AI-driven landscape. Without robust oversight of data quality and the use of trustworthy AI, the risk of biased insights or inaccurate outcomes could undermine public trust.
Another concern is the lack of transparency in AI processes. Some AI models operate as ‘black boxes’, making it difficult for users and regulators to understand how decisions are made. This lack of transparency and explainability can erode trust in AI systems.
Addressing this issue requires prioritising transparency by developing interpretable AI models and openly communicating how these models function. Financial institutions, for example, must explain the algorithms behind fraud detection or loan approvals to foster customer confidence and regulatory compliance.
Lastly, data privacy and security remain critical challenges. AI systems often handle sensitive personal data, making them attractive targets for cyberattacks. Organisations must implement robust cybersecurity measures, comply with stringent data privacy regulations, and establish protocols for secure data handling and storage. These measures are essential to protect user data and maintain public trust in AI technologies.
As part of an AI-driven organisation, how do you embed trustworthy principles into your company’s overall vision and long-term strategy? What specific decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?
SAS, with its legacy and industry footprint, has always prioritised Trustworthy AI, embedding its principles into the core fabric of its AI strategy. The key Trustworthy AI principles that guide SAS include:
- Human-Centricity — Ensuring AI solutions augment human decision-making and improve societal outcomes.
- Robustness — Developing AI models that are resilient, reliable and capable of handling real-world complexities.
- Inclusivity — Designing AI systems that are fair, unbiased and accessible to diverse stakeholders.
- Accountability — Establishing clear governance structures to oversee AI decision-making.
- Privacy & Security — Implementing safeguards that protect sensitive data and maintain compliance with evolving regulations.
- Transparency — Enabling explainability at every stage of the AI and analytics lifecycle.
With the rapid evolution of Generative AI (GenAI) and the increasing adoption of advanced analytics, organisations must proactively prepare for the governance and ethical challenges of AI. SAS enables businesses to:
- Embed Ethical AI by Design — Integrate fairness, explainability and bias detection mechanisms throughout AI development, and leverage AI governance frameworks to align AI applications with ethical standards and regulatory requirements.
- Strategically Integrate Trustworthy AI — Ensure AI principles are embedded from ideation to deployment rather than as an afterthought, and provide tools and methodologies to help organisations operationalise responsible AI governance.
- Drive Responsible Innovation — Balance AI advancements with ethical considerations to mitigate risks, implement human-in-the-loop systems where necessary to maintain oversight, and focus on AI’s impact on society, ensuring it is aligned with business values and societal expectations.
- Enable Transparency at Every Stage of the Analytical Lifecycle — Offer explainable AI solutions that help users understand and trust AI-driven decisions, maintain auditability and traceability to ensure AI models comply with internal and external governance standards, and foster collaboration between data scientists, business leaders and regulators to build AI that is both innovative and accountable.
By embedding these principles into its AI strategy and solutions, SAS ensures the development of safe, transparent and responsible AI technologies that empower organisations to leverage AI with confidence and integrity.
Have you ever faced a challenging dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and responsibility?
During the development of a predictive AI model for a financial institution, it was discovered that the model exhibited bias in credit risk assessment, disproportionately affecting certain demographic groups. The challenge was to meet business objectives (accurate risk prediction and profitability) while ensuring the AI system was fair, explainable, and compliant with regulations.
To navigate this challenge, the first step was to identify the root cause of the bias. A comprehensive bias audit and root cause analysis were conducted to examine potential data imbalances and to pinpoint which features were contributing to the disparities in the model’s output.
Cross-functional teams, including data scientists, compliance officers, and domain experts, were engaged to validate the findings and address any potential ethical concerns. This collaborative effort was critical in understanding the broader implications of the model’s performance and its fairness.
Ethical and business trade-offs were carefully considered in the next phase. On the ethical side, reducing bias in the model could potentially impact its accuracy and profitability, particularly in the short term.
However, from a business perspective, deploying a biased model posed significant risks, including regulatory penalties, reputational damage, and the erosion of customer trust. Striking a balance between ethical responsibility and business goals was essential.
The team leveraged the SAS Viya AI platform to implement responsible AI adjustments. This platform provided the necessary tools for bias and fairness testing, as well as features that enhanced the model’s explainability. With these adjustments, users were able to better understand how credit decisions were made, increasing the transparency of the process.
To further ensure compliance with regulatory standards, such as GDPR and the Equal Credit Opportunity Act, the team engaged regulators early in the process. This proactive engagement helped demonstrate the company’s commitment to aligning with industry regulations.
In the long term, a monitoring framework was developed to ensure that ethical standards were maintained after the model’s deployment. The system incorporated a human-in-the-loop approach, allowing for manual reviews of edge cases that the AI system could not address autonomously.
To ensure that the principles of responsible AI were embedded in future projects, stakeholders, including executives and product teams, were educated on the importance of fairness and transparency in AI-driven decisions.
As a result of prioritising fairness alongside business objectives, the company achieved several key outcomes. Not only were regulatory risks and potential lawsuits avoided, but customer confidence in AI-driven credit decisions was also significantly improved.
Moreover, the company was able to create a blueprint for ethical AI governance that would guide future models, ensuring that responsibility remained at the core of their AI initiatives.
Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?
Keeping AI safe requires a multi-faceted approach. Firstly, transparency is key. By making AI algorithms interpretable and explainable, organisations can identify errors, biases, or unintended consequences. This also builds trust with customers and regulators.
Collaboration between businesses and regulators is another critical factor. Companies must work closely with policymakers to create adaptive regulations that address AI’s unique risks while promoting innovation. Rigorous testing of AI models in diverse and realistic scenarios can also help identify vulnerabilities and mitigate potential risks.
It is possible to build AI solutions safely by designing them with risk assessments, fairness testing, explainability and transparency at every stage of the analytics lifecycle. Strong AI governance and regulation must also be established and kept in step with global compliance standards. AI should not be a ‘deploy and forget’ technology; it must be continuously monitored for risks. End-to-end AI governance is needed alongside model monitoring capabilities, with a champion/challenger approach to track model drift.
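One common drift statistic behind this kind of monitoring is the population stability index (PSI), which compares a model's recent score distribution against its baseline. The sketch below is a minimal, illustrative implementation; the 10-bin setup and the 0.2 "significant drift" rule of thumb are common industry conventions, not SAS specifics.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Population stability index (PSI) between a baseline score
    distribution (expected) and a recent one (actual).

    A PSI above roughly 0.2 is a common rule of thumb for significant
    drift, prompting review of a challenger model.
    """
    lo, hi = min(expected), max(expected)

    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            # clamp each score into one of the baseline's bins
            idx = int((s - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[min(max(idx, 0), bins - 1)] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(scores), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]       # uniform scores at deployment
shifted = [0.5 + i / 200 for i in range(100)]  # scores drifting upwards
print(population_stability_index(baseline, baseline))  # 0.0
print(population_stability_index(baseline, shifted))
```

In a champion/challenger setup, a sustained PSI breach on the champion's scores is the trigger to evaluate whether the challenger should take over.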
By integrating all of these trustworthy AI principles into its platform, SAS enables organisations to develop safe, responsible, and future-proof AI systems while balancing innovation and compliance.
Despite huge advances, AI can still produce incorrect results if it is trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?
To reduce inaccuracies, organisations must start with high-quality, verified and diverse training data. Poor data quality often leads to unreliable outputs, so using curated datasets is essential.
Human oversight is another crucial component. Experts should review AI-generated results to validate their accuracy and provide corrective feedback when needed.
Implementing feedback loops allows organisations to continuously refine AI models based on real-world performance and user interactions. Developing explainable AI models that justify their predictions with clear reasoning further enhances trust and reliability.
Finally, adopting industry-wide regulatory standards can help establish benchmarks for AI accuracy, ensuring consistent and transparent outcomes across applications.
Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”?
- Transparency: Building trust through explainable AI systems is critical. For example, a bank that openly explains its fraud detection algorithms to regulators and customers fosters confidence in its technology.
- Accountability: Organisations must take responsibility for the outcomes of their AI systems. Mechanisms should be in place to address unintended consequences, such as revising a biased credit-scoring algorithm.
- Collaboration: Working with regulators, governments and other stakeholders ensures that AI is governed by adaptive policies that balance innovation with societal values.
- Bias mitigation: Proactively addressing biases in AI models through fairness-aware technologies and diverse datasets reduces the risk of discriminatory outcomes.
- Continuous monitoring: Regular audits and updates of AI models help ensure compliance with standards and technological advancements.
Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?
Over the next decade, I hope to see the establishment of standardised global AI governance frameworks. These frameworks should balance the need for innovation with responsible usage, providing clear guidelines for businesses while remaining flexible enough to adapt to evolving technologies. Such governance will encourage responsible AI development while mitigating risks and fostering public trust.
What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?
The biggest challenge will be managing the intersection of rapid technological advancements and regulatory lag. As AI evolves, outdated regulations may fail to address new risks, creating gaps in oversight. To address this, the industry must adopt agile policy-making processes, invest in interdisciplinary research, and encourage collaboration between stakeholders. By doing so, AI can continue to innovate responsibly while safeguarding societal interests.
You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger.
I would inspire a movement for ethical, inclusive AI. In a world where AI is shaping economies, healthcare, governance and daily life, this movement would ensure AI benefits everyone, not just the privileged few. It would focus on:
- AI Ethics & Trustworthiness — Embedding fairness, transparency and accountability into AI to prevent bias, misinformation, and harm.
- AI for Social Good — Leveraging AI to solve real-world challenges — climate change, poverty, healthcare access and education for underserved communities.
- AI Literacy & Accessibility — Educating the public about AI’s potential and risks, ensuring everyone, not just tech elites, understands and benefits from AI.
- AI & Human Rights Protection — Establishing global standards that prevent AI-driven surveillance, discrimination and loss of privacy.
With the rapid evolution of Generative AI and automation, the risk of unethical AI usage, misinformation, and job displacement is higher than ever. A movement like this would unite businesses, policymakers, researchers, and citizens to create AI that is innovative yet responsible, powerful yet ethical.
To make ethical AI governance a reality, several key actions are necessary. First, collaboration is crucial — tech companies, governments, and civil society must join forces to shape AI policies that are both effective and inclusive. This cooperative effort will ensure that AI is developed and deployed responsibly, with broad input from all sectors of society.
Second, education and awareness play a vital role. AI literacy should be made mainstream, empowering individuals with the knowledge to understand their rights in an increasingly AI-driven world. By fostering widespread awareness, we can ensure that people are equipped to navigate the complexities of AI technologies.
Third, transparency and accountability are essential in building trust in AI systems. These technologies must be explainable and subject to regular audits for fairness and safety. Only through clear, understandable processes can we ensure that AI systems are operating as intended and are free from biases or harmful impacts.
Finally, global policy advocacy is needed to push for AI laws that strike a balance between protecting human rights and encouraging responsible innovation. By advocating for strong, forward-thinking legislation, we can create a framework that not only safeguards individuals but also promotes the ethical development of AI technologies worldwide.
AI will define the future — this movement ensures it’s a future where technology serves humanity, not the other way around.
How can our readers follow your work online?
You can find me on LinkedIn, and keep up to date with SAS on our website and LinkedIn.
Thank you so much for joining us. This was very inspirational.
Guardians of AI: Prathiba Krishna Of SAS UK & Ireland On How AI Leaders Are Keeping AI Safe… was originally published in Authority Magazine on Medium.