Guardians of AI: Alastair Parr Of Mitratech On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

Like humans, an AI’s past can determine its future. If our models are trained on heavily biased material, we can expect that bias to translate into heavily biased AI functions that continue to reinforce the echo chamber they were trained in. While the bias may be accurate in a statistical sense, those building AI tools need to review data feeds objectively to establish whether it is actually desirable in the model. A recent example is the coordinated ‘reverse review bombing’ of the venerable Angus Steakhouse in London, where locals flooded the chain with glowing, tongue-in-cheek reviews as a prank on tourists; a model trained on that manufactured sentiment would take it at face value.
As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Alastair Parr at Mitratech.
Alastair is the Executive Director of GRC Services at Mitratech and has spent decades building technology solutions that target the automation of complex risk and compliance activities. Alastair co-founded 3GRC, a third-party risk management company, where he oversaw the development of technology and services that scaled the identification and analysis of problem vendors. Prior to this, Alastair provided strategy for data loss prevention programs tracking user behavior against millions of business assets.
Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?
I have always been interested in scale problems, and the hardest ones to solve are processes involving people. The unpredictability of user behavior presents unique challenges that require both empathy for how people actually behave in a business and big-data analysis. I came from a background in data loss prevention and auditing, which, when done effectively, means finding streamlined but secure ways for organizational processes to function.
None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?
Rather than a specific individual, I’m grateful to the various CISOs and practitioners who highlighted the value of balancing audit and risk pragmatism with enablement. I recall one CISO, who shall remain nameless, of a very large audit consulting firm who treated every colleague, regardless of position, with respect, tolerance, and concern. While it may seem foundational, that approach removed barriers and let him really identify and solve internal problems. His organization went from being a business inhibitor to an enabler that brought in technology and processes that made a difference. He now oversees operations for a Fortune 500 company, which is a testament to his approach.
You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
The most important traits to me are empathy, pragmatism, and efficiency. I’ll be a happy man if I can continue developing my skills in those core areas.
Empathy is critical in any business setting. While often clustered under the term ‘teamwork,’ empathy to me means making a conscious effort to better understand and sympathize with the challenges that peers face. This lets us consider process choices and provide solutions that can actually make a difference. Being empathetic to a problem doesn’t have to mean you agree with it; it means you can apply context accordingly.
Pragmatism is the most important skill set for any auditor making recommendations to a business. Textbook examples of ‘best practice’ usually fall flat, as real-world nuances often make them unrealistic or inhibiting. By being pragmatic, we can consider alternatives that keep the business going while reducing the likelihood of the audience returning to old, bad behaviors.
Efficiency is more abstract, as it relies less on context. Tackling any business problem with an efficiency-driven mindset typically means accepting that perfection is the enemy of progress. Processes will always evolve, and incremental gains are in most cases more successful than sitting back and spending years building a magnum opus that is outmoded by the time it launches.
Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?
AI has had a level of mainstream scrutiny and focus that hasn’t been seen in technology for decades. While some conversations are aspirational, the things that excite me most are tangible and accessible today. Conversational AI, anomaly detection, and trend analysis specifically bring huge benefits to people having to sift through large, complex datasets to make business decisions.
While anomaly detection and trend analysis focus on analytical models at scale, finding the needles in the haystack, conversational AI brings these capabilities to everyone rather than just technology power users. Business intelligence can be captured by asking a simple conversational question rather than requiring a degree in statistical analysis. While this may not sound exciting for the average consumer, what it enables behind the scenes is cheaper, faster, more intelligent solutions, from AI-driven support functions to recommendations for the things we actually want to consume.
Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?
Most of the industry is rightly concerned about cognitive bias, hallucination, and broader security. The reality is that AI has an insatiable appetite for training data, in volumes far beyond the reach of human analysts to review and curate (short of delaying progress by spending years validating every piece of material).
The impact is twofold: our behavioral and potentially personal data is consumed, processed, and shared across multiple regions in unfamiliar and potentially nefarious ways, and human bias gets fed into our AI solutions, too. It is human nature to trust what a perceived source of authority says, so there is a huge risk of AI providing misleading guidance. Real-world examples include models advising researchers that churros make great surgical instruments because they are pliable and smell good. While that may be true, it seems unlikely.
Unfortunately, a degree of governance and transparency needs to be embedded into AI models to alleviate these concerns. Of course, this should be done pragmatically and not slow the progress of technological innovation, but meaningful decision-making should always include human validation of AI-driven guidance. This is still far faster than deriving the initial analysis manually.
How do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?
AI is very much a business enabler and, in the proper use case, can be vastly superior to teams of analysts and practitioners who manually do the same activity. We at Mitratech are focused on ensuring that training models remain ethical by restricting personal or sensitive data processing while also performing regular deep dive analysis into what the AI tools are recommending from a contextual and statistical bias perspective.
To achieve this, we have applied controls on where and how AI interacts with our large datasets and enabled customers to make their own decisions on how far down the AI rabbit hole they wish to go based on their own risk appetite. The reality is that AI has tremendous potential for third-party risk analysis, which involves continuously assessing thousands of third-party organizations for indicators of potential risk while minimizing the impact on everyone involved.
Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?
The path of least resistance for AI is simply linking to LLMs and training on local datasets without restrictions. That would have made us first to market, but it wouldn’t have made us the best. To protect sensitive data and provide transparency, we took the time to selectively isolate the information that could train an effective AI model and went through many cycles of iteration and improvement. While not the ‘easy button,’ this pays dividends in broader adoption and has proven to be the right approach in the long term.
Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?
Cognitive bias and hallucination are two areas where information returned by AI can be simply wrong or misleading. Beyond that, there is a real-world risk of AI lacking ethics and empathy, resulting in actual harm. Many conversational AI tools have already taken steps to provide guardrails on prompts and responses, even though this can be seen as reducing the quality and capability of the tools. Like any technology, guardrails that can feel ham-fisted on day one will evolve and become more sophisticated. Contextual filtering and processing of prompts and responses is key: the guardrails need to exist but should become less binary in how they respond to users.
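To make the idea of a less binary guardrail concrete, here is a minimal sketch in Python. Everything in it, from the topic weights and thresholds to the ‘verified professional’ context flag, is an illustrative assumption rather than a description of any real product; a production system would use trained classifiers, not keyword matching.

```python
from dataclasses import dataclass

@dataclass
class GuardrailDecision:
    action: str          # "allow", "allow_with_warning", or "human_review"
    risk_score: float
    reasons: list

# Illustrative topic weights; a real system would use trained classifiers.
RISKY_TOPICS = {"self harm": 0.9, "medical advice": 0.6, "legal advice": 0.5}

def assess_prompt(prompt: str, user_context: dict) -> GuardrailDecision:
    """Score a prompt in context instead of applying a binary block."""
    text = prompt.lower()
    score, reasons = 0.0, []
    for topic, weight in RISKY_TOPICS.items():
        if topic in text:
            score = max(score, weight)
            reasons.append(f"matched topic: {topic}")
    # Context can raise or lower risk: a verified clinician asking a
    # medical question is lower risk than an anonymous user.
    if user_context.get("verified_professional"):
        score *= 0.5
        reasons.append("adjusted: verified professional")
    if score >= 0.8:
        return GuardrailDecision("human_review", score, reasons)
    if score >= 0.5:
        return GuardrailDecision("allow_with_warning", score, reasons)
    return GuardrailDecision("allow", score, reasons)

print(assess_prompt("Can I get medical advice on dosage?", {}))
# -> allow_with_warning (score 0.6); the same prompt from a verified
#    professional would score 0.3 and be allowed outright.
```

The point of the graduated actions is exactly the evolution described above: the same risk signal can warn, escalate, or pass through depending on context, rather than returning a flat refusal.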
The worry about the potential for AI to harm humans has been widely discussed across technical, ethical, and regulatory spheres. There are a few measures that should be taken to ensure AI remains a safe and effective tool:
- Keeping Up With Regulations — As AI becomes more and more prominent, governments and international organizations continue to develop and enforce regulations to cover safety, ethics, and accountability.
- Transparency — AI systems should be designed to be explainable, so users and stakeholders alike can understand how decisions are made. Developers can do this by documenting the design, training data, and intended use cases for AI systems, ensuring transparency and reducing risks of misuse.
- Continuous Monitoring — This is something we are very used to in risk and compliance. AI systems should be regularly audited to detect and mitigate unintended consequences or biases; a minimal sketch of such an audit follows after this list.
- Limiting Use in High-Risk Applications — Disallowing the use of AI in areas where its deployment poses significant risks to human safety, such as autonomous weapons or systems that make life-altering decisions without human oversight.
- Human Oversight — Building on the previous point, maintaining human oversight in AI operations ensures humans have the ultimate authority to override AI decisions. It is important to note that AI works best with humans, not instead of them.
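To ground the Continuous Monitoring point, here is a minimal sketch of one common audit: checking whether a system’s decisions disproportionately disadvantage one group. The data shape and the four-fifths threshold are illustrative assumptions, not a description of any particular compliance regime.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_alerts(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below threshold x the best
    group's rate (the common 'four-fifths rule' heuristic)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparate_impact_alerts(sample))  # ['B']: 0.33 < 0.8 * 0.67
```

Run on a schedule against live decisions, a check like this turns “regularly audited” from a policy statement into an alert a human can act on.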
Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?
Training datasets unavoidably need a degree of human analysis through sampling and contextual review. As with any human, the inputs determine the outputs, and investing time upfront to curate what our fledgling AI brain is exposed to pays long-term dividends in minimizing bias and hallucination. While throwing as much data as possible at the training model may seem logical, it introduces the classic problem of garbage in, garbage out. A simple sampling approach is sketched below.
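As one hedged illustration of “human analysis through sampling,” this sketch draws a per-source random slice of records for human review before they reach the training set; the feed names, sample rate, and minimum are assumptions for the example.

```python
import random

def sample_for_review(records_by_source, rate=0.01, minimum=25, seed=42):
    """Draw a per-source random sample so every feed gets human review."""
    rng = random.Random(seed)
    review_queue = {}
    for source, records in records_by_source.items():
        # Sample a fixed fraction, but never fewer than `minimum` records
        # (or the whole feed, if it is smaller than that).
        k = min(len(records), max(minimum, int(len(records) * rate)))
        review_queue[source] = rng.sample(records, k)
    return review_queue

feeds = {"reviews": [f"r{i}" for i in range(10_000)],
         "support_tickets": [f"t{i}" for i in range(300)]}
queue = sample_for_review(feeds)
print({s: len(v) for s, v in queue.items()})
# -> {'reviews': 100, 'support_tickets': 25}
```

Sampling per source rather than across the whole corpus matters: a small but badly poisoned feed would otherwise be statistically invisible inside a much larger, cleaner one.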

Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”?
1. ‘Good Data In, Good Data Out’
Like humans, an AI’s past can determine its future. If our models are trained on heavily biased material, we can expect that bias to translate into heavily biased AI functions that continue to reinforce the echo chamber they were trained in. While the bias may be accurate in a statistical sense, those building AI tools need to review data feeds objectively to establish whether it is actually desirable in the model. A recent example is the coordinated ‘reverse review bombing’ of the venerable Angus Steakhouse in London, where locals flooded the chain with glowing, tongue-in-cheek reviews as a prank on tourists; a model trained on that manufactured sentiment would take it at face value.
2. Transparency in Models
If the creators of AI are confident in their datasets, providing transparency into how the model was trained should be no cause for concern. While expecting them to have reviewed every shred of data is unrealistic, safeguards should exist to pseudonymize sensitive information and omit irrelevant or surplus data. All of these decisions should be documented and made accessible to those making informed decisions based on the AI’s outputs. People want to know if 50 million Social Security numbers and associated behavioral data are being fed into a model.
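As an illustration of the pseudonymization safeguard, the sketch below replaces direct identifiers with keyed hashes before records reach a training pipeline. The field names and key handling are assumptions for the example, not a description of Mitratech’s implementation.

```python
import hashlib
import hmac
import os

# The key would come from a secrets manager in practice; this fallback is
# for illustration only.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()
SENSITIVE_FIELDS = {"ssn", "email", "phone"}

def pseudonymize(value: str) -> str:
    """Keyed hash: deterministic (records stay linkable) but not reversible
    without the key, unlike a plain hash of a low-entropy value like an SSN."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict) -> dict:
    """Replace direct identifiers before the record enters a training set."""
    return {k: pseudonymize(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

print(scrub_record({"ssn": "123-45-6789", "zip": "90210"}))
# -> {'ssn': '<16-char token>', 'zip': '90210'}
```

The keyed (HMAC) construction matters for low-entropy identifiers: a plain hash of every possible SSN can be precomputed and reversed, while the secret key blocks that.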
3. Choice
Ultimately, end users should be presented with a choice. Leveraging technology shouldn’t, by default, mean consenting to AI, and users should have the option to use AI selectively, in a manner they see fit. Transparency supports this by helping them make informed decisions. This doesn’t mean users can simply bypass an AI agent in a support workflow, but it should allow them alternative ways of interacting with data or processes, even if those ways are less efficient. This choice extends to controlling which of their data is leveraged in AI models.
4. Selective Implementation
AI shouldn’t be the de facto solution for every business problem. The 80/20 rule should apply, whereby humans verify or provide oversight to AI functions, at least until AI can reliably take over. Until we can unequivocally trust what AI generates, which I expect is some way off, oversight is imperative to allow further training and to incrementally build on use cases. Tied to this is understanding: we should accept that AI isn’t perfect and use it with that in mind.
5. Accessibility
AI solutions are only as good as the human interfaces we use to query them and consume their outputs. This is why conversational AI is proving so exciting in the mainstream. Rather than having to know a series of prescriptive chained commands or prompts, we can be flexible and organic in how we interact, much like everyday human conversation. The challenge is that varying organic inputs can produce varying outputs. Building AI interfaces should therefore include normalizing and grouping varied inputs so that consistent outputs can be provided, as sketched below. It may seem counterintuitive to constrain conversational AI in this way, but it allows us to apply the necessary guardrails for sensitive applications in the foreseeable future.
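A minimal sketch of that normalization idea follows: organically varied phrasings are grouped under canonical intents so equivalent questions get consistent answers. The intents and keyword matching are illustrative assumptions; a real system would more likely use embedding similarity.

```python
import re

# Illustrative canonical intents; real systems would likely use embeddings
# rather than keyword overlap.
CANONICAL_INTENTS = {
    "vendor_risk_summary": {"risk", "vendor", "third party", "supplier"},
    "compliance_status": {"compliance", "audit", "control", "regulation"},
}

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so surface variation collapses."""
    return re.sub(r"[^a-z0-9 ]+", " ", text.lower())

def route_intent(prompt: str) -> str:
    """Map an organic prompt to a canonical intent, or flag it as unknown."""
    text = normalize(prompt)
    best, best_hits = "unknown", 0
    for intent, keywords in CANONICAL_INTENTS.items():
        hits = sum(kw in text for kw in keywords)
        if hits > best_hits:
            best, best_hits = intent, hits
    return best

print(route_intent("How risky are our suppliers?"))   # vendor_risk_summary
print(route_intent("Are we passing the audit?"))      # compliance_status
```

Routing through a canonical intent is what makes guardrails tractable: you constrain and review a small set of known question shapes rather than the infinite space of raw prompts.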
Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?
Within the next decade, I hope the AI industry will have evolved into a mature ecosystem where safety, ethics, and collaboration are at the core of governance. AI systems should work with human knowledge responsibly, empowering individuals without undermining privacy, security, or freedom.
What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?
The biggest challenge for AI over the next decade will be balancing rapid innovation with the need for robust safety, ethical oversight, and equitable access. As AI systems become more advanced and pervasive, risks such as bias, misuse, economic displacement, and unaligned AI behavior will grow exponentially. To prepare, the industry must invest in global governance frameworks, prioritize AI alignment research, and develop mechanisms for transparency and accountability. At the same time, educating users and policymakers and addressing societal impacts like workforce transitions will be essential to ensure AI continues to benefit us while mitigating its risks.
You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂
I’d just like to encourage people to be empathetic and pragmatic. Whether at work or home, we can tackle the day’s problems without causing undue stress or amplification of someone else’s problem list!
How can our readers follow your work online?
All the great work of Mitratech can be seen on our website and blog, and I am always happy to connect with like-minded people on LinkedIn.
Thank you so much for joining us. This was very inspirational.