Guardians of AI: Regina Jaslow Of Innocuous AI On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

Always validate the data source — In the AI age, end users must take responsibility for verifying the information they consume; neglecting this step can lead to costly mistakes. A well-known example is the case of attorneys who used ChatGPT to cite fictitious case law without proper validation, highlighting the risks of relying solely on AI-generated outputs. AI offers remarkable convenience, but it must always be accompanied by careful validation to ensure accuracy and credibility.
As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Regina Jaslow.
Regina Jaslow is the co-founder & CEO of Innocuous AI, a tech startup that uses AI to provide regulatory and compliance information to the insurance industry. She is a second-time co-founder with two successful exits and has more than 20 years of marketing, sales, and customer success experience. Previously, she was the Chief Revenue Officer of a health-tech startup, where she tripled its market share in two years, enabling its successful exit.
Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?
My first introduction to the world of AI was at an emergency response tech startup called RapidSOS. It was still early-stage when I joined in 2015 to lead marketing. The firm was building technology to enable 9–1–1 dispatchers to receive accurate location data from mobile callers, along with rich incident data (e.g., videos from smartphones) that gives first responders situational awareness before they arrive. Ultimately, the vision was to use AI to save lives through predictive analytics built on intelligent safety data. This experience motivated me to learn Python and dive deeper into the world of AI and its applications in the business world.
None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?
My inspiration to become a tech founder came from my experience working as Chief Revenue Officer with Matt Johnson, co-founder & CEO of Amplicare, a health-tech startup focused on improving patient health outcomes. He was a pharmacist by training and was stepping into the CEO role for the first time. Working with him and watching him grow into that position gave me confidence that I, too, could take on such a challenge. After working alongside him, I knew that I was ready to become a co-founder and CEO myself.
You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
- Focus on customer needs — By utilizing design thinking and diving deep into the customer’s problem — along with genuine curiosity about their world and their day-to-day workflow — you increase the odds that the solution created will effectively address their needs in a way that they will embrace and adopt. Through the Global Insurance Accelerator program, we spoke to over 100 insurance industry professionals to learn about their pain points rather than pitching a solution. That discovery process ultimately led us to a solution we hadn’t originally anticipated developing.
- Get drawn to the difficult things — Most people want to take shortcuts or choose the easier path; it’s human nature. However, when you allow yourself to be drawn to the difficult challenges and embrace them, you will find it’s often a blue ocean strategy. The problems you solve in that area tend to be highly meaningful to that community. In the same insurance industry example, the problem we are solving has been described as “solving the impossible” (as one head of claims at an insurance carrier put it). As a result, we face fewer competitors and have customers who are grateful that someone is finally addressing their challenges.
- Be a perpetual learner — The reality is that it’s impossible to know everything. Being curious, humble, and committed to lifelong learning can lead you down unexpected but fulfilling paths. I have worked in eight different industries so far, and I find it fascinating to learn each one from scratch. I’m not afraid to be the person in the room asking the “dumb” questions, because doing so can lead to uncovering solutions the industry hadn’t considered before.

Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?
- Immediate application of AI — As AI continues to proliferate and become embedded in nearly everything we touch, it is exciting that consumers and business people alike can see, touch, and experience its immediate application in their day-to-day lives.
- Redefining what it is to be human — For centuries, humans have taken on everything that needs to be done, from manual labor to complex intellectual tasks. With the advent of AI, we are embarking on a process of self-discovery — rethinking what it really means to be human. As AI reshapes our roles, we will adjust our perspectives on work and the broader meaning of life, which is a new and exciting journey for humanity.
- Unleashing the power of humanity — The most exciting aspect of AI today is its ability to eliminate mundane, tedious, and repetitive tasks, allowing us to focus on what we are truly meant to do. I genuinely believe AI has the power to unlock human potential, empowering us to achieve things beyond what we ever imagined possible.
Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?
- Bad actors using AI — Unethical cloning of voices and likenesses, along with the increasing difficulty in distinguishing what is real from what is AI-generated (such as deepfakes), is deeply concerning. In the insurance world, this elevates fraud to an entirely new level. The workforce, as well as recipients of voicemails, emails, and other content, must be trained to be more discerning and exercise critical thinking to differentiate between AI-generated and real content.
- Use of biased data — Algorithmic bias stemming from incomplete or biased data is another major concern, as it can perpetuate existing fairness issues. To address this, greater transparency around data sources is essential, along with building a more diverse workforce capable of questioning and rigorously testing the data for potential biases.
- Large carbon footprint — The true cost of AI is often overlooked, particularly the heavy computational demands of data centers, which leave a significant carbon footprint. While tech startups are working to address this issue and there is growing interest in using alternative energy sources such as nuclear power, these efforts may not be enough. Venture capitalists investing in solutions specifically targeting this problem may face limitations in scale. The most effective way to tackle this challenge is to integrate sustainability solutions into existing systems in a way that is highly scalable and does not require users to adopt new behaviors. Reducing compute usage through automatic optimization of inefficient AI processes — without changing end-user behavior — offers a powerful, subtractive approach that can drive meaningful impact.
As a CEO leading an AI-driven organization, how do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?
For some context, my company has developed a generative AI solution tailored for the insurance industry to enhance efficiency and ensure compliance with state statutes and insurance regulations governing claims management. Given the highly regulated nature of the insurance industry, ensuring that our AI provides accurate information and avoids hallucinations is a top priority. Compliance exists to ensure the fair and ethical treatment of policyholders by insurance companies, and we take this responsibility seriously by embedding these principles into every aspect of our operations.
To uphold these standards, we prioritize full transparency regarding our data sources, allowing for proper fact validation and traceability. We actively encourage our end users to fact-check outputs, reinforcing a culture of accountability and trust. Additionally, at the executive level, we have implemented rigorous internal review processes to proactively identify and mitigate potential risks. These measures ensure that our technology aligns with regulatory requirements while maintaining the highest ethical standards.
Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?
At Innocuous AI, our mission is to present factual information to stakeholders involved in an insurance claim, based on the law and state-enforced insurance regulations designed to ensure the fair and ethical treatment of policyholders by insurance companies. When a claim occurs, the policyholder — who is essentially the customer — finds themselves on the opposite side of the negotiation table from the insurer. Ideally, when the insurer does the right thing and pays the customer what they are owed according to the signed policy, the process is straightforward. However, disputes arise when the insurer believes coverage does not apply or should be provided at a lesser amount than what the policyholder interprets from the policy language. Our solution does not seek to take sides; rather, it is designed to present the facts objectively.
One ethical dilemma we encountered was when insurance carriers questioned whether we should offer our solution to public adjusters who represent policyholders in claim disputes. From an ethical standpoint, we believe that both parties involved in a dispute should have equal access to the same level of convenience and factual information. Fortunately, in this case, doing the ethically responsible thing — providing our solution to both sides — not only aligned with our values but also supported our broader business goals by reinforcing our position as an unbiased and trusted resource in the industry.
Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?
AI is only as safe as the creators and users behind it. While many believe that AI must be regulated by governments, the reality is that bad actors are unlikely to abide by the law in the first place. This means that policymakers must strike a careful balance — ensuring regulations do not stifle innovation that could actually help combat malicious uses of AI. Overly restrictive or slow-moving regulations risk giving bad actors an unfair advantage, as they can operate without the same constraints that ethical innovators face. Ultimately, fostering a collaborative approach between policymakers, industry leaders, and AI developers is key to ensuring AI remains safe and beneficial for society.
Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?
The data curation process is critical: garbage in, garbage out. As mentioned earlier, transparency of data sources is essential so that users can validate the results produced by AI.
A key concept for people to understand is that AI operates in the realm of probabilities; it does not function like a rules-based engine that guarantees, “if X, then Y” every time without fail. Instead, AI makes predictions that, while often accurate, can also be incorrect. For example, a shopping website might suggest that if you buy X, you might like Y — but it’s never a 100% certainty.
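To make that contrast concrete, here is a toy Python sketch. The products, co-purchase odds, and discount rule are invented purely for illustration and do not come from any real system:

```python
# Illustration only: a deterministic rules-based engine versus a
# probabilistic recommender. All values here are made up.

def rules_based_discount(order_total: float) -> float:
    """Deterministic: the same input always yields the same output."""
    if order_total >= 100:   # "if X, then Y" -- guaranteed every time
        return 0.10
    return 0.0

def probabilistic_recommendation(purchased: str) -> tuple[str, float]:
    """Probabilistic: returns a best guess with an estimated likelihood,
    never a certainty. The 'model' here is a toy co-purchase table."""
    co_purchase_odds = {
        "tent": ("sleeping bag", 0.62),   # 62% of tent buyers also bought one
        "laptop": ("laptop sleeve", 0.48),
    }
    return co_purchase_odds.get(purchased, ("no suggestion", 0.0))

print(rules_based_discount(120.0))                     # always 0.1, no exceptions
item, p = probabilistic_recommendation("tent")
print(f"You might like: {item} (confidence {p:.0%})")  # a guess, not a rule
```

The rule never fails; the recommendation is sometimes wrong by design, which is why its confidence must travel with it.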
AI hallucinations often occur when the dataset is too broad, introducing noise that reduces accuracy and leads to incorrect predictions. The most effective way to mitigate this issue is by using a dataset that is highly specific to the intended use case. For example, at Innocuous AI, we work with carefully curated data tailored to a narrow niche within the insurance industry. Unlike AI models trained on vast amounts of internet data — where the breadth introduces unnecessary complexity and inaccuracies — our targeted approach ensures greater precision and reliability.
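As a hedged illustration of that narrow-dataset idea, the sketch below grounds answers in a small curated corpus and declines to answer when nothing matches. The corpus contents, keyword retrieval, and overlap threshold are simplified assumptions, not a description of Innocuous AI's actual architecture:

```python
# Minimal sketch: answer only from a narrow, curated corpus rather than
# the open internet. Documents and thresholds are hypothetical.

CURATED_CORPUS = {
    "doc-001": "State X requires insurers to acknowledge a claim within 15 days.",
    "doc-002": "State X requires payment within 30 days of settlement agreement.",
}

def retrieve(question: str, min_overlap: int = 2) -> list[tuple[str, str]]:
    """Naive keyword retrieval: keep only documents that overlap the question
    enough. A real system would use proper search or embeddings."""
    q_words = set(question.lower().split())
    hits = []
    for doc_id, text in CURATED_CORPUS.items():
        if len(q_words & set(text.lower().split())) >= min_overlap:
            hits.append((doc_id, text))
    return hits

def answer(question: str) -> str:
    hits = retrieve(question)
    if not hits:
        # Declining to answer beats inventing one -- the anti-hallucination rule.
        return "No supporting document found in the curated corpus."
    doc_id, text = hits[0]
    return f"{text} [source: {doc_id}]"

print(answer("How many days to acknowledge a claim in State X?"))
```

Because every answer is tied to a retrievable document, an unsupported question produces a refusal rather than a plausible fabrication.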
Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”?
1. Require transparency of data sources — Ethical and responsible AI development firms should provide complete transparency regarding their data sources, allowing users to validate the information at any time. At Innocuous AI, every AI-generated output includes a convenient validation link to the original data source, empowering end users to verify accuracy. We strongly encourage users to click the link and provide feedback if they find discrepancies.
2. Always validate the data source — In the AI age, end users must take responsibility for verifying the information they consume; neglecting this step can lead to costly mistakes. A well-known example is the case of attorneys who used ChatGPT to cite fictitious case law without proper validation, highlighting the risks of relying solely on AI-generated outputs. AI offers remarkable convenience, but it must always be accompanied by careful validation to ensure accuracy and credibility.
3. Include humans in the loop — Keeping human oversight in the decision-making process is crucial, as AI operates on probabilities rather than certainties. A knowledgeable human can override AI-generated predictions and make informed judgment calls. At this stage of AI development, it is prudent to avoid full automation without incorporating human checkpoints. While human intervention doesn’t eliminate all errors, it adds an additional layer of protection. At Innocuous AI, we integrate human-in-the-loop processes to enhance decision-making without over-reliance on agentic AI. (A minimal sketch of such a checkpoint, paired with the validation link from point 1, follows this list.)
4. Train humans to be vigilant — While efforts can be made to enhance AI safety, it is impossible to guarantee that all AI systems will be safe, especially with malicious actors seeking to exploit the technology. Educational institutions and companies must prioritize training individuals to remain vigilant and discerning, rather than passively trusting AI-generated data. At SafeAI@PennLabs, the University of Pennsylvania fosters collaboration between industry and academia to mitigate AI risks through education and research, emphasizing the importance of human vigilance in AI interactions.
5. Hire diversely — Companies should prioritize building diverse teams and fostering an environment where employees actively question data fairness and bias. A diverse workforce brings unique perspectives and helps identify potential biases that might otherwise go unnoticed. At Innocuous AI, we prioritize hiring individuals from diverse backgrounds and welcome non-traditional career pathways, as we believe different perspectives are essential for developing ethical and responsible AI solutions.
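As a concrete illustration of points 1 and 3 above, here is a minimal Python sketch in which every output carries a validation link and low-confidence results are routed to a human reviewer instead of being auto-applied. The threshold, field names, and URLs are hypothetical assumptions, not Innocuous AI's implementation:

```python
# Hypothetical human-in-the-loop gate: outputs below a confidence threshold
# go to a review queue; the rest ship with their validation link attached.

from dataclasses import dataclass

@dataclass
class AIOutput:
    answer: str
    source_url: str    # point 1: a validation link travels with every output
    confidence: float  # model-estimated probability, never a guarantee

REVIEW_THRESHOLD = 0.85           # illustrative cutoff, tuned per use case
human_review_queue: list[AIOutput] = []

def dispatch(output: AIOutput) -> str:
    """Point 3: keep a human checkpoint rather than fully automating."""
    if output.confidence < REVIEW_THRESHOLD:
        human_review_queue.append(output)
        return "queued for human review"
    return f"{output.answer} (verify at {output.source_url})"

print(dispatch(AIOutput("Claim must be acknowledged within 15 days.",
                        "https://example.com/statute-15", confidence=0.93)))
print(dispatch(AIOutput("Coverage likely excluded.",
                        "https://example.com/policy-7", confidence=0.55)))
```

The design choice is deliberately conservative: the system never silently acts on a low-confidence answer, and even high-confidence answers invite verification.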
Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?
In the coming decade, there will need to be new and rapidly adaptive rules and regulations to help govern AI usage. However, it’s crucial that these regulations do not hinder companies’ ability to combat AI-driven threats with AI itself — especially since bad actors are unlikely to abide by the rules. History has shown us various technological races, from the arms race to the space race, followed by the internet revolution, and now we find ourselves in the midst of the AI race.
A prime example of this is fraud prevention. Bad actors are already leveraging AI to commit fraud at an unprecedented pace, forcing financial institutions, including insurance firms, to adopt AI solutions to stay competitive and, more importantly, stay one step ahead. Without AI, it would be impossible for any company to hire enough people to manually sift through vast amounts of transactions quickly enough to detect fraudulent activity. AI has become an essential tool in ensuring the integrity of financial systems, and governance frameworks must strike a balance between oversight and enabling innovation to fight these evolving threats effectively.
What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?
In the next decade, the speed of AI adoption will determine the new winners and losers across nearly every industry. Companies that take a wait-and-see approach to AI and other emerging technologies risk getting outcompeted by those that proactively experiment and collaborate with tech startups to shape solutions tailored to their industry’s needs. In traditionally conservative industries like insurance, there is a general reluctance to share data with software providers — an obstacle that could slow progress and hinder efforts to enhance the customer experience. This resistance will likely create a widening gap between the seamless digital experience consumers enjoy in other aspects of their daily lives and their often outdated interactions with insurance providers. As customers increasingly demand better, faster, and more intuitive experiences, the firms that listen and take the leap of faith will emerge as industry leaders.
While AI has been evolving since the term was first coined in 1955, the pace of advancement will accelerate even further with the rise of quantum computing, which promises significantly faster processing of complex data. Although quantum computing is still in its early stages and not yet capable of performing large-scale calculations, experts — including McKinsey and MIT — predict that this could become a reality before the end of the 2030s. To stay ahead of this next wave of innovation, executives must keep an open mind and actively partner with forward-thinking startups. Preparing early and embracing these advancements will be critical for organizations looking to maintain a competitive edge in the rapidly evolving AI landscape.
You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂
In an ideal world, organizations — both large and small — would solve problems of all sizes without being constrained by market size or profitability concerns. However, in today’s reality, companies often focus solely on solving issues that affect the majority, primarily due to resource limitations and the need for scalability. With advancements in AI and the future potential of quantum computing, we will have the opportunity to pursue passions and initiatives that benefit not only the majority but also the smallest and most underserved populations. These technologies have the power to unlock solutions previously deemed impossible, enabling humanity to reach new heights of innovation and inclusivity. The movement I hope to inspire is one of embracing these technological advancements with open minds and a willingness to think beyond traditional limitations. By doing so, we can move away from the mindset of neglecting smaller populations and instead act with greater humanity, ensuring that no one is left behind. This shift in approach has the potential to redefine what is possible and elevate humanity to an entirely new level of compassion and progress.
How can our readers follow your work online?
They can connect with me on LinkedIn: https://www.linkedin.com/in/reginajaslow/ or follow me on X: https://x.com/ReginaJaslow. To learn more about our AI solutions for insurance, visit https://www.innocuous.ai/for-insurance-claims.
Thank you so much for joining us. This was very inspirational.