Guardians of AI: Brian Cook of WellSaid On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

An Interview With Yitzi Weiner

I’d like to inspire critical thinking across the world. Educating people across all walks of life on how to analyze and evaluate information logically, identify bias and make thoughtful decisions. If we could all think more objectively and respect opinions different from our own, the world would be a much easier place to live in.

As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Brian Cook, CEO of WellSaid.

Brian Cook is the CEO at WellSaid, where he brings a wealth of experience in building and scaling companies. At WellSaid, Cook is focused on leading the company into its next phase of growth and expanding its reach in AI voice technology for enterprise teams.

Prior to his current role, Cook was the co-founder, CEO, and chairman of workflow software company Nintex, where he spent 14 years growing the company into a global brand of over 5,000 customers across 90 countries. Cook also co-founded Hyperfish, an employee directory app that was later acquired by LiveTiles, and most recently founded Incredible Capital, an investment fund focused on software companies.

In addition to accelerating growth at WellSaid, Brian is also accelerating on the track as a current challenge driver for the Ferrari Challenge.

Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

Currently, I am the CEO of WellSaid, an ethical AI voice platform. WellSaid leverages Text-to-Speech (TTS) technology and proprietary AI models, which are trained on exclusive and licensed voice data, to generate natural sounding voice overs. Prior to joining WellSaid, I was the co-founder, CEO, and chairman of workflow software company Nintex, where I spent 14 years growing the company into a global brand of over 5,000 customers across 90 countries. I also co-founded Hyperfish, an employee directory app that was later acquired by LiveTiles, and most recently founded Incredible Capital, an investment fund focused on software companies.

None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?

I’ve had a mentor/coach/colleague in Australia, Wayne Woolston, whom I’ve worked with on and off for over 20 years. Wayne has been my long-term business dad!

You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?

Perseverance, Curiosity & Discipline

  • Perseverance: My secret weapon; eventually I wear my competitors and detractors out! 🙂
  • Curiosity: I think it is important to always be asking, “Why?” and to never take anything for granted.
  • Discipline: 80% of business is hard work that most of the time isn’t fun. That’s where I think many fail.

Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?

  • Quality of Content: Over the decades, advancements in computing power, algorithms and data processing have enabled significant innovations in the space. It’s exciting to see how far we’ve come, and the caliber of high-quality content that AI systems are able to deliver today.
  • AI for Good: AI technology has been leveraged for incredibly meaningful applications. For instance, individuals who’ve lost their voice due to neurological disorders or diseases like ALS have used advanced AI voice technology to recreate their authentic voice, making speech and the ability to communicate with loved ones accessible again.
  • Diverse Use Cases: AI has a wide range of diverse use cases that span across almost every industry. From healthcare, to advertising and marketing, to learning and development, the possibilities are endless, and we’ve only just begun to scratch the surface.

Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?

  • Growth at All Costs: There is a growth-at-all-costs mentality that tends to come with AI: everyone wants to be the first to market, or the fastest platform. However, this often means proper guardrails aren’t implemented to prevent misuse of the technology. While being the first or fastest can be appealing, it’s important to consider the potential risks that come with that.
  • Deepfakes and Misuse: Increasingly, deepfakes are being used to mimic artists, personalities and individuals without payment or consent, as well as in cases of criminal fraud and identity theft. This is because there are a number of companies that use a public AI model, enabling bad actors to pull from any source on the internet to create and manipulate content. The safest way to combat this is to operate on a closed model that only pulls from licensed data that is not publicly available.
  • Replacing Humans: AI, when used properly, can be an invaluable tool to complement human work, not replace it. While it can be a powerful tool to enhance efficiency and streamline workflows, it’s important to remember that humans are irreplaceable for creativity, emotional intelligence, and strategic decision-making.

As a CEO leading an AI-driven organization, how do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?

  • As an AI voice company, we prioritize working with voice actors, rather than replacing them, by providing them with sustainable career opportunities. When leveraging their voice for our platform, we ensure we have their explicit consent and compensate them fairly for their talent, as well as pay them royalties for ongoing revenue share. End users can only access voices from talent who have provided consent, preventing the spread of deepfakes or voice theft.
  • We also made the decision to operate on a closed-source model to ensure customer data is secure and protected from outside use, and that customers have commercial usage rights for any voice they create on our platform. By taking an unconventional, slower approach to developing our AI model, we’re able to ensure that our content is ethically sourced and that our proprietary voices are properly trained. We are currently the only AI voice platform using a closed-source model.
  • Additionally, we’ve implemented a robust policy that prohibits the misuse of WellSaid’s platform, and content created on the platform is subject to content moderation, limiting the creation and release of content that does not align with our policy.

Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?

There’s a trend in the voice AI industry in which customers are able to create a clone of someone’s voice by simply uploading voice data to a platform. We recognize that this service is appealing to some, but such a practice does not align with our commitment to responsible innovation, which includes our requirement that we have a person’s explicit consent before we build a voice with their data. While we recognize this choice might cause a subset of customers to look elsewhere for this service, we’re confident that our stance here is the right thing to do and reflects our organizational principles.

Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?

  • We must keep humans in mind throughout the design process, and take time to consider and plan for risks, while incorporating the principles of “safety by design.”
  • Developers of the technology also need to maintain best practices for training on known, reviewed, or proprietary data to ensure a model is learning and generating safely.

Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?

  • Keep humans in the loop throughout the entire product development process (from visioning to designing to testing and beyond).
  • Ensure the data used to train the model is representative and robust, and document the training process and the intended use of the resulting tool (including its potential limitations).
  • Incorporate rigorous and responsible QA practices for both preparing training data and evaluating the final model before opening up its use to customers or external users.

Here is the primary question of our discussion. Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”? Please share a story or an example for each.

  1. Content moderation: We have a comprehensive content moderation platform which includes automated moderation, human moderation, blocking content and users, as well as full cooperation with law enforcement.
  2. Regulation and compliance: In addition to being SOC 2- and GDPR-compliant, we have a strict no-deepfake policy and have never been part of any voice actor lawsuits.
  3. Patented, closed AI model: WellSaid is proud to be one of the first patented AI voice companies operating on closed AI models.
  4. U.S. hosted, secure data: Customer data is hosted and managed in the U.S. through Google Cloud Platform. We do not use customer data to train our model, and we do not sell customer data or PII.
  5. Accessibility: We are committed to meeting WCAG 2.1 and other national standards, working towards compliance in these areas by 2025.

Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?

What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?

Ethics and accountability. This includes data privacy and AI safety. It also means training the workforce on how to work with AI to gain efficiencies.

You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂

I’d like to inspire critical thinking across the world. Educating people across all walks of life on how to analyze and evaluate information logically, identify bias and make thoughtful decisions. If we could all think more objectively and respect opinions different from our own, the world would be a much easier place to live in.

How can our readers follow your work online?

Readers can follow along at WellSaid.io

Thank you so much for joining us. This was very inspirational.


Guardians of AI: Brian Cook of WellSaid On How AI Leaders Are Keeping AI Safe, Ethical… was originally published in Authority Magazine on Medium.