Guardians of AI: Kasia Borowska Of Brainpool AI On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

Don’t be afraid to lean on external experts. In a rush to implement AI, with new regulations being enforced left, right and centre, the process can quickly become overwhelming. Businesses should not overlook the importance of seeking external support to ensure that compliance and ethics are consistently prioritised.

As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Kasia Borowska.

Kasia Borowska is the Co-Founder and Managing Director at Brainpool AI, an Artificial Intelligence services company powered by a global network of over 500 AI and ML experts. With degrees in Mathematics and Cognitive Sciences and years of corporate experience in marketing, Kasia realised how little academic research is actually applied in real life. Brainpool AI works with forward-thinking businesses to solve their most pressing challenges with the latest advances in Artificial Intelligence. Kasia’s hope for the future of AI is a partnership between Artificial Intelligence and humans, in which AI takes on the manual, repetitive and time-consuming tasks, allowing people to focus on the things that matter.

Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

At university, I studied Mathematics and Cognitive Sciences. At the time, I had no idea how to apply any of this knowledge effectively in real life to truly make a difference. Straight out of university, Peter Bebbington and I launched Brainpool AI, which was initially a marketplace for data scientists wanting to find work in industry. The business has since transformed into a consultancy that helps businesses integrate AI and unlock true value. Brainpool AI’s vision stems from our drive to make our theoretical science knowledge work in real life, and it is incredible to see how far we have come, with 500 AI and ML experts building our “brainpool”.

None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?

As I mentioned, Peter Bebbington and I founded Brainpool AI straight out of university. Over the past nine years, I have been fortunate to be surrounded by PhD-level computer scientists, AI experts and engineers. What started as a marketplace for data scientists wanting to find work in industry has flourished into a “brainpool” of 500 AI and ML experts, and I will always be grateful for our experts and everything they have done for the business.

You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?

  1. Resilience. One key quality that I believe is instrumental to success is resilience. The business landscape is extremely turbulent, and this was especially the case during the pandemic. The pandemic caused us to lose a lot of clients at Brainpool which forced us to start again and grow from the ground up. Business leaders must remember that change and turbulence are inevitable, especially for startups, so it is crucial to be resilient.
  2. Perseverance. When I co-founded Brainpool and began to sell AI to businesses back in 2017, many people’s only association with AI was The Terminator. Despite this lack of awareness and the scepticism from business leaders, we persevered and had meaningful conversations which allowed us to help businesses unlock new realms of opportunity with AI.
  3. Empathy. At Brainpool, I have been lucky enough to work alongside a wide range of PhD-level experts. Working alongside these experts has required me to understand that people think differently and have different views. It has shown me the importance of empathy and understanding — a crucial skill in building a successful business.

Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?

  1. Unlocking AI’s true potential. One thing that excites me about the AI industry is the range of benefits that businesses can unlock with an Agnostic AI infrastructure. Although many businesses still opt for off-the-shelf solutions, there has been a clear shift in the industry, with many experts emphasising the importance of moulding AI around an individual business and its unique requirements. As the value of an Agnostic AI approach begins to be realised, businesses will benefit from solutions made for their organisation, and will therefore unlock value, rather than leveraging cookie-cutter solutions that do not respond to their unique needs.
  2. AI’s impact on all industries. From leveraging AI to identify health conditions before patients begin to show symptoms to AI-powered timber design tools — the applications and possibilities for AI are endless. As someone who has been in the industry for over nine years, it has been incredible to witness the use cases grow as the technology has advanced. I look forward to seeing how these use cases continue to evolve over the next few years.
  3. Significant tangible benefits of AI. Undoubtedly, AI solutions have already made a significant impact on people’s lives. But with the rise of Agentic AI over the coming months, the presence of AI and its benefits in wider society and business will be profound. The time-saving benefits will unlock significant productivity improvements and turbo-charge AI adoption in people’s personal and professional lives.

Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?

1. AGI on the horizon. Two years ago, the average estimate for how long it would take us to reach AGI, according to a top AI research institute, was 34 years. That estimate has now been lowered to anywhere between five and 20 years.

We are in control of AI now because we are the more intelligent species. However, with AGI on the horizon, which will be more intelligent than us, we must solve today’s safety challenges while we still have control; once we arrive at AGI, it will be too late. The only way to mitigate the risks of AGI is to regulate from the top down on an international scale. Regulation must mandate the proportion of resources devoted to safety relative to development. Geoffrey Hinton argues that at least a third of AI spending should be invested in safety, but as it currently stands, less than 10% is being invested on average. This has to change.

Another way to ensure we retain control before we arrive at AGI is by making all of this work transparent to regulators, which is exactly what the recent AI bill in California set out to do. It sought to mandate that large AI providers disclose what they have built and be fully transparent about the data involved. This is something that countries around the world should look to implement to help ensure a more ethical AI future.

2. AI “FOMO”. The hype around AI, especially generative AI applications like ChatGPT, has led some firms to succumb to “AI FOMO”: rushing into adoption without a clear strategy. This kind of impulsive, short-term decision-making lacks strategic foresight about how best to leverage AI for sustained success, and often prevents businesses from realising AI’s full potential.

In the rush to capitalise on the AI hype, almost half of businesses leverage off-the-shelf AI solutions. These pre-built solutions mean businesses do not need to develop their own technology, and they are designed to offer quick deployment and lower up-front costs. This makes them an attractive option for businesses; however, these technologies are not as effective as they may seem.

With research revealing that over 80% of AI projects fail, now is the time for businesses to avoid being swept up by the AI hype and to implement Agnostic AI rather than opting for off-the-shelf solutions. Agnostic AI provides businesses with increased flexibility, scalability and efficiency: the recipe for successful AI implementation.

3. The missing piece of the “AI ROI” puzzle — CEO and CTO collaboration. Across the industry, we are witnessing a misalignment between CEO and CTO expectations when it comes to AI. This misalignment stems from the CTO being excited about the next wave of technological innovation while the CEO cares primarily about the bottom line and ROI. With this in mind, at the early stages of every AI project, businesses must bring their technical and commercial representatives together to discuss why they are building the integration and what ROI they expect.

During these conversations, it is also crucial to set realistic expectations. Typically, AI projects see an ROI in one to two years and only begin to make a profit after that. It is crucial for businesses to go into AI adoption knowing this, instead of expecting to see an ROI after a week or so. If these expectations are set from the outset, the misalignment between CTOs and CEOs can be resolved and AI projects will achieve results the entire C-suite is happy with.

How do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?

Ethical and responsible AI is not spoken about enough. Any company implementing AI has a responsibility to ensure it is transparent, ethical and responsible, and with new regulations such as the EU AI Act firmly on the horizon, this is more important than ever.

When we speak with businesses looking to implement AI, safety and ethics are a constant thread running through our conversations. To help ensure our clients are implementing AI in a way that is both ethical and in line with their company’s overall vision and strategy, we encourage them to take an agnostic approach to AI.

From an ethical perspective, Agnostic AI means that businesses are not locked into a single vendor’s framework, allowing them to customise their systems as required. Businesses can take this a step further by integrating risk management and human oversight capabilities to strengthen their models. Agnostic AI also provides enhanced data governance capabilities, giving businesses more control over their data and ensuring everything is transparent and free from bias. This approach allows businesses to mould AI around their organisation and its needs, rather than taking a cookie-cutter approach with off-the-shelf solutions that do not understand the nuances of the business.

Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?

At Brainpool, we have a very strict list of projects that we would never get involved in to ensure we are consistently balancing business goals with ethical responsibility. The projects we will never work on include any work with autonomous weapons, the fossil fuel industry, the Metaverse, AGI (until we solve the value alignment problem) and any activity which promotes misinformation.

Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?

We’ve seen AI develop rapidly over the past two years, and I don’t envision this rate of change slowing down. The rate of development has been so significant that many have raised concerns about the development of Artificial General Intelligence (AGI) and its potentially dangerous implications. Whilst many have raised this issue, most prominently Geoffrey Hinton, dubbed the ‘Godfather of AI’, the most significant players in AI development are seemingly ignoring it, or at best kicking the can down the road.

It is vital to mitigate the risks of AGI by initiating regulation from the top down on an international scale. Regulation must mandate the proportion of resources devoted to safety relative to development. Geoffrey Hinton argues that at least a third of AI spending should be invested in safety, but as it currently stands, less than 10% is being invested on average.

In the entire history of human civilisation, there has never been a case of a less intelligent species controlling a more intelligent one. We are in control of AI now because we are the more intelligent species. However, as we are now building AGI, which will by definition be more intelligent than us, it is crucial that we solve today’s safety challenges while we still have control; once we arrive at AGI, it will be too late.

Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?

To help mitigate hallucinations and prevent the spread of misinformation, businesses should look to leverage Agnostic AI to make more robust, reliable models.

A good example of AI’s tendency to hallucinate is Apple’s recent AI news feature, which was designed to summarise breaking news notifications but ended up inventing false claims and spreading misinformation.

Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”?

  1. Prioritise data quality. It is important to remember that a model will only perform as well as the data it is trained on. From an ethical standpoint, businesses must ensure all AI models are trained on diverse and inclusive data sets that are free from bias. Businesses must also be transparent with regulators about the data their models are trained on. This helps to instil a sense of trust, as stakeholders and regulators will have full visibility into how the model makes its decisions, ensuring it is consistently ethical and responsible.
  2. Do not treat compliance as an afterthought. Too many businesses are guilty of diving headfirst into an AI project and focusing on the end result without giving a second thought to compliance. Before beginning a project, businesses must put a plan in place to ensure that compliance is prioritised at every turn. Businesses must also consider compliance within each specific context. For example, with many organisations increasingly leveraging AI to deliver more personalised customer experiences, they must ensure they adhere to data privacy regulations, or they will face hefty fines and lose consumer trust. This is a compliance issue that can easily be mitigated with the right level of up-front planning.
  3. Leverage flexible, modular architecture. The regulatory landscape is continually evolving and many businesses are struggling to keep up with the changing requirements. To ensure businesses are putting their best foot forward, they should implement a flexible, modular architecture by taking an agnostic approach to AI. This will allow businesses to choose the best model for each specific use case and easily change between these models as regulations evolve to ensure they are consistently compliant.
  4. Monitor models after implementation. Businesses must remember the job is not done once AI has been implemented. To ensure consistent compliance, businesses must regularly conduct ethical reviews and monitor all models to allow any ethical or compliance issues to be quickly identified and resolved before they get out of hand.
  5. Don’t be afraid to lean on external experts. In a rush to implement AI, with new regulations being enforced left, right and centre, the process can quickly become overwhelming. Businesses should not overlook the importance of seeking external support to ensure that compliance and ethics are consistently prioritised.

Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?

Echoing what I have mentioned previously, I hope to see governments around the world uniting to regulate AI while the task is still in our control. The technology has evolved into a tool that can transform individuals’ personal and professional lives, and in order to continue reaping the benefits it has to offer, governments must regulate before we reach AGI; after that, it will be too late.

What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?

One challenge for AI over the next decade is job displacement. I think a lot of jobs will be created by the AI revolution, as with every industrial revolution before it. Think about when computers became mainstream and everyone was worried that their jobs would be taken over by computers; now we have thousands of new jobs, like SEO experts, that never existed before.

That said, I do think this revolution will be different, as it will have a much greater impact. Governments and businesses must emphasise AI’s ability to enhance rather than replace. AI’s time-saving capabilities are endless, and if used correctly, employees will have time to focus on higher-value, creative tasks and unlock a realm of opportunity for businesses.

You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be?

I studied Cognitive Sciences at university, which gave me a strong appreciation for the power of our minds. I think more people should take the time to understand how their brain works from an early age. Our brain is our most powerful organ, yet many of us shy away from learning about it, for example through therapy. There is also very limited education about brain anatomy and function in schools.

If done correctly, AI will be able to improve human intelligence, but only if we keep working on increasing the capabilities of our own intelligence in parallel. As individuals rely on their phones and technology more and more, unfortunately we are heading in the wrong direction.

With this in mind, the movement I would like to start would be to preserve human intelligence in the age of AI.

How can our readers follow your work online?

For anyone interested in following me and Brainpool AI on our journeys, you can find me on LinkedIn, follow the Brainpool AI LinkedIn account, or visit the Brainpool AI website.

Thank you so much for joining us. This was very inspirational.


Guardians of AI: Kasia Borowska Of Brainpool AI On How AI Leaders Are Keeping AI Safe, Ethical… was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.