Guardians of AI: Ken Huang Of DistributedApps.ai On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

Educating teams about AI-specific risks is critical. A well-informed workforce is the first line of defense in ensuring AI safety. During a generative AI deployment, we conducted tailored training sessions for developers, product managers, and leadership. By explaining risks like model drift and misuse, we empowered the team to recognize and address potential issues independently, fostering a culture of awareness and responsibility.
As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Ken Huang.
Ken Huang is co-chair of AI Safety Working Groups at Cloud Security Alliance and author of <Generative AI Security> and <Beyond AI> books, both published by Springer. He is the core contributor of OWASP Top 10 for LLM Applications. His recent work is on Agentic AI red teaming and AI vulnerability scoring system.
Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?
I began my career working on rule-based reasoning and expert systems, later contributing to the early development of OWASP ASVS. As Chief Security Architect for the Affordable Care Act project, I designed a robust defense in depth security framework for a critical national initiative. I’ve authored books on blockchain and generative AI security and created a popular video course on generative AI for cybersecurity, hosted by EC-Council, which has received widespread positive feedback on LinkedIn. I am a core contributor to OWASP Top 10 for LLM Applications and co-chair the CSA AI Safety Working Groups, actively contributing to advancing secure AI and cybersecurity practices. I have contributed to the NIST public working group on Generative AI. I am a sought after speaker and was invited to speak at WEF, RSA OWASP AI Security Summit, CSA AI Think Tank Day, World Digital Technology’s AI Summit, etc.
None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?
I would like to mention Jim Reavis who is CEO and founder of Cloud Security Alliance for his vision in both cloud and AI security and for many collaborative initiatives under his leadership.
You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
Take ownership of issues, keep humble and focus on long visions.
Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?
1: Agentic AI can unleash the power of GenAI model, help us to build real world business applications.
2: Multimodal AI is still early and can be leveraged to bring model performance to next level.
3: Robotics, especially humanoid robot will create enormous values and next 10 trillion dollar company will be robotics company.
Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?
The industry faces several critical challenges that demand attention. First, the rapid adoption of advanced technologies like generative AI often outpaces the development of security standards and regulatory frameworks. This gap exposes organizations to vulnerabilities and risks. To address this, there must be a concerted effort to create agile and adaptable guidelines, such as extending initiatives like OWASP Top 10 for LLM Applications, while fostering collaboration between industry, academia, and policymakers.
Second, there is a significant shortage of skilled professionals in cybersecurity and AI safety. This gap leaves organizations struggling to defend against sophisticated threats. To mitigate this, we need more accessible education and training programs, such as EC-Council’s courses, along with greater investment in talent pipelines and public-private partnerships to upskill professionals.
Finally, the lack of transparency and accountability in AI system design raises ethical and security concerns. Black-box AI models make it challenging to predict behaviors or identify vulnerabilities. To alleviate this, promoting explainability and robust auditing mechanisms for AI systems is crucial. Frameworks like those developed by NIST and CSA AI Safety Working Group can play a vital role in standardizing these practices.
As a CEO leading an AI-driven organization, how do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?
As the leader of a small AI-driven company, I focus on contributing to the broader industry to advance ethical and responsible AI development. My participation in initiatives like OWASP Top 10 for LLM Applications and the CSA AI Safety Working Group allows me to actively shape industry standards and frameworks, ensuring that ethical principles are prioritized in AI technologies. By contributing to NIST public working groups on generative AI, I help address emerging challenges and provide practical insights to improve safety and transparency.
With limited resources, I emphasize collaboration and knowledge sharing as key strategies. I actively engage with industry peers, researchers, and regulators to advocate for best practices and contribute to the development of actionable guidelines. This participation ensures that our work aligns with broader efforts to build secure and trustworthy AI systems.
Additionally, I share my expertise through books, courses, and public discussions to help bridge gaps in understanding and implementation of ethical AI practices. By focusing on these contributions, I aim to amplify the impact of ethical AI principles across the industry, ensuring that even smaller organizations can play a meaningful role in driving responsible innovation.
Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?
Yes, I’ve encountered challenging ethical dilemmas, particularly in balancing the need for innovation with the responsibility to ensure safety and fairness in AI systems. One instance involved a client’s request for an AI application that, while technically feasible, raised concerns about potential bias and lack of transparency in its decision-making process. The client prioritized speed and functionality, but I recognized that deploying the system in its current state could lead to unintended consequences and harm.
To address this, I initiated an open dialogue with the client, explaining the ethical implications and potential risks of the proposed deployment. I proposed alternative approaches, including integrating bias mitigation techniques and enhancing model explainability, even if it meant extending the development timeline. While this required careful negotiation to align with the client’s goals, they ultimately appreciated the value of building a trustworthy and responsible solution.
I also sought insights from industry frameworks, such as OWASP and CSA guidelines, to validate and strengthen my recommendations. This experience underscored the importance of not only identifying ethical challenges but also collaborating with stakeholders to find balanced solutions that prioritize long-term impact over short-term gains.
Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?
To ensure AI remains safe and minimizes potential harm to humans, an approach combining technical, regulatory, and collaborative efforts is needed. First, robust security measures must be integrated into AI development from the outset. In my Generative AI Security book, I discussed practical design principles for data security, model security, supply chain security, MLSecOps, and using GenAI security tools among many other useful practices. I strongly recommend you to get a copy of this book.
Second, transparency and explainability should be prioritized. AI systems must be designed to allow users and stakeholders to understand how decisions are made, enabling better accountability and trust. This includes adopting practices like model interpretability, robust documentation, and clear communication about the limitations and potential risks of AI applications.
Third, governments, industry, and academia must collaborate to establish comprehensive regulations and ethical guidelines that address safety, fairness, and bias. Contributions from working groups like those by NIST and CSA are instrumental in creating practical, implementable standards that keep pace with technological advancements.
Finally, ongoing monitoring and auditing of AI systems after deployment are critical. Developers must continuously assess performance, identify emerging risks, and implement updates to address new threats. Education and training for AI professionals on safety and ethical practices, coupled with public awareness initiatives, will further ensure a collective effort toward safe AI innovation.
Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?
To ensure AI produces accurate and transparent results, several steps must be taken across the development lifecycle. First, improving the quality of training data is essential. This includes curating datasets that are representative, diverse, and free from bias or misinformation. Rigorous data validation processes and the use of synthetic data to fill gaps can help reduce inaccuracies and biases in model outputs.
Second, the deployment of AI models should emphasize explainability. Techniques such as attention mapping, decision trees for interpretable layers, mechanical interoperability, chain of thoughts reasoning, and counterfactual analysis can help illuminate how models make decisions. This transparency allows users to better understand and trust AI outputs, while also enabling developers to identify and address errors more effectively.
Third, integrating continual learning mechanisms can help AI systems adapt to new information and correct inaccuracies over time. Regular updates and retraining with high-quality, verified data ensure models remain relevant and accurate as conditions evolve.
Fourth, incorporating feedback loops is critical. Systems should be designed to allow user feedback to flag incorrect or biased outputs, which can then be reviewed and used to improve the model. This collaborative approach enhances accuracy and user trust.
Finally, deploying robust auditing and monitoring systems is necessary to detect and mitigate errors or hallucinations in real time. By focusing on data quality, transparency, adaptability, and oversight, we can significantly improve the reliability of AI systems.
Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”?
- Augment Existing Security Programs with Generative AI
Organizations should adapt their existing security frameworks to incorporate safeguards for generative AI systems. Drawing insights from my book< Generative AI Security>, this involves identifying unique vulnerabilities such as model inversion attacks or data poisoning and addressing them proactively. For example, in a recent project, we integrated prompt injection testing specifically for GenAI Apps into the client’s broader security program. This ensured that their GenAI systems were aligned with their overarching cybersecurity strategy, mitigating risks effectively. - Red Team Early, Red Team Often
Proactive red teaming helps identify and address vulnerabilities before malicious actors exploit them. For high-value AI systems, external red teams bring a fresh perspective and advanced expertise. In one instance, we were hired to red team an AI educational system designed to provide personalized learning experiences. During our assessment, we identified potential vulnerabilities where adversarial actors could manipulate user inputs to access unauthorized data or skew learning recommendations. By simulating these attack scenarios, we provided actionable insights that allowed the client to implement countermeasures early, significantly improving the system’s security and maintaining the integrity of its educational outcomes. - People Are Key to AI Safety
Educating teams about AI-specific risks is critical. A well-informed workforce is the first line of defense in ensuring AI safety. During a generative AI deployment, we conducted tailored training sessions for developers, product managers, and leadership. By explaining risks like model drift and misuse, we empowered the team to recognize and address potential issues independently, fostering a culture of awareness and responsibility. - MLSecOps
Adopting MLSecOps practices to integrate security throughout the machine learning life cycle. In one project, we established an MLSecOps pipeline to continuously monitor a generative AI system for vulnerabilities. Automated tools flagged suspicious behavior, unusual data patterns, allowing the team to address issues during model fine tuning time. This streamlined approach ensured ongoing security without compromising efficiency. - AI Safety, Responsibility, and Ethical Culture
Embedding a culture that values safety, responsibility, and ethics is foundational. Leadership must model these principles and integrate them into daily workflows. For example, we established ethical review checkpoints during the development of an AI-powered content generation tool. These checkpoints ensured the system avoided generating harmful or biased content, aligning the project with the organization’s ethical standards. This culture, reinforced by collaboration and open dialogue, promotes long-term responsible AI development.
Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?
Looking ahead, I hope to see significant advancements in industry-wide AI governance, particularly through the adoption of comprehensive frameworks that embed security, ethics, and transparency into AI development and deployment. As the co-chair leading the effort on the CSA AI Controls Matrix (AI CM), I have seen firsthand how such frameworks can serve as a foundational component of programs like the CSA AI STAR. The AI CM provides organizations with structured controls to assess and mitigate AI risks, requiring that security and governance are prioritized across the AI lifecycle. I envision this becoming a cornerstone for global AI governance efforts.
Additionally, I anticipate enhanced collaboration between governments, industry groups, and academia to establish adaptive and harmonized regulatory frameworks. Leveraging efforts like the CSA AI STAR program, these frameworks can provide clear, actionable standards while addressing emerging challenges. This will create a cohesive approach to governance, enabling both innovation and accountability on a global scale.
Finally, I foresee a stronger focus on global collaboration on AI Safety initiatives such as the one led by World Digital Technology Academy under UN frameworks..
What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?
The biggest challenge for AI over the next decade will be balancing rapid innovation with ensuring safety, ethical use, and accountability. Security threats, such as adversarial attacks and data manipulation, will grow in sophistication, requiring proactive measures like red teaming and continuous monitoring. Ethical concerns, including bias and misinformation, will demand investments in transparency and fairness. Regulatory uncertainty will necessitate collaboration between organizations and policymakers to develop clear and adaptive governance. Finally, addressing the talent gap in AI safety and security will be critical, requiring significant efforts in education and workforce development to keep pace with evolving challenges.
You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂
If I could inspire a movement, it would focus on creating a global initiative for AI Safety Education and Awareness. This movement would aim to make AI safety, ethics, and security concepts accessible to everyone — from policymakers and developers to everyday users. By demystifying AI and equipping people with knowledge about its benefits and risks, we could empower individuals and organizations to use AI responsibly while mitigating harm.
This initiative could include accessible online courses, community-driven workshops, and partnerships with educational institutions to integrate AI safety into curricula. By fostering a culture of collaboration and shared accountability, this movement would not only reduce misuse but also amplify the positive impact of AI technologies on society. Educating people about AI’s potential and challenges can trigger a wave of informed innovation, ensuring that its benefits are widely and equitably shared.
How can our readers follow your work online?
https://www.linkedin.com/in/kenhuang8/
https://medium.com/@kenhuangus
Thank you so much for joining us. This was very inspirational.
Guardians of AI: Ken Huang Of DistributedApps On How AI Leaders Are Keeping AI Safe, Ethical, Res was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.