Guardians of AI: HP Newquist Of The Relayer Group On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

AI products need to source their answers, replies, and solutions such that users can determine where — and how — an AI came up with a specific response. This will also allow for verification by the user and reduce false or misleading answers.
As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing HP Newquist.
HP Newquist is one of the foremost authorities on the history and application of artificial intelligence. His career in AI spans four decades, beginning with the creation of AI TRENDS, the first-ever publication to explore the practical use of artificial intelligence. Since then, he has given presentations about the potential of AI around the world, and his work has appeared in and been cited by such diverse publications as The New York Times, The Wall Street Journal, Newsweek, Variety, Billboard, Forbes, Popular Mechanics, Rolling Stone, Computerworld, and USA Today.
Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?
I was writing for computer publications right out of college, and I focused on the most advanced tech of the time, which was the first wave of artificial intelligence. This was in the 1980s and 1990s. Fortunately, almost no one else was writing about AI as anything more than a curiosity, and I managed to become the leading observer of trends in the AI industry. Now that AI is at the forefront of technology again, I’ve been asked to comment regularly on its current state.
You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
Three key aspects of the working world that everyone, not just executives, should embrace:
1. Listen more than you speak.
2. Always acknowledge the people who contribute to a project, and share the credit with them.
3. Do not presume that everyone has the same information, or the same skills, that you do. Be patient, and always try to find common ground, or create it.
Beyond these, and just as important: in every job I held within an established organization, the critical thing I learned, and observed, was what NOT to do. From peer interactions to handling tasks to the quality of the work itself, it was evident that some people did not put in the effort, or were not self-aware enough to understand why success eluded them. It’s relatively easy to find someone who does a great job and to want to emulate them, but it’s perhaps more important to identify and eliminate the characteristics you never want to incorporate into your own life. Being ungrateful, ignoring the contributions of others, and showing general disrespect are all significant obstacles to your own progress. You can see this occurring on a regular basis in any organization, and you can often identify the people behind it.
Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?
AI has the potential to improve the world in which we find ourselves. Yet I think the most exciting things are happening under the radar of the media and out of view of the general public. Specifically, the use of AI in making medical diagnoses with all available and up-to-date information, from research papers to case studies, will dramatically improve healthcare worldwide. Next, the use of AI in education, especially for teaching and aiding students on an individual basis, will have worldwide consequences, given the widespread perception that education is in decline. Third, I think the use of AI to aid people in their daily lives is going to change the perception of the “daily grind.” While personal AI use by the vast majority of humans is still a ways off, there are going to be a lot of “aha!” moments when people realize that AI can help them plan birthday parties, write holiday cards, plan vacations, set budgets, assist with financial planning, design living spaces and gardens, and on and on.
Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?
Conversely, the most troubling things about AI are the ones on everyone’s radar. The first concern is AI companies using work that hasn’t been licensed or attributed to build their language models. This needs to be corrected, whether via remuneration or citation of the original source, even if it’s to the economic detriment of AI developers. Second, AI companies that release software without guardrails need to be incentivized to produce safer and more transparent products. That means addressing the whole “black box” excuse that companies claim prevents them from seeing how an AI processes a query. If you’re smart enough to build the app, you’re smart enough to build another that pierces the black box. Third, I’m concerned that more and more AI companies are being viewed as rapacious and indifferent to user needs. We saw this decades ago with IBM, then Microsoft, and now Google. The AI companies are not doing anything to rectify this perception (because in some cases it’s true), and I think that will have a deleterious effect on the whole industry, tarring the smaller companies doing application-specific AI work in medicine, education, and research with the same brush. We shouldn’t have to wait for legislation or lawsuits to correct the industry’s waywardness.
As a CEO leading an AI-driven organization, how do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?
Everyone in our company has been using AI for decades, so we’ve seen the pitfalls as the tech has evolved from tiny niches to a cultural phenomenon. We’re fortunate in that AI and its applications are not new to us, which is why we are in a unique position to discuss our AI experience with those individuals and companies who are just now facing the reality of a world where AI is seemingly everywhere.
Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?
As much as I hate to say it, the AI industry needs government regulation. There is no way that the current crop of AI companies is going to police themselves — not with so much money at stake. We are now talking about investment numbers verging on a trillion dollars. In my opinion, any tech CEO who says they are going to commit to ethical deployment of AI is either stalling for time or is being utterly mendacious. Government needs to impose restrictions on AI companies in order to forestall widespread abuse of the technology — which is certain to happen without constraints.
Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?
I’ve been covering the development of AI since the 1980s. Most of the early approaches failed miserably, but some of their concepts are slowly being resurrected and applied to current systems. The most important carryovers are those that force the AI to delineate a path of reasoning, so that it can explain how it arrived at an answer or solution. Earlier AI applications, notably expert systems, were constrained by a complex “if/then” approach, but their path to an answer or decision could be easily followed. Current AIs need to be able to explain their reasoning if users are to trust their output. Applying a similar mode of tracking the path from query to answer will be a requirement for AI apps if they are to find widespread use and to reduce hallucinations and incorrect answers.
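To make the expert-system comparison concrete, here is a minimal sketch of how an “if/then” rule base can record its own reasoning path as it runs. Every rule name and fact below is invented for illustration; real expert systems were far larger, but the traceability principle is the same.

```python
from dataclasses import dataclass, field


@dataclass
class RuleEngine:
    facts: set = field(default_factory=set)
    trace: list = field(default_factory=list)  # human-readable reasoning path

    # Each rule is (name, facts required, fact concluded); contents are invented.
    RULES = [
        ("R1", {"fever", "cough"}, "possible_flu"),
        ("R2", {"possible_flu", "recent_travel"}, "recommend_test"),
    ]

    def run(self) -> None:
        # Fire rules until no new facts are derived, logging every firing.
        changed = True
        while changed:
            changed = False
            for name, needs, adds in self.RULES:
                if needs <= self.facts and adds not in self.facts:
                    self.facts.add(adds)
                    self.trace.append(f"{name}: {sorted(needs)} -> {adds}")
                    changed = True

    def explain(self) -> str:
        # The delineated reasoning path: every rule firing, in order.
        return "\n".join(self.trace)


engine = RuleEngine(facts={"fever", "cough", "recent_travel"})
engine.run()
print(engine.explain())
# R1: ['cough', 'fever'] -> possible_flu
# R2: ['possible_flu', 'recent_travel'] -> recommend_test
```

Running it prints each rule firing in order, which is exactly the kind of query-to-answer trail the interview argues modern AI products should expose.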

Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”?
1. AI companies need to be held to a set of regulations that protect end-users. This needs to be enforceable and punishable. It may sound draconian, but we have the same sort of regulation for other potentially destructive elements — such as firearms and nuclear power — and even though we think of software as being benign, we need to consider that ultimately it could be incredibly harmful.
2. AI products that generate “original” material need to be watermarked or identified as being AI-generated versus being the work of a human creator. This includes text, audio, video, and images.
3. AI products need to source their answers, replies, and solutions such that users can determine where — and how — an AI came up with a specific response. This will also allow for verification by the user and reduce false or misleading answers.
4. Any company that can create an application capable of manufacturing deepfakes has the brainpower to create applications that identify those deepfakes. There should be a system of fail-safes, or checks, that accompanies AI apps to keep them honest. It might be as simple as a redundant system, where one AI fact-checks the results of the previous AI’s reasoning, and then approves it or sends it back (a minimal sketch of such a loop follows this list). Companies would be wise to take the lead on this as a way to address the concern that they’re doing nothing to prevent the spread of misinformation.
5. Unfortunately, AI will only be as ethical as the people who use it. There is no way to guarantee that AI will be more ethical than humans… and that can be a very low bar. In order to move forward, we will have to trust, to a certain degree, that the people who use it have good intentions.
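Point 4 above describes an actual mechanism, a redundant checker, so a minimal sketch may help make it concrete. The `generate` and `fact_check` functions below are hypothetical placeholders for whatever models a real product would call; the loop also carries the cited sources that point 3 asks for.

```python
def generate(query: str, feedback: str = "") -> dict:
    # Placeholder: a real system would call a language model here, passing
    # along any reviewer feedback, and return the answer together with the
    # sources it drew on (point 3).
    return {"answer": f"Draft answer to: {query}", "sources": ["source-1"]}


def fact_check(draft: dict) -> tuple[bool, str]:
    # Placeholder: a second, independent model verifies the draft against
    # its cited sources and explains any objection. Here it trivially
    # rejects unsourced answers.
    verified = bool(draft["sources"])
    return verified, "" if verified else "No sources cited; please revise."


def answer_with_checks(query: str, max_rounds: int = 3) -> dict:
    # The redundant loop from point 4: one AI drafts, another approves
    # the draft or sends it back with feedback for revision.
    feedback = ""
    for _ in range(max_rounds):
        draft = generate(query, feedback)
        approved, feedback = fact_check(draft)
        if approved:
            return draft
    raise RuntimeError("Checker never approved an answer; surface that to the user.")


print(answer_with_checks("When was the first AI conference held?"))
```

The design choice worth noting is that the checker can only approve or reject with a reason; keeping the two roles separate is what makes the check redundant rather than the same system grading its own work.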
Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?
Since I do not expect any AI companies to rein themselves in when it comes to governance, I would hope that governance would be enacted in the same way that volatile technologies like nuclear power are overseen. Establish guidelines, make sure they are adhered to, and take action against those who do not comply.
What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?
The biggest challenge for AI since its inception has been managing expectations. That extends from overpromising the capabilities of the technology (“It will do everything you need it to!”) to trying to get consumers comfortable with the technology without force-feeding it to them. AI needs to integrate naturally into our lives in the same way that mobile devices have, and the industry has to be willing to give that time to happen.
You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂
I would very much like to see AI help people become better informed and more aware of the world around them. Specifically, this could be an AI app on a mobile device that helps students learn subjects in a way that is engaging and relatable to their personalities, a style of teaching that actually inspires the student to want to learn more. Our system of education is horrifically tradition-bound, and that needs to change if students are to love learning rather than loathe it. A personal AI teacher and companion developed specifically to aid students in learning would go a long way toward the development of practical thinking and perhaps a greater pursuit of knowledge. Ideally, even those who are not students would benefit from this kind of AI.
How can our readers follow your work online?
Thank you so much for joining us. This was very inspirational.