Guardians of AI: Hitachi’s Koichi Nagatsuka On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

It is essential to establish clear accountability and responsibility for AI outputs. This involves setting up robust feedback mechanisms where users can report issues and ensuring that there is a transparent process for addressing these concerns.
As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Koichi Nagatsuka.
Koichi Nagatsuka is a researcher in the Advanced Artificial Intelligence Innovation Center, part of the Research & Development Group of Hitachi in Japan. He holds a Master’s degree in Information Engineering from Soka University. His current focus is on artificial intelligence and natural language processing. Among his research endeavors is the development of techniques for embedding watermarks in AI-generated content, which contributes to creating a safer society with AI technologies.
Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?
I majored in information engineering during my university years, and it was around that time that deep learning emerged, marking the beginning of the third wave of AI. I have always been very interested in AI, and since childhood, I had a dream of being able to converse with computers, which led me to start researching natural language processing. After obtaining my master’s degree, I secured a research position at Hitachi, motivated by my desire to contribute to society through AI technology.
None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?
Many people have influenced my career, but I am particularly grateful to my supervisor during my university years, who played a significant role in shaping my identity as a researcher. He had been conducting research long before the AI boom and possessed a vast knowledge that extended beyond computer science to include insights from neuroscience. It is often said that research ideas emerge at the boundaries of different fields, and I believe I learned this valuable lesson from him.
Moreover, no matter how busy he was, he always made time for thorough discussions with his students. For a researcher, engaging in discussions is fundamental to developing research capabilities, and the conversations I had with him continue to nourish my work to this day.
You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
As a researcher, there are three character traits that I believe have been instrumental to my success:
- Enjoyment
This trait is essential not just for researchers but for anyone aiming for success. From my experience in various research themes since graduate school, I can say that research is a long-term endeavor, and the key to maintaining motivation lies in enjoying the research process itself. Observing successful researchers around me, I notice that they do not solely seek rewards from their results; rather, they find joy in the journey of discovery.
- Intellectual Curiosity
Good research begins with formulating compelling research questions. To create these questions, it is crucial to maintain a child-like curiosity about everything. Engaging with a variety of information and disciplines may seem like a detour, but I believe it is vital for researchers.
- Originality
Finally, establishing originality is perhaps the most important trait. Research is about constantly challenging oneself and tackling what others have not yet explored. For instance, in my own research, I developed a method for embedding multiple digital watermarks into text generated by large language models (LLMs). While digital watermarking is an established technology, applying it to LLM-generated text is still a relatively new and uncharted area. This originality not only allows me to contribute meaningfully to the field but also gives me a sense of purpose and impact as a researcher.
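To give readers a concrete flavor of how watermarking LLM-generated text can work, here is a minimal sketch of one well-known published approach, the "green-list" scheme of Kirchenbauer et al. (2023). This is an illustration only, not Nagatsuka's or Hitachi's actual multi-watermark method: at each generation step, a hash of the previous token pseudorandomly splits the vocabulary, and the "green" half gets a small logit boost, leaving a statistical fingerprint in the output.

```python
# Illustrative sketch of a "green-list" text watermark (Kirchenbauer et al., 2023).
# Assumes we can intercept the model's next-token logits; not a production system.
import hashlib
import numpy as np

def green_list(prev_token_id: int, vocab_size: int, gamma: float = 0.5) -> np.ndarray:
    """Pseudorandomly select a fraction `gamma` of the vocabulary ("green" tokens),
    seeded by the previous token so a verifier can recompute the same list."""
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.permutation(vocab_size)[: int(gamma * vocab_size)]

def watermarked_sample(logits: np.ndarray, prev_token_id: int, delta: float = 2.0) -> int:
    """Boost green-token logits by `delta` before sampling, gently biasing the
    output toward the green list without forbidding any token outright."""
    biased = logits.copy()
    biased[green_list(prev_token_id, len(logits))] += delta
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()  # softmax over the biased logits
    return int(np.random.default_rng().choice(len(logits), p=probs))
```

Because the bias is keyed to the previous token, anyone who knows the hashing scheme can recompute the green lists and test a text for the watermark without access to the model itself.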
Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?
Agents, small language models (SLMs), and open-source LLMs. Agents enable AI to support human tasks more autonomously. The miniaturization of language models allows for faster and more cost-effective use of AI. Lastly, the open-sourcing of LLMs is an important trend: until recently, LLMs were available mainly through closed platforms such as OpenAI's, but with the improved performance of open-source LLMs that anyone can use and develop locally, we can expect a so-called democratization of AI.
Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?
Hallucinations and malfunctions of generative AI, the spread of fake news and misinformation, and the lack of clarity around responsibility. Regarding hallucinations, there is still no fundamental technical solution, so it is crucial for users to carefully check AI outputs. Fake news is a different problem: unlike hallucinations, it involves the intentional spread of false information. In both cases, knowing who created a piece of content (a human or a generative AI) is an important clue in evaluating that information, and this is where technologies like digital watermarking have a role to play.
As a CEO leading an AI-driven organization, how do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?
Generative AI is not just a technology; it has the potential to bring about social and cultural impacts. Therefore, I believe it is important to deepen collaboration with all stakeholders, including not only the engineers and researchers within the company but also government officials and the general public, including users, to reach a consensus on ethics and rule formation. By doing so, we can establish a transparent AI ethics framework that is not self-serving.
Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?
The dilemma between business goals and ethical responsibility frequently arises when integrating generative AI into business operations. For example, embedding digital watermarks in content created by generative AI is one way to fulfill ethical responsibilities; however, adding watermarks can degrade the quality of the content. As a researcher, I am constantly striving to resolve such dilemmas technically, for example by exploring methods to embed watermarks efficiently without compromising the integrity of the content.
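This quality trade-off can be made tangible with the hypothetical green-list sketch shown earlier: the bias strength `delta` controls both how detectable the watermark is and how far the output distribution drifts from the unwatermarked one. Here is a toy measurement of that drift, using KL divergence as a crude proxy for quality loss:

```python
# Toy illustration (same hypothetical green-list scheme as above): stronger
# bias `delta` means easier detection but larger distortion of the model's
# next-token distribution, measured here by KL divergence.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def kl_shift(logits: np.ndarray, green: np.ndarray, delta: float) -> float:
    p = softmax(logits)            # unwatermarked distribution
    biased = logits.copy()
    biased[green] += delta
    q = softmax(biased)            # watermarked distribution
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
fake_logits = rng.normal(size=1000)      # stand-in for real model logits
green = rng.permutation(1000)[:500]
for delta in (0.5, 1.0, 2.0, 4.0):
    print(f"delta={delta}: KL drift = {kl_shift(fake_logits, green, delta):.4f}")
```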
Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?
To ensure that AI truly benefits people, it is important not only to focus on the advantages brought by AI research and development but also to consider potential risks in advance and, if necessary, communicate these risks to society. From this perspective, as a company, it is essential to establish a system for continuously checking the safety of AI products even after they have been released.
Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?
The issue of hallucinations is being continuously addressed by researchers. One recent trend is to develop models that spend more time on inference in order to arrive at correct answers. Therefore, when designing services built on generative AI, it is important not only to seek convenience but also to balance it against the reliability of the AI.

Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”?
1. Prevention of hallucinations
To prevent hallucinations, a comprehensive approach is needed, ranging from proactive measures like improving inference methods and utilizing high-quality datasets to reactive methods such as detecting outputs after hallucinations occur. Tracking content generated by AI with technologies like digital watermarking is part of this effort.
2. Transparency of origin
Businesses must clearly inform users when generative AI is integrated into their services. Additionally, from a fairness perspective, there should be mechanisms, such as digital watermarking, that allow third parties to detect AI-generated content (a minimal detection sketch follows this list).
3. Continuous development of guidelines
Establishing guidelines for developing generative AI is crucial. These guidelines should consider all stakeholders and must be continuously updated to keep pace with advancements in AI.
4. Ensuring employment
There are individuals, such as illustrators, who have already been adversely affected by the advent of generative AI. As AI continues to evolve, it is important to recognize the potential for economic disadvantages and work towards building consensus with society at large.
5. Accountability and Responsibility
It is essential to establish clear accountability and responsibility for AI outputs. This involves setting up robust feedback mechanisms where users can report issues and ensuring that there is a transparent process for addressing these concerns.
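As a companion to the embedding sketch earlier, here is a minimal illustration of how a third party might detect the hypothetical green-list watermark, again an assumption-laden sketch rather than any production system: recompute the green list for each token from its predecessor, count the hits, and compare against chance with a z-test.

```python
# Minimal detection sketch for the hypothetical green-list watermark above.
# Under the null hypothesis (unwatermarked text), green hits ~ Binomial(n, gamma);
# a large z-score suggests the watermark is present. Assumes `green_list` from
# the earlier sketch and a reasonably long token sequence.
import math

def detect_watermark(token_ids: list[int], vocab_size: int,
                     gamma: float = 0.5, z_threshold: float = 4.0) -> bool:
    hits = 0
    n = len(token_ids) - 1  # number of (previous, current) pairs tested
    for prev, cur in zip(token_ids, token_ids[1:]):
        if cur in set(green_list(prev, vocab_size, gamma)):
            hits += 1
    z = (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
    return z > z_threshold
```

Note that detection needs only the token IDs and the hashing scheme, not the model, which is what makes third-party verification of provenance plausible.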
Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?
Currently, each company sets its own guidelines for AI utilization, but I believe there is a need to advance rule-making within a larger framework. Alongside this rule-making, it is also necessary to establish technical standards.
What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?
One of the biggest challenges for AI over the next decade will be ensuring its ethical use and preventing unintended consequences as AI systems become more autonomous. The industry should prepare by establishing comprehensive ethical guidelines and regulatory frameworks that prioritize transparency, accountability, and safety in AI development and deployment.
You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂
Regardless of whether I am a person of influence or not, my desire to contribute to society through AI technology remains unwavering. Specifically, I aim to develop technologies such as digital watermarking and other innovations that minimize the negative aspects of AI, such as the spread of fake news.
How can our readers follow your work online?
https://cir.nii.ac.jp/crid/1390018971042579840
Thank you so much for joining us. This was very inspirational.