Guardians of AI: Kathryn Marinaro of argodesign On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

An Interview With Yitzi Weiner


As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Kathryn Marinaro.

Kathryn Marinaro is an author and award-winning Creative Director who leads teams to imagine and design experiences and software at argodesign. She is the author of Prototyping for Designers (O’Reilly) and an international speaker, presenting talks and workshops on design thinking, AI, and prototyping at conferences including SXSW, UX Lisbon, O’Reilly Design, and MakerCon SF. Kathryn has over ten years of experience designing with AI, from creating experiences with IBM’s Watson, a large set of AI algorithms, to designing chat-based experiences leveraging emerging LLM technologies. Kathryn has been featured in articles in Fast Company, Communication Arts Magazine, Architect Magazine, ArtInfo, Make Magazine, and the Visual Arts Journal. See her personal website for more detailed information.

“The best AI is the least AI possible to get the job done.” — Kathryn Marinaro

Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

I’ve been in the design industry in some capacity for over 20 years, first studying and interning in architecture, then working as a graphic designer. My background in art, work in graphic design, and interest in technology organically grew and converged into a skillset and curiosity that led me to work on web-based and digital products. To better pursue those interests, I went back to school to study holistic product design in the MFA in Products of Design program at the School of Visual Arts.

The work I’ve done with or on artificial intelligence covers a wide spectrum, from designing APIs and developer experiences around IBM’s Watson to creating bespoke LLMs for financial tech clients at argodesign. I’ve also led teams to design interaction models for augmented reality for Magic Leap, designed fleet management software and training processes for humanoid robotics company Apptronik, and led a massive effort to digitally transform Salesforce’s website from a marketing website into a digital experience built on its own products.

None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?

I’ve been lucky to have many mentors and colleagues who have helped me throughout my career, but two people really stand out as having shaped my direction. First, my dad was a huge influence on my path. He was a structural engineer who had a degree in architecture, enabling him to speak the language of both engineering and architecture, which resulted in better, more beautiful buildings when he collaborated with architecture firms. I feel the same strength within myself: the ability to speak design, business, and technology in order to collaborate and find ways to create the best user experiences, ones that deliver business value and are actually feasible to build.

Another influence was Allan Chochinov, the chair of the MFA in Products of Design at SVA. His point of view that we’re not designing products, we’re designing consequences, was deeply impactful. Seeing him pioneer a new program, building it from the ground up — and his leadership style along the way — was inspirational to my path as a designer and a leader.

You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?

The three traits I currently value the most are curiosity, openness, and intuition. The first two have allowed me to reduce the impact of my ego on collaborations, being more open to learning new mental models and working styles. Instead of getting frustrated at client pivots, I work to reframe and be curious, being open to new ways of thinking and approaching the work.

Intuition has always been a deep part of my work and life — understanding what the right path for me looks like and knowing when to take risks. I’ve had a few big “hell yes’s” in my life, the most recent being a trip to Antarctica, and I have never been disappointed when I embrace that intuition and take a leap, whether personally or professionally.

Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?

I’m excited and passionate about how AI will take on repetitive and potentially dangerous tasks for humans. This unburdening will give us back the one resource we can’t create more of: time. It will allow us to reinvest human time and attention in creative endeavors that we haven’t even thought of yet. And this reinvestment will lead to unforeseen leaps in art, community, and quality of life for all humankind.

Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?

Unfortunately, that utopian vision can only come to life if the right leaders are embracing and leading the charge for how best to make and manage AI. One main concern I have with the current trajectory of AI is the appropriation of content, data, and interactions for training purposes. There needs to be a safe, opt-in way to create training corpuses without illegally scraping content from the internet.

Another concern is the widening wealth gap that will continue if AI usage isn’t equitable across all areas and populations. Similar to the internet, AI could be treated as a utility that all people have a right to use. If only the most wealthy individuals are creating, training, and profiting from AI, then this gap will continue to grow and, rather than bringing humanity together, AI will drive us further apart.

The final concern is the energy usage that’s required to complete AI-based tasks. We’re already seeing companies invest in nuclear energy in order to supply their AI products, and depending on who’s investing, this can either drive the growth of renewable or lower-emission energy, or it can continue to exacerbate the fossil fuel-driven climate issues that we’re currently dealing with around the world.

As a leader within an AI-driven organization, how do you embed ethical principles into your company’s overall vision and long-term strategy?

My role as a creative director at argodesign, a product design agency, means that I’m working with AI in two ways: as a toolset to help our teams complete work, and as a strategy for our clients to help them become AI-driven organizations, if applicable. Internally, we have ground rules for which AI tools we use and what types of content we put into those tools. We don’t input client data or proprietary information into AI tools. We do use them quite heavily for building frameworks, strategic content, and imagery for our work.

When we work with clients, many are rushing to inject AI into their products. The most important ethical principle for AI is to honestly assess whether there is a compelling and valuable enough reason to incorporate AI at all. The next step is to ensure that the right type and amount of AI is used. Not every interface needs visible AI or chatbots. Sometimes a “less sexy” form of artificial intelligence, like a machine learning algorithm or simple analytics, is enough to meet the need. The best AI is the least AI possible to get the job done.

What specific decisions have you made to help ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?

When I think about safe, transparent, and responsible AI, I tend to focus on traceability, cited sources, and feedback loops. Traceability is being able to “look into the box” and see how the AI came to its conclusions, tracing its “thinking” from point to point. Another aspect of traceability is knowing the corpus, or original content and data, that the AI is trained on. An AI algorithm is only as good as its training data and training process. Be wary of “amazing AI” that is cheap or free: the company has either scraped content to get training data inexpensively, potentially using lower-quality data, or it is gathering your interactions and data for free to further train its own proprietary algorithm.

In addition to traceability, AI technologies should cite their sources. Just like in a research paper, citing sources means that the AI provides the references it used to produce its output. It’s another good way for a human in the loop to double-check the output and ensure that it’s correct, in areas where there is a factual answer.
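As a concrete illustration, here is a minimal sketch of what a source-citing output might look like as a simple data structure. The names and fields are hypothetical, not any particular product’s API; the point is that every claim ships with references a human can check.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    """A reference the AI drew on, surfaced for human verification."""
    title: str
    url: str
    excerpt: str  # the passage that supports the claim

@dataclass
class Answer:
    """An AI output that carries its citations alongside the text."""
    text: str
    sources: list[Source] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # A human in the loop can only double-check claims that cite something.
        return len(self.sources) > 0

# Example: an answer a reviewer can trace back to its reference.
answer = Answer(
    text="The report projects a 12% rise in demand next quarter.",
    sources=[Source(
        title="Q3 market report",
        url="https://example.com/q3-report",
        excerpt="...demand is projected to rise 12% in the next quarter...",
    )],
)
assert answer.is_verifiable()
```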

Which leads to my final point: ensuring that the right feedback loops are in place to improve both the AI itself and the outputs it creates. There are many ways to design a feedback loop, and we’re starting to see more of them pop up within AI tools. When you’re given two outputs to the same prompt and asked which is better, you’re training the system to improve itself. This can be done both for qualitative outputs, like images, and for binary outputs that are either correct or incorrect. It reduces hallucinations over the long term and improves the experience for users along the way.
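A minimal sketch of that pairwise “which is better?” loop follows, assuming judgments are appended to a JSONL log that could later feed fine-tuning or a reward model. The function and field names here are hypothetical, not a real product’s API.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PreferenceRecord:
    """One 'which output is better?' judgment from a user."""
    prompt: str
    output_a: str
    output_b: str
    preferred: str  # "a" or "b", chosen by the user
    timestamp: str

def record_preference(prompt: str, output_a: str, output_b: str,
                      preferred: str, log_path: str = "preferences.jsonl") -> None:
    """Append one judgment to a JSONL log for later training use."""
    if preferred not in ("a", "b"):
        raise ValueError("preferred must be 'a' or 'b'")
    rec = PreferenceRecord(prompt, output_a, output_b, preferred,
                           datetime.now(timezone.utc).isoformat())
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")

# Example: a user saw two drafts for the same prompt and preferred the second.
record_preference(
    prompt="Summarize this meeting in two sentences.",
    output_a="Draft one...",
    output_b="Draft two...",
    preferred="b",
)
```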

Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?

Fortunately, I haven’t yet come across a challenging ethical dilemma due to the rigorous policies and principles that our teams and I follow throughout our process.

Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?

At the end of the day, AI is just a set of algorithms and code that’s created by people. I’m not concerned about AI itself, but about who created it, who’s using it, and to what end. Almost any tool or type of tool can be used to “harm humans,” so the better question is how to ensure that humans stay safe using AI. Part of the answer is equitable access to AI and ensuring that there’s diversity both in the training data and among the programmers who are creating the AI tools.

Unfortunately, with technologies like this, it almost always requires government regulation to force companies to create more ethical approaches instead of the fastest, most profitable ones. Similar to the safety regulations around nuclear power plants or the standards created for the internet, a larger consortium will need to come together to create the guardrails necessary to ensure the safe creation and use of AI over the long term.

Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?

As mentioned previously, the quality and types of training corpuses really drive the quality of outputs. We can work towards having quality standards for corpuses to ensure a certain amount of diversity, quality, and compensation for the data that’s used. Feedback loops are also essential to identify and correct hallucinations, but they require users who are incentivized to double-check results and give that feedback. Companies should find the right ways to incorporate those interactions into their AI technologies.

Here is the primary question of our discussion. Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”? Please share a story or an example for each.

1. Purpose: have a valuable, specific reason why this particular AI system needs to exist, and ensure that a less resource-intensive version of AI (analytics, machine learning, etc.) can’t solve the problem instead.

2. Training corpus: ensure that training data is sourced ethically, respecting the ownership of such data, offering opt-in (rather than opt-out) data gathering, or paying for data. Also assess the corpus for a meaningful amount of diversity to reduce substandard outputs.

3. Equitability: ensure that access to the new technologies is not prohibitive and that they can be used by a wide variety of people. Set a reasonable price for that access so that it doesn’t cater only to the wealthy. Provide varying levels of human training so that wider audiences can utilize the technologies. As AI unburdens people of certain jobs or roles, provide or create retraining programs to ensure that those people are taken care of.

4. Traceability: incorporate traceability into the AI’s process to allow for a general understanding of its “thought process.” It should cite sources and allow new, high-quality data to be added as references as needed.

5. Feedback loops: ensure that appropriate feedback loops are in place to gather user input on the quality of the outputs, the format and process of getting to those outputs, and the veracity of the outputs. Allow for reporting hallucinations in order to improve the system overall.

Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?

I hope to see companies self-organize and create unified AI guardrails and safety standards. But for that to happen, they will need to be externally incentivized through government regulation to create such controls, which should include the use of diverse, high-quality corpuses and compensation for their use. Other regulations and global standards should be created to ensure the ethical and equitable use of AI — specifically, how outputs are identified as “created by AI.” This may include watermarking outputs or incorporating metadata to ensure traceability and to help combat mis- and disinformation globally.
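As one lightweight illustration of that metadata approach, the sketch below tags a PNG as AI-generated using Pillow’s text chunks. The key names are hypothetical, and plain text chunks are easy to strip, so a real standard (for example, C2PA’s signed manifests) would go further; this only shows the basic idea of traceable outputs.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Embed provenance fields in a PNG's metadata so downstream tools can
    identify the image as AI-generated. Key names here are illustrative;
    signed standards like C2PA make this kind of tag tamper-evident."""
    image = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # simple flag for filters and feeds
    meta.add_text("generator", generator)   # which model produced the image
    image.save(out_path, pnginfo=meta)

# Usage (assuming output.png exists):
# tag_as_ai_generated("output.png", "tagged.png", "example-model-v1")
# Reading the tags back:
# Image.open("tagged.png").text -> {"ai_generated": "true", "generator": "..."}
```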

What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?

One of the biggest challenges in the field of AI is the current “gold rush” phase we’re in. Companies are sprinting ahead with creating newer and faster models, without consideration for unintended consequences and their impact on global politics, society, and climate change. There’s very little strategic thought going into embedding a proper amount of constraints and safety measures into this technology. It feels more like an arms race than a thoughtful application of new technology.

There will always be an aspect of pushing forward for progress’s sake, but, to quote the chaotician Ian Malcolm in Jurassic Park, “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” If we don’t take a moment to reflect on the potential negative impacts, and force that reflection on companies that are seeking profits, we risk building a dystopia of epic proportions. We may never get to that utopian vision of humans being freed from hardship and unlocking a golden era of creativity and quality of life. Instead of AI serving humankind, humans will serve AI.

You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂

If I could start a movement, it would be a global community of open studio art nights to enable and empower everyone to make art on a regular basis! We could all use more hands-on making and less screen time, and the best way to do that is in community. Once a month I host a women’s art night in my art studio, where we listen to music, talk, and make art. I always end the night feeling energized, and if everyone could feel that, I think the world would be a better place.

How can our readers follow your work online?

You can find me on LinkedIn (https://www.linkedin.com/in/kathrynmarinaro) or on my website, kathrynmarinaro.com.

For more of the creativity that AI has freed me up to do, and to see my art nights, check out @kathrynmakes on Instagram!

Thank you so much for joining us. This was very inspirational.

