Defeating Deepfakes: Jon Geater of DataTrails on How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back
Most of us are very impressed with the results produced by generative AI like ChatGPT, DALL-E and Midjourney. Their results are indeed very impressive. But all of us will be struggling with a huge problem in the near future. With the ability for AI to create convincingly real images, video, and text, how will we know what is real and what is fake, what is reality and what is not reality? See this NYT article for a recent example. This is not just a problem for the future; it is already a struggle today. Media organizations are struggling with a problem of fake people, people with AI-generated faces and AI-generated text, applying to do interviews. This problem will only get worse as AI gets more advanced. In this interview series, called “Defeating Deepfakes: How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back,” we are talking to thought leaders, business leaders, journalists, editors, and media publishers about how to identify fake text, fake images and fake video, and what all of us can do to push back against disinformation spread by deepfakes. As a part of this series we had the distinct pleasure of interviewing Jon Geater.
Jon Geater is co-founder and chief product officer at DataTrails. Jon held senior global technical roles at Thales eSecurity, Trustonic, ARM and nCipher. Jon is a serial leader of open standards at the board-committee level. He serves as co-chair of the Internet Engineering Task Force (IETF) Supply Chain Integrity, Transparency and Trust (SCITT) Working Group.
Thank you so much for joining us. Before we dive in, our readers would love to ‘get to know you’ a bit better. Can you share with us the “backstory” about how you got started in your career?
I started coding professionally (or at least I got paid for writing code!) at 14, and by the time I finished university I had had contracts in a wide range of areas: medical systems, digitizing cemetery records, early mobile media apps. So, I’ve never been afraid of change and always enjoy solving cutting-edge problems. After university I was really looking for a challenge — I wanted to work on hard things and learn from smart people — so I interviewed for a job at nCipher and started working on embedded cryptography and technologies that we would now call ‘confidential computing’.
After that it was just a matter of driving forward. With cryptography proliferating, and everything miniaturizing, I went to be CTO of the ARM Secure Services Division, advancing things like TrustZone technology and embedded smartphone security, and from there created Trustonic, a spin-out joint venture of ARM, Gemalto and G&D which took this excellent but very raw technology and brought it to market in a consumable way.
After a few years of that and finding a stable base for the company, I went to Thales to be CTO of the eSecurity business line where, along with the main business of commercial and banking crypto, I got to see the cybersecurity aspects of exciting large infrastructure such as smart electricity grids and self-driving vehicle networks.
That led me to spot a gap in traditional trust systems that don’t properly service a digital-first world, and so I founded DataTrails to solve the issue of provenance and long-term integrity wherever data is used in powering decisions.
Can you share the most interesting story that occurred to you in the course of your career?
I’m lucky to have had a lot of interesting and formative experiences in my career, but the one I’ll pick is from very early in my leadership career.
We had been working with a company in a partnership for a while; they were building some really interesting and useful technology based on our platform, and over time the relationship developed to a point where we were their big hope for a route to market.
Unfortunately they were just too early. It didn’t take off and they ran out of money, and the board sent in a new team to turn around the business (which involved letting go of all but one of the technical team who I was working with). On the fateful day that the final decision was made I went into the office to have a series of 1:1 meetings with the team that I liked and admired, and had to tell them we couldn’t continue and they would need to find new employment.
One lady in the team had taken an interest in my family (we had children of a similar age) and always liked to make social chat before we got down to business. That day was no different, and because it was close to a special occasion, she beamingly told me that she had been looking forward to my visit because she’d gotten me a gift, and presented me with a neat package to take back to my family. Then I had to talk to her about how she wasn’t getting a paycheck next month.
While reorganizations in business are necessary from time to time, that early experience really taught me a lot about human leadership and how to run a good business.
It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?
I wouldn’t say it’s funny, but one thing that really stuck with me was when I was beginning to climb the corporate ladder and was at my first commercial partner conference. I was in that interesting limbo land of moving from being a full-time coder to a leader, and I was trusted with representing the company at an important conference hosted by one of our biggest partners.
Unfortunately, as luck would have it, we had a huge bug in my product that led to an effective outage, so I was working with the team back at base to get it fixed. During the conference I had a brainwave and realized how to fix the issue, so I opened up my laptop (discreetly, I thought), coded the fix, and ran the tests. I felt so good about myself. But about 20 minutes later, when the conference session was over, the CEO of the company came over and told me in no uncertain terms that I was in a partner conference, and I could either be completely engaged, or not at all. Typing away on a laptop while others were presenting showed no respect for the context I was in.
And he was dead right. I’ve never done anything like that again. Even though what I achieved was great, and in the interests of the company, you have to be able to let go of individual pursuits and be present with the people and process you’re in. I should have waited or, better yet, delegated, and that realization really kick-started my approach to delegation and leadership.
It’s a lesson that I think helps today, as remote work takes over.
What are some of the most interesting or exciting projects you are working on now?
Too many to count, honestly. We work with digital trust in carbon credit accounting, software supply chain, nuclear materials tracking, auctioning of historic artifacts, media authenticity, AI safety…
And in the standards community I’m engaged with, there are use cases from cybersecurity to cold chains to citizen voting.
The exciting thing about building a new standard for digital trust through provenance and transparency is that it has the chance to make almost every corner of industry safer and better.
For the benefit of our readers, can you share why you are an authority about the topic of Deepfakes?
I wouldn’t claim to be an authority on deepfakes per se, but I am an authority on information security and the safe distribution of digital materials in general, be they pictures, videos, software or business documents.
A large part of my work focuses on providing assurance that the artifacts reaching you through your ‘digital supply chain’ are authentic, come from a trustworthy and accountable source, and have not been tampered with while you weren’t looking. Something you might call “data provenance”. As the world in general becomes more and more AI-generated these provenance and integrity-based techniques are going to be vital in keeping the honest world turning.
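For the technically curious, here is a minimal sketch in Python of the kind of check a provenance system performs: hash the artifact you received and verify the publisher’s signature over that hash. The names are illustrative placeholders, not DataTrails’ actual API, and it assumes the widely used ‘cryptography’ package.

```python
# Illustrative provenance check: verify that an artifact's contents match
# what its publisher signed. Hypothetical sketch, not DataTrails' API.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_artifact(path: str, signature: bytes,
                    publisher_key: Ed25519PublicKey) -> bool:
    """Return True if the file's contents match what the publisher signed."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    try:
        # The publisher signed the SHA-256 digest of the original artifact;
        # any tampering in transit changes the digest and fails this check.
        publisher_key.verify(signature, digest.digest())
        return True
    except InvalidSignature:
        return False
```

The crucial property is that trust attaches to the publisher’s key rather than to the channel the file arrived through, so the same check works wherever the artifact travels.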
Let’s now shift to the main parts of our interview. Let’s start with a basic set of definitions so that we are all on the same page. Can you help define what a “Deepfake” is? How is it different than a parody or satire?
This is a critical point: just because something is “not real” doesn’t make it “inauthentic” or “bad”. Technology has no morality, as the saying goes, and that’s just as true with GenAI as any other. The risk comes in how the technology is used, and how images and videos are presented, and that’s exactly the difference between a simple parody and a dangerous deepfake.
Satirical TV shows have done skits for years that look like a politician making questionable statements, or reporting on news events that never happened, and in some cases viewers tuning in partway through might briefly be convinced that it’s real. But that impression quickly dissipates, because in the end nobody is trying to convince you that it’s actually true.
Now, with GenAI enabling very realistic generation, it’s possible to make similar parodies cheaply, without the need for wigs or a television studio. Anyone can do it from their bedroom and, just the same, anyone seeing it might briefly believe it’s real. And so the only difference between a deepfake and any other content is how it’s presented: does the person delivering the content to you tell you it’s real, to manipulate you, or do they clearly label it as art, to entertain you?
Can you help articulate to our readers why Deepfakes should be a serious concern right now, and why we should take measures to identify them?
Deepfakes are a concern because they are so convincing. Without giving it much thought one might see a video on social media and simply believe it.
Of course, we’ve had fake photos for as long as we’ve had any photos. This isn’t a new problem. But the power of the GenAI tools that have sprung up in the last year makes it incredibly easy to create vast amounts of deepfake material quickly and cheaply. And it’s that huge volume of fake data, along with the ability to produce very specific, tailored versions, that makes the threat so serious now.
Why would a person go to such lengths to create a deepfake? How exactly can malicious actors benefit from making them?
Deepfakes are not an end in themselves: they’re just tools in other activities. That means there are lots of different motives and benefits depending on what people are trying to do.
A few examples:
- Political actors create convincing videos of rivals or public figures saying or doing things to discredit their position (adding heft to the existing ‘fake news’ agenda)
- Financial criminals create convincing voice clones of individuals to hack into telephone banking, get past an office front desk, or convince a parent to hand over money by impersonating a child (scaling up opportunities for existing phone scammers)
- Jealous ex-partners create realistic pornographic videos of their ex and send them to friends, family, etc., to harm and humiliate them (an escalation of the existing phenomenon of ‘revenge porn’)
How can the public neutralize the threat posed by deepfakes? Is there anything we can do to push back?
If we all take just a little responsibility and adopt a conscious approach to data authenticity, it will make it much harder for the scammers and ne’er-do-wells to disrupt the business of honest digital citizens. Insist that the apps and services you use add content credentials that can be verified as easily as the padlock in your web browser.
Unfortunately, it’s a fact of life that people who want to be bad, will be bad. We can’t completely prevent that. But if applications build in a stack of protections including AI detection and provenance and authenticity verification we can regain the upper hand, sort the good data from the bad, and hugely reduce the infiltration of bad data into good environments.
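As a rough illustration of what such a stack of protections could look like inside an application, here is a short Python sketch. Every name in it is a hypothetical placeholder, and the credential probe is deliberately naive; a real implementation would cryptographically verify an embedded, signed manifest such as a C2PA-style content credential.

```python
# Illustrative layered media-verification pipeline: run several independent
# checks (provenance, content credentials, AI detection) and report all
# results, rather than trusting any single signal. Placeholder names only.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CheckResult:
    check_name: str
    passed: bool
    detail: str


# Each check inspects the raw media bytes and reports independently.
Check = Callable[[bytes], CheckResult]


def assess_media(data: bytes, checks: List[Check]) -> List[CheckResult]:
    """Run every configured check; policy (block/warn/allow) is the caller's."""
    return [check(data) for check in checks]


def has_content_credential(data: bytes) -> CheckResult:
    # Naive placeholder probe; real code would parse and verify the manifest.
    found = b"c2pa" in data
    return CheckResult("content-credential", found,
                       "embedded manifest present" if found else "no manifest")
```

The design point is the layering: no single check is decisive on its own, but an application that runs them all can sort good data from bad far more reliably than any one detector.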
This is the signature question we ask in most of our interviews. Can you share your “5 Things I Wish Someone Told Me When I First Started” and why? Please share a story or an example for each.
1. It’s not about what you do: it’s about the impact it has.
Make sure that everything you do has a clear outcome attached to it. Being busy for its own sake is often counter-productive.
So often it’s easy to get drawn into a regular flow of work and meetings and feeling busy doing a lot, only to be disappointed that people aren’t appreciating “how hard you work”. Turn that resentment into a positive: work on changing what needs to be changed, not just turning the wheel.
2. The world cannot move as fast as you want it to.
Systemic and societal change takes time. Just because something is ‘obviously better’ doesn’t mean it can be adopted in an instant. I’ve worked on several extremely important and successful technologies in my time, but the time from conception to first version to adoption is way longer than you’d hope or think. With my work on hardware security modules (HSMs) or Trusted Execution Environments (TEEs) or Key Management Interoperability Protocol (KMIP) or now decentralized trust systems, all the innovation happens super quickly at the beginning, but then the road to use and broad adoption of those essentially unchanged technologies takes a decade or more.
Innovation, ironically, requires a lot of patience.
3. It’s not (always) clever to be (too) clever.
Innovation and progress require insight and breakthroughs. But when it comes time to market and position your new thing, or make competitive comparisons, or build an ecosystem, you have to strip it back to the core essentials.
I used to think that if only people would internalize and understand all of the exquisite technical details that they would see the light and do things in a different, better way. But that’s never the case. And if you step outside yourself for a moment and consider how you interact with any field other than your own, you’ll see that the best marketing doesn’t need to do any educating. You win by being clear, not (over-)clever.
4. Fine words butter no parsnips.
It’s amazing how many people — very famous and credible people — will look at your new idea or product and tell you how amazing it is and how you need to go do it right now. And it’s amazing how few of those people actually put their hands in their pockets and pay for it. Encouragement and optimism are essential for innovators but don’t get too carried away with praise words and love bombs. You’ll always find loads of people who love your idea, but it’s more important to find people who need it.
5. Understand the true meaning of work-life balance.
There is a caricature of the modern workforce that paints a picture of extreme self-analysis, pathological self-care and regular prophylactic days off. I’m not talking about that. But time spent on one’s personal health and relationships genuinely improves one’s performance in the workplace too. I work hard, and I work a lot of hours and weekends, but I can only do that effectively because I also cultivate my relationships and make time to go rowing and go to the gym and feel good about myself.
In this line of work, it’s easy to be glued to your keyboard writing one more article or fixing one more bug or calling one more investor, but you have to put boundaries on it or you end up flapping and ineffective.
If you could start a movement that would bring the most amount of good to the greatest amount of people, what would that be? You never know what your idea can trigger.
Movements in digital literacy are really important, I think. I’m lucky to be in that group of people who are old enough to remember a time before the internet, but young enough to have embraced it and made a career in tech. Unfortunately, as big tech takes more and more center stage in our lives and hidden tech runs more and more of our infrastructure and institutions, consumer tech has dumbed down all the interfaces and made the inner workings opaque to the people it serves.
This last point seems noble: usability is king, products should be intuitive. But it also means that it’s easy to slide into a situation where we work for the tech rather than the tech working for us. Ironically it has led to an impression that tech is complicated and computers are hard. They don’t have to be!
I really hate waste and inefficiency, and a lot of what I do is targeted at doing simple things better, which in theory digital technology offers. But simple things touch a lot of people, and so they require the support of a lot of people, and that’s the point where apathy or fear often take over and the status quo prevails. If we can demystify digital concepts and systems thinking, moving people first past fear, then to a proactive demand for better services, we’ll all be much better off.
How can our readers further follow your work online?
I’m really bad at personal social media, but following DataTrails on LinkedIn will give you a lot of updates. Those with a deep tech bent are more than welcome to follow the IETF Supply Chain Integrity, Transparency and Trust (SCITT) work at https://scitt.io/
Thank you so much for the time you spent on this. We greatly appreciate it and wish you continued success!