Defeating Deepfakes: Nicos Vekiarides of Attestiv on How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, and How We Can Push Back
Most of us are very impressed with the results produced by generative AI like ChatGPT, DALL-E and Midjourney. But all of us will be struggling with a huge problem in the near future. With AI able to create convincingly real images, video, and text, how will we know what is real and what is fake? See this NYT article for a recent example. This is not just a problem for the future; it is already a struggle today. Media organizations are grappling with fake people, people with AI-generated faces and AI-generated text, applying to do interviews. This problem will only get worse as AI gets more advanced. In this interview series, called “Defeating Deepfakes: How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back,” we are talking to thought leaders, business leaders, journalists, editors, and media publishers about how to identify fake text, fake images and fake video, and what all of us can do to push back against disinformation spread by deepfakes. As a part of this series we had the distinct pleasure of interviewing Nicos Vekiarides.
Nicos Vekiarides is the co-founder and CEO of Attestiv, a company providing industry-first, cloud-scale fraud protection against deepfakes and altered photos, videos and documents. Nicos has spent over 20 years as a tech innovator providing enterprise IT solutions, starting two successful companies and working for large public companies in enterprise IT and data protection. Nicos holds nine technology patents, is a graduate of MIT and CMU, and volunteers to help other aspiring entrepreneurs.
Thank you so much for joining us. Before we dive in, our readers would love to ‘get to know you’ a bit better. Can you share with us the “backstory” about how you got started in your career?
One summer, while my father was working as a schoolteacher, he managed to borrow an Apple II computer. Unlike other single-purpose electronics of the time, a computer's possibilities seemed endless with a bit of programming. That set my mind on engineering and technology. After college, I landed my first job, and even though it was with a failing company, it felt like my work had an immense impact and actually helped lead to the acquisition of our division. Once we were acquired and the euphoria wore off, it felt like my world started moving in slow motion. All I can recall over the course of a year is spending about half my time multiplayer gaming. One day, the company announced it was moving our offices and fired most of the staff except me. Even though the new office was only 20 minutes farther away, something clicked inside that said, “I can’t spend the rest of my life here.” I joined two colleagues who had been let go as part of the move, and we worked on starting a company together. That was the start of my entrepreneurial journey.
Can you share the most interesting story that occurred to you in the course of your career?
One experience that left a big impression was being asked to pitch a product we had barely started working on to a large company. At first I was petrified that I would be called out on things I hadn’t thought through. As it turned out, my nightmare became reality and the concept I presented was beaten to a pulp in front of my eyes and a sizable audience. Once I managed to pull myself together, I was able to clearly record some customer requirements. We delivered the product the customer wanted several months later. It really left an impression on me about how important early prospect interactions are when building a new product or company, and how falling on your face early on can actually be helpful.
It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?
In my early startup days, we were asked to bring equipment for an onsite demo for a prospective customer. The prior day, once we got the demo to work, our small team decided to pack it up and go out and celebrate.
We visited the customer the next day and, to our shock, nothing worked. Worse yet, we diagnosed our system and concluded it had only worked by accident. Ready to panic, one of my colleagues somehow convinced us to instead verbalize how impressive the demo would have been. Tired and with nothing to lose, we gave it a shot. Well, after 20 minutes of telling a room full of blank stares to “picture this” and “imagine that”, we lost our ability to keep a straight face and, before long, everyone in the room was laughing, presumably with us. Even though we lost the deal, we can still laugh about the crazy presentation in hindsight, and we learned valuable lessons about declaring victory too soon, preparing for demos and Murphy’s Law.
What are some of the most interesting or exciting projects you are working on now?
Generative AI and its implications for altering reality have given us important problems to solve that affect many people, industries and, quite honestly, life as we know it. Ultimately, we are excited when we can help ensure that what people are seeing or hearing is indeed real. Conversely, it feels great when you can stop a fraud or fake in its tracks.
We are now building breakthrough capabilities for detecting higher-quality fake photos and videos than ever before, using AI models that are faster and more broadly applicable. In some sense, we are gearing up for a long battle, but we are enjoying being on the right side of it.
For the benefit of our readers, can you share why you are an authority about the topic of Deepfakes?
When we founded the company in 2018, Deepfakes were still perceived as a futuristic science project with questionable output quality, the realm of expert hackers. We’ve seen the iterations and improvements all the way to today, where all it takes is a bit of typing to get high-quality output and nearly anyone can create fakes. Having had a head start working on this, we’ve developed means of detection that are generalized enough to work across nearly all AI generation frameworks.
Fast forward to today and there is a great deal of discourse on Deepfakes, from the recent Taylor Swift scandal to companies being swindled for $25M, and our years of experience position us well to take on the best of them.
Ok, thank you for that. Let’s now shift to the main parts of our interview. Let’s start with a basic set of definitions so that we are all on the same page. Can you help define what a “Deepfake” is? How is it different than a parody or satire?
A Deepfake is a generative AI fake photo or video, usually bearing someone’s likeness. Often, but not always, Deepfakes have malicious intent and are used to misrepresent someone’s appearance or opinion. Malicious examples include plugging someone’s likeness into pornography, as with the recent Taylor Swift pornographic images; expressing disinformation, as we often see in political spheres; or simply using somebody’s image or voice to perpetrate fraud through phishing or other means.
On the other hand, not all Deepfakes are bad. Creating videos using synthetic actors without hiring and paying real actors, so long as it’s done with appropriate permissions, can be mostly harmless. Enhancing or replicating videos of oneself can even be useful for marketing activities.
Parody and satire, on the other hand, are generally harmless because they are widely recognized as fake. When Deepfakes are used for satire but portrayed as real, it is easy to see how they can cross the line into malicious intent.
Can you help articulate to our readers why Deepfakes should be a serious concern right now, and why we should take measures to identify them?
So many aspects of our daily lives require photos, videos or voice, whether it’s chatting with someone online, sharing memories, answering the phone or watching the news. Many businesses likewise rely on photos and videos, whether it’s banks or insurance companies using photos and documents for underwriting assets, photos for insurance claims, or photos in online marketplaces for buyers and sellers. Clearly most of us have been trained to trust what we see and it’s a difficult process to untrain ourselves to be a bit more suspicious.
Imagine the havoc that can be wreaked on society and businesses if a lot of the digital media, effectively the currency that is relied on day-to-day, is replaced with something fake and fraudulent. The repercussions range from fraud and losses, reputational damage, potentially undermining public trust and even possibly starting wars — pretty scary stuff indeed.
Why would a person go to such lengths to create a deepfake? How exactly can malicious actors benefit from making them?
For starters, perpetrating fraud in ways that are difficult to track or catch is always attractive to a fraudster or hacker, versus more overt financial crimes. For example, so many of us see phishing attacks on a regular basis that we have conditioned ourselves to ignore emails containing bills for items we never bought and thinly veiled requests for personal information from fake sources. In spite of that, a huge amount of fraud is still perpetrated.
Now imagine what happens when a bad actor can disguise themselves in the audio or video of a trusted contact. Recently, a Hong Kong company had $25M swindled via a deepfake video impersonating its CFO. Until people condition themselves to use multiple ways to authenticate their trusted contacts, a veritable free-for-all may ensue.
On a larger scale, those disseminating disinformation can now find a forum that uses voices and images of trusted people to do their dirty work.
Can you please share with our readers a few ways to identify fake images?
Deepfake images are getting better and in many cases are imperceptible to the human eye. There are many examples of fake photos on the internet that look cartoonish, are missing shadows, have inconsistent levels of detail among elements or objects, or display poor rendering of human features, including hands or teeth. However, genAI frameworks are constantly improving and you can no longer rely on the visible elements alone. Ultimately, identifying the source of a photo and analyzing its content for believability may be the last human tool when you do not have a more elaborate AI-based analysis framework to discern what’s real.
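For readers who want to poke at a suspect photo themselves, below is a minimal sketch of error level analysis (ELA), a classical forensic heuristic rather than Attestiv's method: re-save the JPEG at a known quality and amplify the per-pixel differences, since pasted or regenerated regions often compress differently from the rest of the image. The filenames are placeholders, and modern fakes can easily pass this check, so treat it as a first-pass aid only.

```python
# Minimal error level analysis (ELA) sketch using Pillow.
# Not a reliable deepfake detector; just a classical forensic heuristic.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, resave_quality=90, out_path="ela.png"):
    original = Image.open(path).convert("RGB")

    # Re-save the image at a known JPEG quality, then reload it.
    tmp = "resaved.jpg"
    original.save(tmp, "JPEG", quality=resave_quality)
    resaved = Image.open(tmp)

    # Regions that compress differently from their surroundings (often
    # pasted or regenerated areas) stand out in the difference image.
    diff = ImageChops.difference(original, resaved)
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    diff = ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)
    diff.save(out_path)
    return out_path

if __name__ == "__main__":
    # "suspect_photo.jpg" is a placeholder filename.
    print(error_level_analysis("suspect_photo.jpg"))
```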
Similarly, can you please share with our readers a few ways to identify fake audio?
Like images, fake audio can sometimes present clues, such as unnatural intonations, choppiness or other content anomalies. However, the best way to identify if the audio is real or fake may be to verify the audio with the source whenever possible — effectively, performing a multi-factor authentication. If the audio is a news item, you can also verify by checking across multiple other trusted news sources.
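As a purely illustrative aid (not a detector, and not a method specific to Attestiv), a spectrogram can make some of those audio anomalies easier to spot, such as abrupt cuts or oddly uniform energy bands in synthetic speech. The sketch below assumes a local WAV file with a placeholder name.

```python
# Plot a spectrogram of a suspect clip; a visual aid only.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("suspect_clip.wav")  # placeholder filename
if samples.ndim > 1:                # mix stereo down to mono
    samples = samples.mean(axis=1)

freqs, times, power = spectrogram(samples, fs=rate)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="gouraud")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of suspect clip")
plt.colorbar(label="Power (dB)")
plt.show()
```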
Next, can you please share with our readers a few ways to identify fake text?
Typically, fake text that originates from ChatGPT and similar frameworks tends to draw from multiple internet sources, and the verbiage is often directly lifted from sites such as Wikipedia. Often the content comes without citation, raising copyright-related issues. As such, the content is often clinical-sounding, but because text can be smoothed over by humans or put into someone’s personal tone using AI, detection can be challenging to the point that even detection frameworks are rendered useless.
I’ve heard from many in education that the only way to get a verified original essay is to have kids complete it in class in front of an instructor. Some of the solutions suggested for this space include regulating the makers of text generators and watermarking their output, but with open source, the door is always open for bad actors.
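One common heuristic that automated text detectors rely on is perplexity: score the passage with a language model and flag text the model finds unusually predictable. The sketch below, which assumes the Hugging Face transformers library and the public gpt2 checkpoint, shows the idea; as noted above, this is weak evidence at best and easy to defeat.

```python
# Perplexity scoring as a rough (and easily fooled) AI-text heuristic.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Cross-entropy of the text under the model, exponentiated.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

# Lower, very uniform perplexity across a document is (weak) evidence
# of machine generation; human prose tends to be more surprising.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```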
Finally, can you please share with our readers a few ways to identify fake video?
Fake video has traditionally been plagued by jerky movements, poor audio synchronization, eye movements that are unnatural or missing, and inconsistent lighting and shadows. Again, the technology of deepfake videos is moving at a rapid pace and a lot of the giveaways of yesterday and today are not guaranteed to be present in the near future. Without an AI machine analysis framework, analyzing the context and content of the video for believability is often the next best thing.
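To illustrate the kind of temporal inconsistency mentioned above, here is a crude sketch (not a production detector and not Attestiv's approach) that samples frames from a suspect clip with OpenCV and flags cases where a stock face detector flickers between finding and losing a face, which can accompany rendering glitches. The filename is a placeholder.

```python
# Toy frame-level consistency check on a suspect video using OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("suspect_video.mp4")   # placeholder filename
face_counts = []
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % 10 == 0:                 # sample every 10th frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        face_counts.append(len(faces))
    frame_index += 1
cap.release()

# A real face should be detected fairly consistently; frequent flicker
# between detections can indicate artifacts worth a closer look.
flickers = sum(1 for a, b in zip(face_counts, face_counts[1:]) if a != b)
print(f"sampled frames: {len(face_counts)}, detection flickers: {flickers}")
```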
How can the public neutralize the threat posed by deepfakes? Is there anything we can do to push back?
There are a few suggestions floating around, though none solves the problem entirely. These include:
- Supporting legislative action to ban deepfakes and their malicious use by regulating the entities that produce them. While this is a valiant cause, it may simply be too late: with generative AI frameworks readily available in open source and from commercial entities around the world, it would take a monumental effort to get this proverbial cat back in the bag at a universal level.
- Creating and using a set of standards for media authenticity that all media creators follow. Indeed, this is being done by organizations such as C2PA, but it requires all media creators to play by the rules, which may take a long time to standardize and adopt, or may never take effect universally (see the sketch below).
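For instance, here is a hedged sketch of checking whether a downloaded file carries C2PA provenance metadata. It assumes the open-source c2patool command-line utility is installed on your PATH; its output format and exit behavior vary by version, and the filename is a placeholder.

```python
# Check a file for a C2PA provenance manifest via the c2patool CLI
# (assumed installed; behavior differs across versions).
import json
import subprocess

def read_c2pa_manifest(path: str):
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        return None                       # no manifest found, or tool error
    try:
        return json.loads(result.stdout)  # manifest report as JSON
    except json.JSONDecodeError:
        return None

manifest = read_c2pa_manifest("downloaded_image.jpg")  # placeholder filename
print("C2PA provenance found" if manifest else "No C2PA provenance found")
```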
For the general public, the best action may be awareness and being extra cautious when greeted by suspicious photos, videos, audio or text. Verify communications with multi-factor authentication and keep your eyes open for telltale signs of fakes. Also realize that if some media doesn’t seem logical or believable, it may not be real.
Finally, Deepfake detection frameworks that use both rules-based and AI-based analysis for vetting media are becoming more available and affordable. By integrating these frameworks into existing systems, a lot of problems can be eliminated behind the scenes. As a provider of these types of solutions, I’ll put in my plug that now might be a great time to consider using them.
This is the signature question we ask in most of our interviews. Can you share your “5 Things I Wish Someone Told Me When I First Started” and why?
1. Your network is the key to your career. Network early and network often — it is so much easier finding opportunities through connections and you learn so much faster from talking to people and peers. For me, I became a believer when I saw resume building go from a necessity to a formality once I built a trusted network, and the continuous learning from peers, former colleagues and new connections has been priceless.
2. You learn from your failures. Don’t spend too much time lamenting your failures. Instead take them as constructive advice, because as you climb the ladder of success, there are fewer and fewer opportunities to receive good and honest advice. As I described in my experience pitching a product that was nowhere near fully baked, falling on your face in front of a prospective customer might be the best thing that could happen to an entrepreneur, as it helps steer your course toward a better solution.
3. Always be authentic. That should be an easy one, but sometimes while learning from others, you may inadvertently go too far modeling yourself after others. I recall when I first enrolled in formal management training, I took an evaluation and answered questions based on what I thought my current and former managers would say and do. To my surprise, I scored lower than expected. Then I retook the evaluation and answered very authentically and scored much higher. It was a bit of a wake-up call that there is very little that is robotic about being a leader and success in one context is not necessarily a formula that can be reused elsewhere.
4. Build a strong foundation because everything that can go wrong eventually might. By foundation, I mean intellectual, physical, social and spiritual, because you never know what you might encounter. In a prior company, I faced a company downturn, the death of a parent, elder care issues for my other parent and my wife being diagnosed with a serious illness, all at the same time. Although eventually the company was successful and we somehow pulled through, it was not an easy path and required a lot more strength and support than I ever thought possible.
5. Don’t take yourself too seriously. Ultimately, nothing works out quite the way we anticipate so we either have to work with the opportunity that remains, walk away learning something, or have a good laugh. I guess I already talked about my demo of a product that just wouldn’t work in front of a room full of blank stares.
You are a person of enormous influence. If you could start a movement that would bring the most amount of good to the greatest amount of people, what would that be? You never know what your idea can trigger. 🙂
Having spent time in both startup and corporate environments, I’ve come to the conclusion that technology can be developed much faster in startup environments. However, it is most often the case that startups simply don’t succeed commercially or never reach the point of raising necessary capital, even with sound ideas and products. With that in mind, it would be great to see more corporate-sponsored incubators, where ideas relevant to the corporations can either be taken in by the corporations or launched as their own companies. This happens in smaller circles, but it would be great to see it emerge as a viable alternative to standard VC-backed startups, one that gives successful concepts a much better chance of succeeding, allows large corporations to be more nimble, and gives entrepreneurs a broader range of opportunities.
How can our readers further follow your work online?
Visit www.attestiv.com, or follow me or Attestiv Inc. on LinkedIn.
Thank you so much for the time you spent on this. We greatly appreciate it and wish you continued success!
Thank you!