Are we ready for the dark side of generative AI?

The dark side of generative AI is already here

Generative AI is like the wild west right now: full of promise and peril. The potential for innovation seems limitless, but so are the risks, and we’re starting to see just how messy things can get when this technology falls into the wrong hands or operates without oversight.

Let’s talk about some recent developments that paint a pretty unsettling picture of where we’re headed if we’re not careful.

Grok: Power Without Restraint

This week, Grok, an AI image generator developed by xAI, hit the market with a bang. It’s incredibly powerful, but there’s one big problem: it comes with zero restrictions. I’m not talking about just bending the rules here; Grok has no rules. No content filters, no ethical boundaries, nothing to stop someone from creating the most damaging content imaginable. And indeed people have, from deepfakes of Taylor Swift to Bill Gates doing lines. The Verge did a piece with some examples, and plenty of others are circulating online.

The issue with Grok isn’t just that it’s powerful. It’s that it’s too powerful for its own good. When anyone can generate hyper-realistic images with no oversight, you’re asking for trouble. Picture a world where fake news isn’t just text but a full-blown visual experience. Want to create a deepfake of a public figure doing something incriminating? Go ahead; Grok won’t stop you.

The implications for misinformation, reputation damage, and societal unrest are off the charts. We’re at a point where the technology is so advanced that it can make almost anything look real, and when that kind of power is available to anyone, the potential for misuse is frightening.

ChatGPT and the Iranian Disinformation Campaign

In another twist, OpenAI recently discovered that several ChatGPT accounts were being used as part of a covert Iranian campaign to create propaganda. It’s a textbook case of dual-use technology—something designed for good being turned into a weapon. These accounts were cranking out text and images designed to sway public opinion and spread disinformation across social media.

What’s really unsettling here is how easy it is to weaponize these tools. A few clever tweaks, and you’re no longer writing harmless essays or crafting witty tweets—you’re producing content that could potentially destabilize a region or undermine an election. The fact that generative AI can be used in these covert operations should be a wake-up call for all of us. We’re dealing with technology that doesn’t just amplify voices; it can fabricate entire narratives out of thin air.

Grand Theft AI: NVIDIA, Runway, and the Battle Over Training Data

The AI gold rush has another casualty: the creators who fuel it. NVIDIA, RunwayML and several others are now facing lawsuits for allegedly scraping YouTube content without permission to train their AI models. Imagine spending years building a following on YouTube, only to find out that your content has been used to train an AI model without your consent—or compensation.

This isn’t just a legal issue; it’s an ethical one. These companies are essentially saying that because data is publicly accessible, it’s fair game to use, even if that data belongs to someone else. But at what point does innovation cross the line into exploitation? The lawsuits argue that these companies are trampling over the rights of creators in their rush to build ever-more-powerful AI models.

It’s the same story in the music industry, where companies like Suno and Udio are under fire for using copyrighted tracks to train their models without paying the artists. On the open web, Perplexity is also being accused of ignoring robots.txt no-crawl directives when scraping websites. If this trend continues unchecked, we could see a significant backlash from creators across all types of media, potentially stifling the very innovation that generative AI promises.
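For context, robots.txt is an honor system rather than an enforcement mechanism: a site publishes crawling rules, and well-behaved crawlers volunteer to respect them. Here’s a minimal sketch in Python, using the standard library’s urllib.robotparser; the rules, URLs, and bot names below are illustrative examples, not a claim about any company’s actual configuration:

from urllib.robotparser import RobotFileParser

# An illustrative robots.txt: this site asks one AI crawler to stay out
# while allowing everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks before fetching; nothing technically forces it to.
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True

The point is that nothing in the protocol blocks a crawler that simply ignores the file, which is exactly what the accusations against Perplexity boil down to.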

Deepfakes, Misinformation, and the Uncanny Valley

Let’s not forget about the elephant in the room: deepfakes. We’ve all seen them, and as generative AI gets better at creating hyper-realistic video, audio, and images, distinguishing real from fake will become almost impossible. We’re already seeing this with deepfake videos of celebrities, politicians, and even everyday people being used for everything from fraud to revenge porn.

Test yourself: one of these images is fake. Can you tell which one?

The answer is that the lady on the right is AI-generated. The problem isn’t just that these deepfakes exist; it’s that they’re becoming indistinguishable from reality. We’re heading into the ‘uncanny valley’ of AI-generated content, where the line between what’s real and what’s fake is so blurred that even experts can’t tell the difference. This opens up a Pandora’s box of issues, from misinformation campaigns to identity theft and beyond.

It’s worth mentioning that there are also genuinely good use cases for deepfakes, or ‘virtual twin’ technology. For example, Reid Hoffman cloned himself using Hour One (disclosure: I’m an investor and board member) to create his virtual twin character, and Eleven Labs to clone his voice. He then trained an LLM on everything he’s written (books, blog posts, interviews) to create Reid AI, his AI clone.
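For a rough sense of what ‘training an LLM on everything he’s written’ involves in practice, here’s a minimal, hypothetical Python sketch that turns a folder of writings into the JSONL chat format commonly used for fine-tuning. The folder layout, prompts, and persona framing are my own illustrative assumptions, not details of how Reid AI was actually built:

import json
from pathlib import Path

# Illustrative only: convert a folder of someone's writings (essays,
# posts, transcripts saved as .txt files) into fine-tuning examples.
CORPUS_DIR = Path("writings")             # e.g. writings/essay-01.txt, ...
OUTPUT_FILE = Path("persona_train.jsonl")

SYSTEM_PROMPT = "You answer in the voice and style of the author."

with OUTPUT_FILE.open("w", encoding="utf-8") as out:
    for doc in sorted(CORPUS_DIR.glob("*.txt")):
        text = doc.read_text(encoding="utf-8").strip()
        # Split each document into passages so the model sees many short
        # style samples rather than one huge blob.
        for passage in text.split("\n\n"):
            if len(passage) < 200:        # skip headings and fragments
                continue
            example = {
                "messages": [
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": f"Write a passage in your own voice about {doc.stem}."},
                    {"role": "assistant", "content": passage},
                ]
            }
            out.write(json.dumps(example) + "\n")

The voice clone and on-screen avatar are layered on top of a model trained on data like this; the text side is, at its core, a data-preparation exercise.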

This is especially sensitive around election times: once a lie is out there, the damage has been done. Equally, the bombardment of fake content makes it possible to cast doubt on real events, like the recent false accusation that a rally in Michigan had an ‘AI-generated’ audience.

All the tests run on that rally photo showed the image was real.

The Road Ahead: Regulation and Responsibility

The bottom line is that we’re not ready for what’s coming. Regulation is lagging behind the technology, and while some companies are adopting stricter guidelines on their own, it’s not enough. We need a framework that balances innovation with responsibility, one that ensures AI is used to benefit society rather than harm it.

It’s clear that generative AI is here to stay, and its potential is enormous. But we can’t afford to ignore the risks. The dark side of generative AI isn’t just a theoretical concern—it’s happening now, and if we don’t take action, the consequences could be devastating.

So, where do we go from here? It’s going to take a concerted effort from regulators, companies, and the public to navigate these challenges. The technology isn’t going to slow down, and neither should our efforts to control it. We have to ask ourselves: are we prepared to deal with a world where what we see, hear, and read can be manipulated at the click of a button? The future of AI depends on the choices we make today.


As we continue to push the boundaries of what’s possible with AI, let’s not lose sight of the ethical and legal frameworks that need to evolve alongside it. As Ethan Mollick put it in a recent post, it’s hard to believe how far the technology has come in such a short time. Countries also face a dilemma: AI is a race, and strict regulation could mean falling behind the competition. The future of generative AI is uncertain, but it’s guaranteed that the world will look very different two years from now, and we must proceed with care.
