About

Press

TheDeBrief – INSIDE COUNTERCLOUD: A FULLY AUTONOMOUS AI DISINFORMATION SYSTEM

Wired – It Costs Just $400 to Build an AI Disinformation Machine

Business Insider – A developer built a ‘propaganda machine’ using OpenAI tech to highlight the dangers of mass-produced AI disinformation

Marcus on AI – Automatic Disinformation Threatens Democracy— and It’s Here

Russian_OSINT – Interview by @Russian_OSINT with the developer of the «CounterCloud» project: a combat machine for disinforming the adversary using AI

Video from Spline LND – see below:

FAQ

Before you email me, please WATCH THE ENTIRE VIDEO (at the very top) WITH CARE, and if you still have questions, read this FAQ. If, after all of the watching and all of the reading, you still have questions, then I am sure they are interesting, and you can send them to: neapaw713@mozmail.com


How many visits did you get on the website? / What was the engagement like? / I can’t find the Tweets you show / Questions about interaction with the public

It is clear in the video that 1) the Tweets did not go out live and 2) the site was password protected. The site is STILL password protected. If you ask this question it means you did not watch the video. Watch the video.

How can we defend against this?

It’s really well documented in the video, but here it is again:

  • AI content detection in browsers -> where information is displayed (see the sketch at the end of this answer).
  • Platforms need to try to warn users of AI generated content -> where it is shared.
  • AI providers need to try to detect if the content they generate is harmful -> where it is conceived.
  • AI (use) regulation – how powerful can it become and who gets to use it? I don’t know – for me this is the weakest idea.
  • Educate the public on the use of AI generated media (and in turn disinformation) – how easily/well it works -> where it has effect and takes root.

I am a bit ambivalent about the proposed solutions. I don’t think there is a silver bullet for this, in much the same way there is no silver bullet for phishing attacks, spam or social engineering. I think none of these things are elegant or cheap or particularly effective. I can easily argue why each of these is a bad idea (which I do in the slides).

But, I think this is all we have right now.
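
To make the first bullet above a bit more concrete, here is a minimal sketch of one common detection heuristic: score the text with a language model and look at the perplexity, since machine-generated text tends to look unusually predictable. It’s a toy, not a production detector – the model choice (GPT-2 via Hugging Face transformers) and the reading of the score are assumptions for illustration, nothing from CounterCloud itself.

    # Toy perplexity check: text a language model finds very predictable
    # (low perplexity) is more likely to be machine generated. The model
    # and any threshold you would pick are assumptions for illustration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # The loss is the average negative log-likelihood per token,
        # so exp(loss) is the perplexity of the text under the model.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(**enc, labels=enc["input_ids"]).loss
        return torch.exp(loss).item()

    print(perplexity("The weather today is sunny with a light breeze."))
    # Lower scores read as 'more model-like' – weak evidence on its own,
    # which is part of why detection turns into an arms race.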

Are you worried about AI / what worries you most / etc.

In the short term I worry that AI generated content provides the spark to get real people motivated to rally behind false narratives.

In the longer term: with web 2.0 in the early 2000s, things moved toward user generated content, where public opinion drove involvement and ratings. Think of the student with a mobile phone taking a video of a riot and putting it on Twitter. Journalists thought it was the end of their world. “Now everyone is a reporter, a cameraman”. We got our news really quickly and mostly reliably. But AI kind of changes that. With AI, (soon) anyone can also create a video of a riot and say it happened. Public opinion is then determined by who runs the most (or most advanced) bots.

In a way I guess we are already there on social media – with friends, followers, likes; these are infested with bots. When AI starts generating content too (and not just binary like/follow/friend actions), we might find ourselves wanting to go back to trusted sources.  Who you consider to be your trusted source is of course up to you 🙂

For me, that’s sad. I kind of liked web2.0.

Are we all doomed? OMG this is the end! What do you think?

When dealing with AI that’s playing chess, playing go, making art, writing poetry – we always seem to go through these phases:

  1. wow, that’s pretty good
  2. but it won’t ever be AS good as humans
  3. WTF!? I just lost WTF????
  4. We’re all doomed
  5. Hang on, we’re not all doomed, maybe this is OK

I think this will be the same. Firstly, you need to understand that in the future, there will be REALLY good disinformation by AI. It will be BETTER than human-made disinformation and it will be able to weave its narrative using all the threads that the Internet (at the time) can provide. We’re talking about media, adverts, AI generated sound/photos/video, celebs, etc. It’s not just one thing, it will be an entire onslaught. And the content and messaging will be as good and convincing as the TikTok/Instagram algorithm is at keeping you watching their videos and reels. And since there’s feedback from metrics, it will be able to steer itself, knowing which ‘buttons to push’ next.

And – perhaps that’s OK. Because people will also become more resilient to this kind of messaging. Think about advertising. If you told someone in the 1940s that you’d have a device that shows ads 24/7 right in your face (e.g. your phone), they would have told you that it would be the end of human decision-making abilities (and I am sure they would say a lot more). But today we’re almost immune to it. We don’t really see ads. Also see the response to the previous question.

A bigger risk would be if one group of people has this tech and the rest of the world does not. Imagine 10 years from now we have a perfect disinformation machine but only (Italy, Germany, Russia, Korea, Japan… pick one) has AI tech at all. So, I think sharing is caring. And I tried to share with CounterCloud.

What are your thoughts on AI regulation?

I think it probably needs to be done, but of all the prongs of attack, it’s the one I like least. I don’t think regulation on AI will work. In fact, I think it will just push development underground. Like I said in the video – that’s dangerous. AI needs to be out in the light where we can see it, not in the dark where we don’t know what it’s doing.

In real life, it’s easy to regulate things you can see and feel. You can’t really hide chemical warfare, right? The issue with AI is that it’s trying to be human – the better it is at hiding and pretending to be human, the better it works. While you see and feel the effect of information warfare (to a much lesser extent than conventional warfare, of course), you can’t really determine whether it was human-built or AI-built. And if it’s *that* hard to determine – how would you ever enforce the regulation of it?

Can I please have the source code for CounterCloud?

You can probably convince me to make the source code open source. I don’t feel like working for free, but I am also not against giving this away (as a learning tool). Yet I don’t think only one person/org should have this.

I’d much rather get some juicy grant money to make CounterCloud v2.

Which open source models did you use for this? Which other models did you use for this?

We used Vicuna and Wizard. For the open-source models, we spent a lot of time changing the model parameters. This is difficult and frustrating, and we ended up using some of the presets. The preset we used was ‘NovelAI-Storywriter’. It is tricky to change the settings and determine what the influence of that change was. Good luck with that.
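
For a feel of what such a preset actually bundles, here is a minimal sketch of the sampling knobs involved, using Hugging Face transformers. The model name and all the values are illustrative assumptions – they are NOT the actual NovelAI-Storywriter numbers.

    # Sampling-parameter sketch: the knobs a preset fixes for you.
    # Model name and values are illustrative, not the real preset.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "lmsys/vicuna-7b-v1.5"  # assumption: any local causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Write a short news paragraph about a city council meeting."
    inputs = tokenizer(prompt, return_tensors="pt")

    # temperature, top_p, top_k and repetition_penalty all interact,
    # which is exactly why hand-tuning them is so frustrating – a preset
    # is just a combination someone found to work well.
    output = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        top_k=50,
        repetition_penalty=1.1,
        max_new_tokens=200,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))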

We also used GPT3.5 and 4 – but that’s another story…

What benefits are there to running your own LLM vs using one of the existing ones?

Companies like OpenAI try to “trust and safety” the shit out of the models – but it comes at a cost. You probably read many articles saying that GPT-4 is kind of bland (or less creative) now – that’s obviously because of the guardrails. With your own models you don’t have those guardrails. What’s more, you can tune your models to act in specific ways; you can train your models on text that’s mostly associated with your narratives. In the video you can see how easy it was to make it write convincing and well-argued hate speech (look at the Saudi piece and pause the video).

Tell me more about you? Why are you anonymous? Why did you do this project? Etc.

I am not hiding super hard, just enough to not have to deal with people that will eventually blame me for the next AI/media/election fuckup that could occur. My background is in IT security, software development and OSINT. I was working on a disinformation-as-service project where I was involved with attribution of bad actors. It occurred to me that I could make disinfo sites using AI. Also, AI was a shiny new thing and like all engineers I wanted to play with it. So, I set two months aside and did it.

Tell us about the hate speech thing (lots of questions here).

It felt kind of weird when we were playing with generating hate speech. When people are super angry and riled up, they’re also very emotional and they don’t usually make good arguments; they’re just shouting at you. But the AI makes good arguments, even when it’s filled with hate and rage. It’s a weird combination and I found it a bit unsettling.

Furthermore, when you consume information (text, images) and you realize it is a lie (e.g. disinformation), the effect of the information is muted and removed. If you consume hate speech – even when you know it was AI generated – it still has an effect on you. You are affected by it. Like watching people die gruesomely, even if you know it is a simulation.

Have you seen AI disinformation in the wild?

No. But we were not looking to find it. We’re told a good place to start looking is the NewsGuard list – [here]

Do you know about tool XXX? It determines if text was AI generated.

I didn’t spend a lot of time trying to hide the text from tools such as XXX. It will end up being an arms race.