Dear Colleagues,
As always, we hope that these weekly links focused on what is currently happening in the information environment prove to be helpful to you and the good work you do.
Dark Reading reports on a phishing/disinformation campaign - Operation Texonto - which spread during the final months of 2023. Russia-linked actors were found to be the source of these campaigns. Operation Texonto worked in two ways. First, a phishing campaign sent employees of a Ukrainian defense company and an EU agency emails claiming their Microsoft account had a problem and that they needed to log on via the web to fix it. Of course, the link to the web "fix" led right back to the phishing operation. In addition to phishing, the same actors sent disinformation emails to Ukrainian targets. These messages carried traditional Russian propaganda themes, such as claims that food and drug shortages abound and that the war cannot be won. The interesting thing about this operation is that one actor perpetrated both phishing and propaganda campaigns, whereas a threat actor normally sticks to one or the other. The author ends the article by advising readers "not to trust any information on the internet." That may make most of our jobs a little harder.
Staying with disinformation campaigns - the Eurasian Times describes the investment China is making in its campaigns toward Taiwan. China purportedly funds its information operations' components with $10 billion annually, showing the priority it gives to information operations. In a 2015 paper, China stated the need to fight "informationized warfare," and it has used the term "information confrontation system" to discuss information warfare. The author points out that naming it a 'system' gives us insight into how these campaigns are deployed: in layers. At this point, some of the messaging has moved from attacking morale to direct warnings of dire consequences should there be a war. An important point that the author and the people of Taiwan make is that in a democracy it is difficult to stop disinformation without infringing on others' free speech rights at the same time. In addition to acting quickly to disseminate true information, Taiwan has been strengthening its civil society to withstand and respond to disinformation attacks.
Finally, an article offers insight into how private industry is thinking about countering deepfake disinformation narratives. The advice is that an agency should be prepared ahead of time for such a disinformation attack. Looking at the possibilities, evaluate:
- What’s the worst that could happen?
- What disinformation could someone proliferate that would damage the reputation of your company or client?
After looking at the possibilities, the next action would be to create a plan for when such an event happens:
- Identify key stakeholders
- Establish communication channels
- Outline steps to swiftly counteract false narratives
- Employ AI experts on your team
This article is a good reminder that public and private entities don't completely work in silos; we work together and often face the same challenges.
Stay safe, stay healthy, and stay diligent,
Dr. Ryan Maness
Rebecca Lorentz
DoD ISRC at The Naval Postgraduate School
Compiled and summarized by Rebecca Lorentz. Please email tips or contributions to DoDISRC@nps.edu.
Photo: Peter Treanor via Alamy Stock Photo
Russia-linked threat actors employed both psyops and spear-phishing to target users over several months at the end of 2023 in a multiwave campaign aimed at spreading misinformation in Ukraine and stealing Microsoft 365 credentials across Europe.
Photo: Encyclopedia Britannica
Taiwan has become prey to such disinformation campaigns, mainly from China and sometimes Chinese-backed global sources.
In a highly tech-driven society such as Taiwan, it becomes a cumbersome task to check the facts of the information.
 Photo: PR News
Public relations is a messy business. Even when you don’t screw up, somehow you do. That is even more true now in the emerging AI era of deepfakes, when an image of your likeness can be used against you, even if it wasn’t really you.