Partial Patching Still Provides Strong Protection Against APTs

Analysis has surfaced what many would consider a surprising insight: Organizations that always update to the newest versions of all of their software have roughly the same risk of being compromised in cyber-espionage campaigns as those that apply only specific updates after a vulnerability is disclosed.

A quantitative look at data from 350 advanced persistent threat (APT) campaigns between 2008 and 2020, conducted by researchers from the University of Trento, Italy, shows that organizations with a purely reactive software update strategy had roughly the same risk exposure to advanced cyberattacks as those that kept everything up to date. This is despite the fact that the reactive organizations deployed only 12% of the updates that the always-update-immediately organizations did.

The data shows that the same holds true for organizations that might apply updates to patch vulnerabilities based on information they have received in advance — for example, by paying for information about zero-days. Even these entities do not have a significant advantage over those that patch only on a reactive basis when it comes to breach risk, the study shows.

Why Reactive Patching Might Be OK for APTs
Though this flies in the face of conventional wisdom, the study results reflect two realities: 1) APTs tend to be reactionary themselves, and 2) time-to-patch metrics matter.

In analyzing some 350 campaigns dating back to 2008 (including information on vulnerabilities exploited, attack vectors, and affected software products), researchers found that, overall, APTs targeted publicly disclosed vulnerabilities more often than they did zero-days. The groups also frequently shared or targeted the same known vulnerabilities across their campaigns.

In all, the researchers identified 86 different APT groups exploiting a total of 118 unique vulnerabilities in their campaigns between 2008 and 2020. Just eight of these threat groups used exclusive vulnerabilities in their campaigns: Stealth Falcon, APT17, Equation, Dragonfly, Elderwood, FIN8, DarkHydrus, and Rancor.

That means there’s an opportunity for IT teams to prioritize those bugs that are known to be APT favorites, in order to eliminate most of the risk of compromise.

Risk Remains Roughly the Same
Organizations that can apply software updates as soon as they’re released naturally still face the lowest odds of being compromised, the study showed. However, the need to do regression testing before applying an update means that entities often take far longer to update their software. It’s here that the researchers found little difference in risk exposure between those that apply all software updates, those that apply on a reactive basis, and those that update based on information they might have received in advance of others.

After all, the advantage of receiving vulnerability information in advance goes away completely the longer an organization takes to act upon the information.

For example, organizations that applied all software updates within one month of release faced roughly five to six times the risk of compromise of organizations that updated immediately. That figure was slightly lower than, but not significantly different from, the risk for those that patched on a reactive basis (roughly five-and-a-half to seven times higher) and those acting on advance information (approximately five to seven times higher).

The researchers found that organizations which acted on a reactive basis deployed far fewer updates than those that applied all updates. “Waiting to update when a CVE is published presents eight times fewer updates,” the researchers said. “Thus, if an enterprise cannot keep up with the updates and needs to wait before deploying them, it can consider being simply reactive [as an alternative].”

A Critical Issue
The issue of patch prioritization has become increasingly critical for resource- and time-strapped IT departments and security organizations. The growing use of open source components — many with vulnerabilities in them — has only exacerbated the problem. A study that Skybox Research Lab conducted last year showed a total of 20,175 vulnerabilities were disclosed in 2021. Another study by Kenna Security showed that nearly 95% of all enterprise assets contain at least one exploitable vulnerability. The trend has heightened interest in risk-based patch prioritization and pushed the US Cybersecurity and Infrastructure Security Agency (CISA) to publish a catalog of known exploited vulnerabilities so organizations know which ones to focus on first.

For its part, the University of Trento study specifically focused on the effectiveness and cost of different software update strategies for five widely used enterprise software products: Office, Acrobat Reader, Air, JRE, and Flash Player for the Windows OS environment.

“In summary, for the broadly used products we analyzed, if you cannot keep updating always and immediately (e.g., because you must do regression testing before deploying an update), then being purely reactive on the publicly known vulnerable releases has the same risk profile than updating with a delay, but costs significantly less,” the researchers said.

Chatbot Army Deployed in Latest DHL Shipping Phish

Phishing emails designed to look like DHL communications are now coming loaded with a new twist — a version of a chatbot that helps drive targets to malicious links, according to a new report.

That is to say, it behaves like a chatbot, but behind the scenes the scripts are pre-programmed to respond with stock phrases based on a victim’s answers, according to researchers at Trustwave, who reported the phishing campaign tactic. The effect is the same: targets think they’re talking to a live DHL representative.

After clicking, the victim’s browser opens a PDF file with another link asking the person to “Fix delivery,” the Trustwave team reported. The chatbot will ask the victim to confirm a delivery address and tracking number, and it will even present a fake CAPTCHA to make everything seem legitimate. Eventually, the target will be asked to enter login credentials and credit card information, which are promptly harvested.

Because chatbots are widely used by brands to interact with customers online, end users aren’t suspicious of interacting with them, the Trustwave team added — making this a perfect social-engineering ploy.

“This is what the perpetrators of this phishing campaign are trying to capitalize on,” the chatbot phishing report added. “Aside from spoofing the target brand on the phishing email and website, the chatbot-like component [is what] slowly lures the victim to the actual phishing pages.”


Quantum Key Distribution for a Post-Quantum World

The emergence of quantum computing and its ability to solve computations with incredible speed by harnessing the fundamental properties of quantum mechanics could revolutionize our world. But what does this quantum future mean for data security?

As quantum computing evolves from the test lab to the real world, this unprecedented new form of computing power has massive implications for current forms of encryption and public-key cryptography (PKC), such as Rivest–Shamir–Adleman (RSA) and elliptic curve cryptography (ECC). Against the processing capabilities of quantum computing, which can analyze vast sets of data orders of magnitude faster than current digital computers, these forms of encryption will essentially become vulnerable to bad actors.

In the coming post-quantum future, cryptography solutions built on the rules of quantum physics are essential to ensure that sensitive digital information is distributed safely and securely across the forthcoming quantum Internet. One of the pillars of this more secure quantum computing future is called quantum key distribution (QKD), which uses basic properties of physics to derive shared encryption keys at two locations simultaneously.

Tapping the Power of Photons

At the physical level, the data bits sent during key exchanges for today’s common encryption techniques, such as RSA and ECC, are encoded using large pulses of photons or changes in voltages. With QKD, everything is encoded on a single photon, relying on quantum mechanical properties that allow detection and prevent successful eavesdropping. Quantum objects exist in a state of superposition where the value for a property of the object can be described as a set of probabilities for different values.

The transmission of the encoded photons occurs over what’s known as the quantum channel. A separate channel, referred to as the classical channel, established between the two endpoints handles clock synchronization, key sifting, or other data exchange; this channel could be any conventional data communication channel.

Multiple Varieties of QKD

A number of implementations and protocols for QKD are emerging as the technology evolves. For example, discrete variable QKD (DV-QKD) is used in many commercial QKD systems today. A DV-QKD system consists of two endpoints: a sender and a receiver. The quantum connection between these endpoints could be free space or dark fiber. In this case, the sender encodes a bit value, 0 or 1, on a single photon by controlling the phase or polarization of the photon. A separate data connection between the two endpoints is used to communicate information about the quantum measurements and timing.
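The sender/receiver exchange described above follows the pattern of the BB84 protocol that underpins most DV-QKD systems. Below is a minimal sketch of the basis-sifting step, assuming an ideal lossless channel and no eavesdropper (both simplifications of a real deployment):

```python
import secrets

def bb84_sift(n_photons: int) -> list[int]:
    """Toy BB84-style exchange: the sender encodes random bits in randomly
    chosen bases; the receiver measures in its own random bases; during
    sifting, only positions where the bases matched are kept as key bits."""
    sender_bits  = [secrets.randbelow(2) for _ in range(n_photons)]
    sender_bases = [secrets.randbelow(2) for _ in range(n_photons)]  # 0 = rectilinear, 1 = diagonal
    recv_bases   = [secrets.randbelow(2) for _ in range(n_photons)]

    # When bases match, the receiver reads the bit correctly; when they
    # differ, the measurement outcome is random and the position is discarded.
    return [bit for bit, sb, rb in zip(sender_bits, sender_bases, recv_bases)
            if sb == rb]

sifted_key = bb84_sift(1024)
```

Because the receiver guesses the measurement basis at random, roughly half of the transmitted photons survive sifting; a real system would then run error estimation and privacy amplification over the classical channel before using the key.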

While initial QKD implementations consisted of separate dedicated fibers for the quantum and data channels, new versions can use separate wavelengths for each channel on the same fiber, leading to more cost-effective deployments and efficiencies.

Other implementations include continuous variable QKD (CV-QKD) and entanglement. With CV-QKD, the sender applies a random source of data to modulate the position and momentum quantum states of the transmission. Entanglement QKD, meanwhile, leverages quantum phenomena where two quantum particles are generated in a way in which they share quantum properties; no matter how far apart they may later separate, a measurement of a property on each will result in the same values.

Challenges Ahead for QKD

Distance remains a constraint on implementing QKD over fiber because the individual photons being transmitted will be absorbed over distance. The laser strength is attenuated to create the individual photons, and standard telecom equipment cannot be used to repeat or strengthen the signal. In general, between 60 miles and 90 miles is the practical limit.
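The distance limit follows directly from exponential loss in fiber. A quick back-of-the-envelope calculation, assuming a typical telecom-fiber attenuation of about 0.2 dB/km (a common figure, not one given in the article):

```python
def photon_survival(distance_km: float, loss_db_per_km: float = 0.2) -> float:
    """Probability that a single photon survives a fiber run,
    given attenuation in dB/km (decibel loss is exponential in distance)."""
    total_loss_db = loss_db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

# ~100 km (about 60 miles): roughly 1% of photons arrive
print(f"{photon_survival(100):.3f}")
# ~150 km (about 90 miles): roughly 0.1% of photons arrive
print(f"{photon_survival(150):.4f}")
```

Since single photons cannot be amplified the way classical signals can, the surviving fraction falls off exponentially, which is why key rates collapse beyond roughly the 60- to 90-mile range cited above.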

Methods to extend the distance include trusted exchange, twin field QKD, and quantum repeaters.

  • Trusted exchanges act as a repeater — receiving the optical signals, converting them to digital, and then converting them back to optical. Trusted exchanges must be secured to prevent an intruder from reading the transmission while it is in digital form.
  • Twin field QKD adds a midpoint node that receives signals from both endpoint nodes, increasing the distance between endpoints to potentially hundreds of miles.
  • Quantum repeaters could eventually break the distance barriers of QKD over fiber, providing a function similar to repeaters in telecommunications today: to amplify or regenerate data signals so they can be transferred from one terminal to another.

Advancements in single photon sources and low-noise detectors will further improve the viable distances for QKD.

What’s Next for QKD

QKD has significant value in a quantum world due to its ability to enable symmetric key sharing between endpoints and identify when eavesdropping on the quantum channel is occurring. Before it can be broadly implemented by carriers, however, QKD must be supportable in a carrier environment, providing the availability and reliability their customers expect.

For example, disruption of the quantum channel can result in the loss of real-time key material; however, pairing QKD with secure key storage allows key material to continue to be distributed while a quantum channel outage is investigated. This also means that approaches and capabilities to troubleshoot and manage QKD equipment and services must be developed.

Since QKD relies on quantum mechanics, the act of observing will itself affect the quantum system, and this poses challenges for troubleshooting and management. As the technology continues to evolve and improve, QKD implementations on smaller mobile devices such as drones may eventually become possible. No matter how QKD evolves, it looks to be a promising solution for securing communications on the quantum Internet.

Microsoft Rushes a Fix After May Patch Tuesday Breaks Authentication

If you updated servers running Active Directory Certificate Services and Windows domain controllers responsible for certificate-based authentication with Microsoft’s May 10 Patch Tuesday update, you may need a re-do.

The company said the original patch for CVE-2022-26931 and CVE-2022-26923 was intended to stop certificate spoofing via privilege escalation, but an unintended consequence of the fix was a rash of authentication errors. So, it rushed a new patch, available as of Thursday.

After installing the original Patch Tuesday updates, several Reddit users complained of certificate-authentication errors in the r/sysadmin subreddit’s Patch Tuesday Megathread for May 10.

“My [Network Policy Server] NPS policies (with certificate auth) have been failing to work since the update, stating ‘Authentication failed due to a user credentials mismatch,'” Reddit user RiceeeChrispies wrote. “Either the user name provided does not map to an existing account, or the password was incorrect.”

Microsoft added that once the update is installed, it won’t be necessary to renew client-authentication certificates. 

“Renewal is not required,” Microsoft said in its statement acknowledging the authentication errors. “The CA will ship in Compatibility Mode. If you want a strong mapping using the ObjectSID extension, you will need a new certificate.”


Authentication Is Static, Yet Attackers Are Dynamic: Filling the Critical Gap

Identity is the new currency, and digital adversaries are chasing wealth. According to Verizon’s “Data Breach Investigations Report,” 61% of data breaches can be traced back to compromised credentials. Why? Breaking into systems with legitimate user credentials often enables attackers to move undetected across a network for intelligence gathering, data theft, extortion, and more.

Access control is foundational to defending systems, but like any tool, it has its limits. Motivated attackers try to find ways around the edges of access control systems to gain access to accounts. Many companies have invested in anti-fraud technologies to detect and mitigate these types of attacks against high-value targets, such as login and payment flows.

However, fraudsters’ tactics can work equally well in areas beyond login and payment flows. As a result, persistent attackers now target “identity construction” systems such as provisioning, device enrollment, password reset, and other account management systems.

Because these identity provider systems establish the basis for all access control, they are now attracting dedicated attention from cybercriminals. For example, the LockBit, Avaddon, DarkSide, Conti, and BlackByte ransomware groups are all utilizing initial access brokers (IABs) to purchase access to vulnerable organizations on Dark Web forums. IABs have grown in popularity within the last couple of years and are significantly lowering the barriers to entering the world of cybercrime.

An Uptick in Identity-Related Attacks
Recent attacks and extortion attempts involving major third-party software providers like Okta and Microsoft are clear examples of the damage that can be done when compromised credentials are used to carry out account takeover (ATO) attacks. The Lapsus$ extortion group conducted all of its ATO activity using stolen credentials obtained through unconventional and sophisticated means. Recent news suggests that the group continues buying compromised account credentials until it finds one with source code access.

While all online accounts are vulnerable to ATO fraud, bad actors tend to target accounts they consider highly valuable, like bank accounts and retail accounts with stored payment information. Bad actors typically will use automated tools such as botnets and machine learning (ML) to engage in massive and ongoing attacks against consumer-facing websites. With automated tools, they commit ATO fraud using techniques such as credential stuffing and brute-force attacks, as shown by Lapsus$.

However, fraudsters don’t always use automated tools for ATO fraud. They can gain access through phishing, call-center scams, man-in-the-middle (MITM) attacks, and Dark Web marketplaces. Some have even been known to employ human labor (“click farms”) to manually enter login credentials so that the attacks go undetected by tools that look for automated login attempts. Nevertheless, ATO is now the weapon of choice for many fraudsters, perhaps accelerated by the pandemic, with attempted ATO fraud rising 282% between 2019 and 2020.

Identity-based fraud can be extremely difficult to detect given the advanced tactics and unpredictability of different crime groups. Most of the breaches we hear about in the news result from businesses relying solely on automated access control tools rather than monitoring user accounts to detect unusual behavior quickly.
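One narrow slice of the behavior monitoring described above can be illustrated with a toy sliding-window velocity check on failed logins. This is a hedged sketch, not any vendor’s actual detection logic; the thresholds and the choice of key (account name or source IP) are invented for illustration:

```python
from collections import defaultdict, deque
from typing import Optional
import time

class LoginAnomalyDetector:
    """Toy velocity check: flag a key (account or source IP) whose
    failed-login rate exceeds a threshold inside a sliding time window —
    one crude signal that credential stuffing or brute forcing is under way.
    Thresholds here are illustrative, not recommendations."""

    def __init__(self, max_failures: int = 5, window_seconds: float = 60.0):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # key -> timestamps of recent failures

    def record_failure(self, key: str, now: Optional[float] = None) -> bool:
        """Record one failed login; return True if the key should be flagged."""
        now = time.monotonic() if now is None else now
        q = self.failures[key]
        q.append(now)
        while q and now - q[0] > self.window:  # evict events outside the window
            q.popleft()
        return len(q) > self.max_failures

detector = LoginAnomalyDetector()
# Six rapid failures from one source IP: the sixth attempt trips the threshold.
flags = [detector.record_failure("203.0.113.7", now=float(i)) for i in range(6)]
```

A real system would combine many such signals (device fingerprint, geography, typing cadence), which is what defeats the manual "click farm" attacks that simple automation detectors miss.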

Access Control Layers Are Not Enough
Historically, access control implements authentication and authorization services to verify identity. Authentication focuses on who a user is. Authorization focuses on what they should be allowed to do.
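That split can be made concrete with a toy sketch; the user, credential, and permission data below are invented purely for illustration (a real system would verify salted password hashes, not compare strings):

```python
# Authentication data: who the user is (a stand-in for a credential store).
USERS = {"alice": "s3cret-hash"}
# Authorization data: what the user may do (a stand-in for an entitlement store).
PERMISSIONS = {"alice": {"read:reports"}}

def authenticate(username: str, credential: str) -> bool:
    """Authentication: is this user who they claim to be?"""
    return USERS.get(username) == credential

def authorize(username: str, action: str) -> bool:
    """Authorization: may this already-authenticated user perform the action?"""
    return action in PERMISSIONS.get(username, set())
```

The point of the article’s argument is that both checks are static lookups: neither one notices anything about how the account is actually being used.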

These types of access control layers are a good first defense against identity-based fraud but, as made evident in recent attacks on Okta and Microsoft, fraudsters can bypass these tools fairly easily. There must be a second line of defense in the form of a detection system that learns and adapts. Therefore, companies should consider going beyond who a user is and what they are allowed to do, and ensure their identity systems monitor and learn from what users are actually doing.

The Need for a More Dynamic System
Many of the techniques that cybercriminals use lie at the intersection of security and usability, and looking at either in isolation misses the point. If we focus only on how the security protocol should work, we miss how users will realistically use it. If we focus only on ease of use, we miss how to keep bad actors out. The protection layer from access control establishes the “allowed/not allowed” decision, but it should be backstopped by another layer of detection that observes and learns from how the system is used and from attempts at misuse. This second layer’s job includes identifying the tactics used to take over accounts through brute force, redirection, tampering, and other means.

As mentioned above, authentication is a static set of something you know, something you are, and something you have. But in a war against attackers that are dynamic, a static “shield” doesn’t do much for the sake of defense. To address this gap, a robust learning system is required to identify and block dynamically changing attacker tactics.

Companies are investing in identity graph technologies for many authentication and high-value flows. Identity graphs are a real-time prevention technique that collects data on more than a billion identities, including personas and behavior patterns, so that security teams can quickly identify unusual behavior from user accounts. [Note: The author’s company is one of a number using identity graph technology.] With this type of real-time, data-driven approach, teams can identify behavior and activities generated from automated tools like bots and ML algorithms and can detect unusual behavior before it causes any damage, such as theft or fraudulent purchases.

To succeed against dynamic cybercriminals, organizations must go multiple steps further and build a learning system that evolves over time to keep up with attacker tactics. Identity graph technologies can help organizations recognize attacker tactics across the whole identity life cycle, including provisioning and account maintenance. These techniques can ebb and flow with the sophisticated threat landscape we’re witnessing today.

New Open Source Project Brings Consistent Identity Access to Multicloud

Multicloud is a reality for many organizations – whether by design or accident. And when applications and data are deployed across multiple cloud environments, creating and managing consistent identity access policies becomes a challenge.

Hexa is a new open source project from identity orchestration company Strata Identity that unifies disparate cloud identity systems and allows consistent policies. Since each cloud provider has its own tools and policy formats, Hexa relies on IDQL, a common policy format for defining identity access policies, Strata says.

Each cloud provider relies on proprietary identity systems and its own policy languages to create and manage identity and access on its platform. Most security engineers tend to be well-versed in one, maybe two, of the public clouds, but rarely more than that. In the era of multicloud, however, security engineers need to be able to create, read, and manage policies across multiple environments and keep up with changing tools and new capabilities. IDQL is the universal declarative policy language that can translate policies into each provider’s proprietary format, says Gerry Gebel, Strata Identity’s head of standards. Hexa is the reference software built on top of the IDQL policy language and handles the tasks of discovering, translating, and orchestrating policies across cloud environments, he says.

“Hexa is the open source reference software that brings IDQL to life and makes it operational in the real world,” Gebel says.

Case for Managing Cloud Identities

In a recent Dark Reading Report on the state of cloud computing, just 19% of respondents say their organization works with only one cloud provider, while 43% say they work with two to three providers. There are many reasons why organizations may be juggling multiple cloud providers. Organizations may require multicloud for redundancy and resiliency – to weather an outage at one provider, for example – or to meet regulatory requirements about where data can be stored. In some organizations, cloud infrastructure may have been originally set up without IT’s awareness, which is why that provider and its policies may be out of sync with the others.

Regardless of the reasons that led to multicloud, identity and access need to be consistent and managed. In a report from Palo Alto Networks, Unit 42 researchers analyzed more than 680,000 identities across 18,000 cloud accounts at over 200 different organizations and found that 99% of cloud users, roles, services, and resources were granted excessive permissions. Not only were the permissions excessive, but they had also been left unused for 60 days, the report found.

Misconfigured identities are behind 65% of detected cloud security incidents, Unit 42 said. Threat actors can abuse these identities to move laterally through the cloud environment or expand the pool of systems they can target.

A Universal Policy Language

Each cloud provider has its own identity system, and each application has to be hard-coded to work with that identity system. If the application is to work on multiple cloud platforms, traditionally the application would have to be modified for each one. Hexa, however, has been designed to use IDQL to bring multiple identity systems to work together as a unified whole and not have to make changes to the applications, according to Strata Identity. For policy discovery, Hexa abstracts identity and access policies from cloud platforms, authorization systems, data resources, and zero trust networks.
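The translate-and-push idea can be sketched in miniature. Note that the policy schema and provider formats below are invented for illustration only; they are not the real IDQL specification or any cloud provider’s actual policy API:

```python
# Hypothetical sketch of the one-policy-many-renderings idea behind Hexa/IDQL.
# Every field name and format below is made up for illustration.

generic_policy = {
    "subject": "group:payments-analysts",
    "actions": ["read"],
    "resource": "banking-app/accounts",
}

def to_provider_a(policy: dict) -> dict:
    """Render the generic policy in a made-up 'provider A' binding format."""
    return {
        "member": policy["subject"],
        "role": f"roles/{policy['actions'][0]}er",  # e.g. "read" -> "roles/reader"
        "target": policy["resource"],
    }

def to_provider_b(policy: dict) -> dict:
    """Render the same policy in a made-up 'provider B' rule format."""
    return {
        "principal": policy["subject"],
        "permissions": policy["actions"],
        "scope": policy["resource"],
    }

# One source of truth, two provider-specific renderings — the application
# itself never has to change to follow the policy across clouds.
policy_a = to_provider_a(generic_policy)
policy_b = to_provider_b(generic_policy)
```

The actual orchestration layer also works in the other direction, discovering existing provider-native policies and lifting them into the common format, which is the harder half of the problem.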

Strata Identity set up an example multi-regional banking application to demonstrate Hexa and its policy discovery and management capabilities, Gebel says. The US region in this scenario deploys the application on Google Cloud Platform using App Engine, while the other two regions rely on Kubernetes. Hexa connects to the Google Cloud instance to discover the resources and associated policies, and then converts the policies into IDQL. An analyst can make changes to the policies, then use Hexa to translate the new policies back into GCP format and push the changes to the platform, he says.

IDQL and Hexa were created by some of the co-authors of the Security Assertion Markup Language (SAML), the cross-platform standard for single sign-on that lets users move across cloud platforms and web applications without re-entering their credentials. However, Gebel notes that IDQL should not be viewed as a replacement for modern standards such as Open Policy Agent (OPA); rather, IDQL and Hexa “are complementary to them.”

“Just as Kubernetes transformed computing by allowing applications to transparently move from one machine to another, IDQL enables access policies to move freely between proprietary identity systems,” Eric Olden, CEO of Strata Identity and one of the co-authors of the SAML standard, said in a release. “IDQL and Hexa eliminate identity silos in the cloud and on-premises, by creating an intelligent, distributed identity system with one brain.”

More Than 1,000 Cybersecurity Career Pursuers Complete the (ISC)² Entry-Level Cybersecurity Certification Pilot Exam

ALEXANDRIA, Va., May 19, 2022 /PRNewswire/ — (ISC)² – the world’s largest nonprofit association of certified cybersecurity professionals – today announced that more than 1,000 cybersecurity career hopefuls have taken their first step toward a professional certification by completing the (ISC)2 entry-level cybersecurity certification pilot exam since the program launched on Jan. 31.

(ISC)2 created the certification to support and nurture a new generation of cybersecurity professionals entering the field – from recent university graduates to career changers to IT professionals – seeking to validate their security skills. The certification will provide employers with assurance that newcomers have the foundational knowledge, skills and abilities to succeed in entry- and junior-level roles.

“The outstanding response to our pilot program shows the pent-up need for this certification. The (ISC)2 entry-level cybersecurity certification satisfies a void the industry has been struggling to fill for years, and promises to be among the fastest-growing, in-demand cybersecurity certifications,” said Clar Rosso, CEO, (ISC)2. “We are facing a global cybersecurity workforce gap of more than 2.7 million people. We can only close that gap if we increase pathways into the field and make a cybersecurity career more accessible to more people. Candidates who pass our exam will show employers that they can contribute to their organizations’ missions and have the aptitude to learn and grow on the job.”

How the Exam Works
The (ISC)2 entry-level cybersecurity certification pilot exam evaluates candidates across five domains: security principles; business continuity (BC), disaster recovery (DR), and incident response concepts; access controls concepts; network security; and security operations. A pilot exam outline is available that contains more details on the content within each domain.

Candidates who pass the (ISC)2 entry-level cybersecurity certification pilot exam become full members of (ISC)2, with access to continuing education, thought leadership, peer support, industry events and other professional development opportunities. Membership supports cybersecurity practitioners in their immediate and future careers as they gain experience and work towards more advanced and specialized certifications, such as the globally renowned (ISC)2 CISSP.

Learn more about the (ISC)2 entry-level cybersecurity certification pilot program, here. There will be no disruption to exam registration and administration throughout the pilot program and any time prior to the launch of the official certification program.

Exam-Prep Resources Available
To prepare for the (ISC)2 entry-level cybersecurity certification pilot exam, candidates can access a series of domain review sessions delivered in online instructor-led or online self-paced formats. All courses review the main themes and areas of expertise covered by the pilot certification exam outline. The courses also include sample questions, helping candidates focus on what to expect during the pilot exam. Candidates will receive a certificate of completion for the review session. Learn more here.

About (ISC)²
(ISC)² is an international nonprofit membership association focused on inspiring a safe and secure cyber world. Best known for the acclaimed Certified Information Systems Security Professional (CISSP®) certification, (ISC)² offers a portfolio of credentials that are part of a holistic, pragmatic approach to security. Our membership, more than 168,000 strong, is made up of certified cyber, information, software and infrastructure security professionals who are making a difference and helping to advance the industry. Our vision is supported by our commitment to educate and reach the general public through our charitable foundation – The Center for Cyber Safety and Education™. For more information on (ISC)², visit our website, follow us on Twitter or connect with us on Facebook and LinkedIn.

© 2022 (ISC)² Inc., (ISC)², CISSP, SSCP, CCSP, CAP, CSSLP, HCISPP, CISSP-ISSAP, CISSP-ISSEP, CISSP-ISSMP and CBK are registered marks of (ISC)², Inc.

Deadbolt Ransomware Targeting QNAP NAS Devices

Network-attached storage provider QNAP is warning that its NAS devices are under active attack by the so-called Deadbolt ransomware. 

QNAP NAS device models affected are primarily TS-x51 series and TS-x53 series using QTS 4.3.6 and QTS 4.4.1, according to the company.  

“QNAP urges all NAS users to check and update QTS to the latest version as soon as possible, and avoid exposing their NAS to the Internet,” the company advised.

Pro-Russian Information Operations Escalate in Ukraine War

In March, in the middle of Russia’s invasion of Ukraine, a video surfaced that showed Ukraine’s President Volodymyr Zelensky announcing his country’s surrender to the Russian forces. Another story the same month said he had committed suicide in the military bunker in Kyiv where he had been directing his country’s fight against Russia, apparently because of Ukrainian military failures.

The video was a sophisticated deepfake of Zelensky generated with artificial intelligence. The story of his suicide was a completely concocted report from a group set up to spread fabricated narratives aligned with Russian interests. Both are examples of what Mandiant on Thursday described as the systematic, targeted, and organized cyber-enabled information operations (IO) that have targeted Ukraine’s population and audiences in other regions of the world since the war began in February.

Many of the actors behind these campaigns are previously known Russian, Belarusian, and other pro-Russian groups. Their goal is threefold, according to Mandiant: to demoralize Ukrainians; to cause division between the beleaguered nation and its allies; and to foster a positive perception of Russia internationally. Also in the fray are actors from Iran and China that are opportunistically using the war to advance their own anti-US and anti-West narratives.

Success Hard to Gauge
The success of these information operations is hard to gauge given their scope, says Alden Wahlstrom, a senior analyst at Mandiant. “With the Russia-aligned activity, we’ve observed multiple instances in which the Ukrainian government has appeared to rapidly engage with and issue counter-messaging to disinformation narratives promoted by [information] operations,” he says. But the sheer scale and tempo of operations have made the task challenging, Wahlstrom says. “One concern when looking at this activity in aggregate is that it helps to build an atmosphere of fear and uncertainty among the population in which individuals potentially question the validity of legitimate sources of information.”

Mandiant’s analysis shows several known groups are behind the information operations activity in Ukraine. Among them is APT28, a threat group that the US government and others have attributed to a unit of the Russian General Staff’s Main Intelligence Directorate (GRU). Mandiant observed members of APT28 using Telegram channels previously associated with the GRU to promote content designed to demoralize Ukrainians and weaken support from allies.

The Belarus-based operator of Ghostwriter, a long-running disinformation campaign in Europe, is another actor active in Ukraine. In April, Mandiant observed the threat actor using what appeared to be a previously compromised website and likely compromised or threat actor-controlled social media accounts to publish and promote fake content aimed at fomenting distrust between Ukraine and Poland, its ally.

In the weeks leading up to Russia’s invasion of Ukraine and in the months since then, Mandiant also observed an information campaign tracked as “Secondary Infektion” targeting audiences in Ukraine with fake narratives about the war. It was Secondary Infektion, for instance, that was responsible for the fake report about Zelensky’s suicide. The same group also promoted stories about operatives from Ukraine’s Azov Regiment — a unit that Russia has labeled as being composed of Nazis — apparently seeking vengeance on Zelensky for allegedly letting Ukrainian soldiers die in Mariupol.

The group was often observed using forged documents, pamphlets, screenshots, and other fake source materials to lend credibility to its fabricated content.

False Narratives to Sow Fear and Confusion
Mandiant said it observed several other operators engaged in a wide range of similar information operations in Ukraine, often using bot-generated social media accounts and fake personas to promote a variety of Russia-aligned narratives. This has included fake content about growing resentment in Poland over refugees from Ukraine and about Polish criminal gangs harvesting organs from Ukrainians fleeing into their country.

Often the information operations have coincided with other disruptive and destructive cyber activity, according to Mandiant. For example, the content about Zelensky’s alleged surrender to Russia broke at the same time that threat actors hit a Ukrainian organization with a disk-wiping malware tool that was scheduled to execute three hours before a Zelensky speech to the UN.

Wahlstrom says Mandiant has not been able to definitively link the information operations to the concurrent destructive attacks. 

“However, this limited pattern of overlap is worth paying attention to and may suggest that the actors behind the information operations are at least linked to groups with more extensive capabilities,” he says. The coordinated attacks also suggest a full spectrum of actors and tactics are being employed in operations targeting Ukraine, Wahlstrom says.

For the most part, the information operations activity that the various groups are engaged in within Ukraine appears consistent with what they have engaged in previously. But one notable evolution is the prominence of dual-purpose information ops, says Sam Riddell, an analyst at Mandiant. “Popular pro-Russian ‘hacktivist’ activity and coordinated ‘grassroots’ campaigns have pursued specific influence objectives while simultaneously attempting to create the impression of broad popular support for the Kremlin,” he says.

The conflict in Ukraine has also shown how rapidly information operation assets and infrastructure can be repurposed for the theme of the day, he says. “At the onset of the war, a whole ecosystem of pro-Russian IO assets was able to quickly flip a switch and engage in wartime IO at high volumes,” he says. “For defenders, this means that disrupting assets before significant global events break out is paramount.”

Mandiant’s report coincided with another one from Nisos this week that shed light on an Internet of Things botnet, tracked as “Fronton,” that apparently was developed a few years ago at the direction of the Federal Security Service of the Russian Federation (FSB). The botnet’s primary purpose, according to Nisos, is to serve as a platform for creating and distributing fake content and disinformation on a global scale. It includes what Nisos described as a Web-based dashboard called SANA for formulating and deploying trending social media events on a mass scale.

Nisos’ report on Fronton is based on a review of documents that were publicly leaked after a hacktivist group called Digital Revolution broke into systems belonging to a subcontractor who developed the botnet for FSB.

Vincas Ciziunas, research principal at Nisos, says there is no evidence of Fronton or SANA being used in the current conflict between Russia and Ukraine. But presumably the FSB has some use for the technology, Ciziunas adds. “We only have demo footage and documentation,” he says. But the FSB did appear to create a fake network of Kazakh users on the Russian social media platform V Kontakte, complete with fake content related to a squirrel statue in a Kazakh city, which later appears to have become the basis for a BBC report.

“The conversation related to the statue led to a BBC report,” Ciziunas says. “We did not directly identify any of the social media postings mentioned in the BBC article as having been made by the platform.”

DoJ Won't Charge 'Good Faith' Security Researchers

The US Department of Justice announced this week that it has revised its charging policy to explicitly state that it will not charge good-faith security researchers with violations of the Computer Fraud and Abuse Act (CFAA).

The new guidance recognizes “good faith” security research: work done to promote safety and security and not carried out in a way that causes harm. The new policy, effective immediately, replaces the previous CFAA charging policy from 2014, the DOJ said.

“Computer security research is a key driver of improved cybersecurity,” said Deputy Attorney General Lisa O. Monaco in a statement about the new DOJ policy. “The department has never been interested in prosecuting good-faith computer security research as a crime, and today’s announcement promotes cybersecurity by providing clarity for good-faith security researchers who root out vulnerabilities for the common good.”
