Sports data for ransom – it’s not all just fun and games anymore

However, change lay just around the corner. With wireless communication standards beginning to proliferate in the early 2000s, the missing element was the transformation and integration of personal communications and computing. From there, data-driven sports tech could go fully commercial.

Integration – enter the era of smartphones

In the year 2000, mobile phones began to connect to the nascent 3G network. With the 1st generation iPhone released January 9th, 2007 – followed by the first Android device in September 2008 – data-driven sports technology and consumers’ appetite for social sharing were on a collision course.

The introduction of smartphones allowed user access to multiple service types as well as other devices. This included devices with other communications standards such as Bluetooth and ANT+, which are popularly used with heart rate monitors and speed sensors. With these protocols, small or clumsy dedicated devices could be paired to smartphones with substantially better user interfaces, more processing power and internet access – further connecting them to social media, emails and servers.

A boom of data

The age of Big Data was (also) upon us, and it seemed that sports data would remain a small component of the infinite data stream unleashed from a diversity of new forms of tracking and analysis. However, for millions, human curiosity latched on to sports data as interesting, motivational and social.

When devices that could couple heart rate, cadence (the rate at which bike pedals are turned, or steps taken per minute), speed, altitude and precise geolocation met social media, a new industry exploded. The sports data-verse opened by SRM led other device manufacturers – Garmin, followed by Fitbit, Apple, Samsung and Wahoo, to name a few – to provide the (data) fodder for users to engage with their data via sports apps like Strava, Zwift and other platforms, where they could record, analyze, share, congratulate, cajole and battle over who is fastest or fittest anywhere in the world. This combination proved addictive.

For context, Strava claimed 50 million members in February 2020, was adding a million more every month, and reported that members had uploaded “more than 1 billion activities in the last 13 months”. Essentially, athletes gather data on sports computers (plus sensors) or on watches from the likes of Apple or Samsung, then upload their results, along with location data, to platforms like Strava.

You know you are addicted when they take it away

As we can all attest, social media users can be obsessive, very possibly matched or outdone by athletes – whether amateur or professional. For sure, cyclists, triathletes and hikers using Strava and similar platforms, alongside hardware by Garmin, Apple, or devices like Wahoo’s ELEMNT bike computer, accumulate massive amounts of data that foster their own data addictions.

So, when the links between sensor technologies and social spaces built up around websites like Garmin Connect get broken, users get upset! Cyclists won’t have to imagine too hard how users of relative newcomer Zwift – a virtual cycling paradise – might feel if their access, in-app avatars or data got cryptolocked.

Figure 2. A Zwift user with “smart” trainer and TV
Source: https://news.zwift.com/en-WW/media_kits/

The runaway success of platforms like Zwift, a virtual turbo trainer game that enables riders to join other cyclists in a virtual environment by linking a bicycle turbo/resistance trainer to a computer, smartphone or smart TV, demonstrates the stakes. During the coronavirus lockdowns the stakes rose quickly, with Zwift’s user numbers massively boosted and even pro cyclists adopting the platform in the absence of outdoor racing. Looking at the number of concurrent users on a given day (“Peak Zwift”), Zwift recorded 16,512 on January 21st, 2020; by April 5th, this had grown to 34,940. Sports + Data is strutting its stuff!

Ransomware hurts in new ways

Recently, the wider sports data boom stumbled when it was reported that market leading GPS and fitness tracking vendor Garmin suffered a major security breach. “Garmin was the victim of a cyberattack that encrypted some of our systems on July 23rd, 2020. As a result, many of our online services were interrupted including website functions, customer support, customer facing applications, and company communications. We immediately began to assess the nature of the attack and started remediation,” reads the company’s announcement.

Subsequently, it was established that a ransomware attack had taken place, impacting the company’s systems. Forensic analysis indicates with high likelihood that the malware in question is WastedLocker, in this case wielded by the organized crime group known as Evil Corp.

For users, the multi-day outage prevented them from logging data and thus posting it. However, other recent ransomware incidents have demonstrated that cybercriminals not only deny access to data, but actually steal it – doxing victims and leaking data, then moving on to auctioning stolen data on dedicated underground sites, and even forming “cartels” to attract more buyers.

Reports around Garmin’s incident certainly don’t confirm this, but post-WastedLocker the industry should reassess risk and the value of users’ sports data – including personally identifiable information and location – especially where devices enable multiplatform integration. Under these circumstances, the value of “sports data” quickly takes on a level of seriousness akin to that of health data.

With this incident, a new way for cybercriminals to pressure businesses into paying ransoms has unfolded. As such, we can imagine that many other companies could fall prey to similar patterns of abuse. Fitness center franchises, personal trainers and physical therapists – and their natural overlap with healthcare providers – offer a troubling synergy for attackers.

Alternatively, and outside of sports, we can imagine the knock-on effects of malware attacks on the recent explosion of food delivery services. Often on a tight budget and employing both location data and customer databases with personally identifiable information, these types of businesses could also be prone to ransomed data and, in some cases, may have lower levels of cybersecurity maturity than the service providers focused on sports data.

Shifting security to a higher gear

If you are an athlete, be mindful of how your data, as well as device and service integration, can open you up to new threat vectors. If you know or suspect that your data has been compromised in a breach, be diligent and proactively ask your service provider to offer an identity theft protection and monitoring service. In the case of Garmin’s recent troubles, users seem to be in the clear with regard to the encryption status of their data. However, in cases where the likelihood of your personal sports stats and data being in criminal hands is high, you should be on the lookout for targeted phishing attacks or attempts at identity fraud in the near future.

Suggested further reading:

Simple steps to protect yourself against identity theft
Blackbaud data breach: What you should know (especially the “What’s missing” section)
Privacy of fitness tracking apps in the spotlight after soldiers’ exercise routes shared online
Polar Flow app exposes geolocation data of soldiers and secret agents

Zoom makes 2FA available for all its users

Zoom now supports phone calls, text messages and authentication apps as forms of two-factor authentication  

Zoom is rolling out support for two-factor authentication (2FA) across its web, desktop, and mobile applications, allowing users to double down on the security of their accounts with an extra layer of protection. 

For context, 2FA systems require users to pass authentication challenges that need responses from two different factors. There are three classic authentication factors that are commonly used: something you know, like a password or PIN code; something you have, such as a physical key or an authentication app; and something you are, which includes biometrics like fingerprints or retina scans.

The videoconferencing platform announced the new security feature in a blog stating: “Zoom’s enhanced Two-Factor Authentication (2FA) makes it easier for admins and organizations to protect their users and prevent security breaches right from our own platform.” In a statement provided to The Verge, the company confirmed that it is making the feature available to all its users across the board, including those using its free plan. 

Zoom also described the ways users can authenticate themselves while signing into their accounts, “With Zoom’s 2FA, users have the option to use authentication apps that support Time-Based One-Time Password (TOTP) protocol (such as Google Authenticator, Microsoft Authenticator, and FreeOTP), or have Zoom send a code via SMS or phone call, as the second factor of the account authentication process.” 
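For the curious, here is a minimal Python sketch of how a TOTP authenticator app derives such codes – RFC 6238 with the common defaults of SHA-1, a 30-second period and six digits. It is a generic illustration, not Zoom’s implementation:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # Compute the current time-based one-time password for a shared base32 secret.
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period                 # number of elapsed time steps
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Both the authenticator app and the server derive the same code from the shared secret.
print(totp("JBSWY3DPEHPK3PXP"))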

RELATED READING: Privacy watchdogs urge videoconferencing services to boost privacy protections

While using SMS text messages as a form of two-factor authentication is better than not using 2FA at all, it’s preferable to opt for one of the supported authentication apps, since doing so makes it more difficult for cybercriminals to access your account even if you become the target of a SIM swapping attack.

The video communication company also allows users to use recovery codes to sign into their accounts in the event that their device gets lost or stolen. You can check out the whole process of activating 2FA, as well as using recovery codes, on the platform’s help center.

With the COVID-19 pandemic forcing a lot of companies to transition to remote working, Zoom and other videoconferencing and communication services have enjoyed a boost in popularity. However, the company has also been in the spotlight due to the privacy and security issues it experienced after users flocked to its platform in large numbers. If you’re a Zoom user, you should also check out our article on getting your Zoom security settings right.

Portland passes the strictest facial recognition technology ban yet in the US

Oregon’s largest city aims to be a trailblazer when it comes to facial recognition legislation 

On Wednesday, the Portland City Council passed what could be considered one of the strictest facial recognition bans in the United States. The legislation bars both city government agencies and private businesses from using the technology within the city.

While bans on the public use of facial recognition have been previously passed by other cities, Portland is the first to bar private use of this technology. As stated by Portland City Council Commissioner Jo Ann Hardesty, quoted by OneZero: “I believe what we’re passing is model legislation that the rest of the country will be emulating as soon as we have completed our work here.”

The bill, which was passed unanimously by the city’s legislative body, comprises two ordinances. The first, which bans the public use of facial recognition technology, came into effect immediately after the bill was passed; it also gives all city bureaus 90 days to complete an assessment of their use of facial recognition. Meanwhile, the second ordinance is aimed at blocking use of the technology by “private entities in places of public accommodation” and will take effect on January 1st, 2021.

Specifically, places like hotels, restaurants, movie theaters, educational institutions, barbershops and others will be prohibited from using facial recognition technology. Venues violating the ban could be compelled to pay a fine of US$1,000 for each day of violation. 

The ordinances also set out some exceptions where facial recognition can be used. Examples include means of verification for unlocking smartphones, automated face detection used by social media apps for tagging someone, and for city bureaus and agencies to obscure and redact faces to protect privacy when images are released outside the city.

Although the topic of using facial recognition is a contentious issue, especially from the privacy versus security point of view, the number of cities banning the use of the surveillance technology has been slowly growing. San Francisco became the first US city to ban the technology, with other US cities following in its footsteps, including Oakland, Cambridge, and Berkeley. Preceding Portland, Boston was the most recent city to join their ranks, barring city officials from using the technology and from procuring facial surveillance from third parties.

Who is calling? CDRThief targets Linux VoIP softswitches

ESET researchers have discovered and analyzed malware that targets Voice over IP (VoIP) softswitches

This new malware that we have discovered and named CDRThief is designed to target a very specific VoIP platform, used by two China-produced softswitches (software switches): Linknat VOS2009 and VOS3000. A softswitch is a core element of a VoIP network that provides call control, billing, and management. These softswitches are software-based solutions that run on standard Linux servers.

The primary goal of the malware is to exfiltrate various private data from a compromised softswitch, including call detail records (CDR). CDRs contain metadata about VoIP calls such as caller and callee IP addresses, starting time of the call, call duration, calling fee, etc.

To steal this metadata, the malware queries internal MySQL databases used by the softswitch. Thus, attackers demonstrate a good understanding of the internal architecture of the targeted platform.

We noticed this malware in one of our sample sharing feeds, and as entirely new Linux malware is a rarity, it caught our attention. What was even more interesting was that it quickly became apparent that this malware targeted a specific Linux VoIP platform. Its ELF binary was produced by the Go compiler with the debug symbols left unmodified, which is always helpful for the analysis.

To hide malicious functionality from basic static analysis, the authors encrypted all suspicious-looking strings with XXTEA and the key fhu84ygf8643, and then base64 encoded them. Figure 1 shows some of the code the malware uses to decrypt these strings at runtime.

Figure 1. The routine used to decrypt the binary’s strings
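To illustrate the scheme described above – XXTEA encryption followed by base64 encoding – here is a minimal Python sketch that reverses the process. It is not code taken from the malware; in particular, zero-padding the short key to 16 bytes and the little-endian word order are assumptions:

import base64
import struct

DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF

def _mx(y, z, s, key, p, e):
    # XXTEA mixing function, with all arithmetic reduced modulo 2**32.
    return ((((z >> 5) ^ (y << 2)) + ((y >> 3) ^ (z << 4))) ^ ((s ^ y) + (key[(p & 3) ^ e] ^ z))) & MASK

def xxtea_decrypt(data: bytes, key: bytes) -> bytes:
    key = key.ljust(16, b"\x00")[:16]              # assumption: short key is zero-padded to 16 bytes
    k = struct.unpack("<4I", key)                  # assumption: little-endian word order
    n = len(data) // 4
    v = list(struct.unpack("<%dI" % n, data[:n * 4]))
    rounds = 6 + 52 // n
    s = (rounds * DELTA) & MASK
    y = v[0]
    for _ in range(rounds):
        e = (s >> 2) & 3
        for p in range(n - 1, 0, -1):
            z = v[p - 1]
            v[p] = (v[p] - _mx(y, z, s, k, p, e)) & MASK
            y = v[p]
        z = v[n - 1]
        v[0] = (v[0] - _mx(y, z, s, k, 0, e)) & MASK
        y = v[0]
        s = (s - DELTA) & MASK
    return struct.pack("<%dI" % n, *v)

def decode_string(blob: str, key: str = "fhu84ygf8643") -> bytes:
    # Strings were encrypted first and base64 encoded second, so decode, then decrypt.
    return xxtea_decrypt(base64.b64decode(blob), key.encode()).rstrip(b"\x00")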

To access internal data stored in the MySQL database, the malware reads credentials from Linknat VOS2009 and VOS3000 configuration files that it attempts to locate in the following paths:

  • /usr/kunshi/vos2009/server/etc/server_db_config.xml
  • /usr/kunshi/vos3000/server/etc/server_db_config.xml
  • /home/kunshi/vos2009/server/etc/server_db_config.xml
  • /home/kunshi/vos3000/server/etc/server_db_config.xml
  • /home/kunshi/vos2009/etc/server_db_config.xml
  • /home/kunshi/vos3000/etc/server_db_config.xml
  • /usr/kunshi/vos2009/server/etc/serverdbconfig.xml
  • /usr/kunshi/vos3000/server/etc/serverdbconfig.xml

Interestingly, the password from the configuration file is stored encrypted. However, Linux/CDRThief malware is still able to read and decrypt it. Thus, the attackers demonstrate deep knowledge of the targeted platform, since the algorithm and encryption keys used are not documented as far as we can tell. It means that the attackers had to reverse engineer platform binaries or otherwise obtain information about the AES encryption algorithm and key used in the Linknat code.

As seen in Figure 2, CDRThief communicates with C&C servers using JSON over HTTP.

Figure 2. Captured network communication of the Linux/CDRThief malware

There are multiple functions in Linux/CDRThief’s code used for communication with C&C servers. Table 1 contains the original names of these functions used by the malware authors.

Table 1. Functions used for communication with C&C

Function name | C&C path | Purpose
main.pingNet | /dataswop/a | Checks if C&C is alive
main.getToken | /dataswop/API/b | Obtains token
main.heartbeat | /dataswop/API/gojvxs | Main C&C loop, called every three minutes
main.baseInfo | /dataswop/API/gojvxs | Exfiltrates basic information about the compromised Linknat system: MAC address; cat /proc/version; whoami; cat /etc/redhat-release; UUID from /bin/ibus_10.mo (or /home/kunshi/base/ibus_10.mo)
main.upVersion | /dataswop/Download/updateGoGoGoGoGo | Updates itself to the latest version
main.pushLog | /dataswop/API/gojvxs | Uploads malware error log
main.load | /dataswop/API/gojvxs | Exfiltrates various information about the platform: SELECT SUM(TABLE_ROWS) FROM information_schema.TABLES WHERE table_name LIKE 'e_cdr_%'; cat /etc/motd; username, encrypted password and IP address of the database; ACCESS_UUID from server.conf; VOS software version
main.syslogCall | /dataswop/API/gojvxs | Exfiltrates data from e_syslog tables
main.gatewaymapping | /dataswop/API/gojvxs | Exfiltrates data from e_gatewaymapping tables
main.cdr | /dataswop/API/gojvxs | Exfiltrates data from e_cdr tables

In order to exfiltrate data from the platform, Linux/CDRThief executes SQL queries directly to the MySQL database. Mainly, the malware is interested in three tables:

  • e_syslog – contains log of system events
  • e_gatewaymapping – contains information about VoIP gateways (see Figure 3)
  • e_cdr – contains call data records (metadata of calls)

Figure 3. Disassembled code of the function that initializes an SQL query

Data to be exfiltrated from the e_syslog, e_gatewaymapping, and e_cdr tables is compressed and then encrypted with a hardcoded RSA-1024 public key before exfiltration. Thus, only the malware authors or operators can decrypt the exfiltrated data.
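As a purely illustrative sketch of such a compress-then-encrypt step (the gzip compression and the hardcoded RSA-1024 public key are described in this analysis, while the chunking and padding choices below are assumptions rather than a reconstruction of CDRThief’s actual format):

import gzip

from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In the real malware the attackers' 1024-bit public key is hardcoded;
# here a throwaway key pair is generated so the sketch is runnable.
_private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
PUBLIC_KEY = _private_key.public_key()

def pack_for_exfiltration(records: bytes) -> list:
    """Compress stolen records, then encrypt them with the embedded public key."""
    compressed = gzip.compress(records)
    # RSA-1024 with PKCS#1 v1.5 padding can encrypt at most 117 bytes per block,
    # so the compressed payload is split into chunks (assumed chunking scheme).
    chunk_size = 117
    return [
        PUBLIC_KEY.encrypt(compressed[i:i + chunk_size], padding.PKCS1v15())
        for i in range(0, len(compressed), chunk_size)
    ]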

Based on the described functionality, we can say that the malware’s primary focus is on collecting data from the database. Unlike other backdoors, Linux/CDRThief does not have support for shell command execution or exfiltrating specific files from the compromised softswitch’s disk. However, these functions could be introduced in an updated version.

The malware can be deployed to any location on the disk under any file name. It’s unknown what type of persistence is used for starting the malicious binary at each boot. However, it should be noted that once the malware is started, it attempts to launch a legitimate binary present on the Linknat VOS2009/VOS3000 platform using the following command:

exec -a '/home/kunshi/callservice/bin/callservice -r /home/kunshi/.run/callservice.pid'

This suggests that the malicious binary might somehow be inserted into a regular boot chain of the platform in order to achieve persistence and possibly masquerading as a component of the Linknat softswitch software.

At the time of writing we do not know how the malware is deployed onto compromised devices. We speculate that attackers might obtain access to the device using a brute-force attack or by exploiting a vulnerability. Such vulnerabilities in VOS2009/VOS3000 have been reported publicly in the past.

We analyzed Linux/CDRThief malware, which has the unique purpose of targeting specific VoIP softswitches. We rarely see VoIP softswitches targeted by threat actors; this makes the Linux/CDRThief malware interesting.

It’s hard to know the ultimate goal of attackers who use this malware. However, since this malware exfiltrates sensitive information, including call metadata, it seems reasonable to assume that the malware is used for cyberespionage. Another possible goal for attackers using this malware is VoIP fraud. Since the attackers obtain information about activity of VoIP softswitches and their gateways, this information could be used to perform International Revenue Share Fraud (IRSF).

For any inquiries, or to make sample submissions related to the subject, contact us at threatintel@eset.com.

ESET detection name

Linux/CDRThief.A

File based mutexes

/dev/shm/.bin
/dev/shm/.linux

Files created during malware update

/dev/shm/callservice
/dev/shm/sys.png

Hashes

CC373D633A16817F7D21372C56955923C9DDA825
8E2624DA4D209ABD3364D90F7BC08230F84510DB (UPX packed)
FC7CCABB239AD6FD22472E5B7BB6A5773B7A3DAC
8532E858EB24AE38632091D2D790A1299B7BBC87 (Corrupted)
82F51F098B85995C966135E9E7F63D1D8DC97589 (UPX packed)

C&C

http://119.29.173[.]65
http://129.211.157[.]244
http://129.226.134[.]180
http://150.109.79[.]136
http://34.94.199[.]142
http://35.236.173[.]187
http://update[.]callercore[.]com

Exfiltration encryption key (RSA)

-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCQ3k3GgS3FX4pI7s9x0krBYqbMcSaw4BPY91Ln
tt5/X8s9l0BC6PUTbQcUzs6PPXhKKTx8ph5CYQqdWynxOLJah0FMMRYxS8d0HX+Qx9eWUeKRHm2E
AtZQjdHxqTJ9EBpHYWV4RrWmeoOsWAOisvedlb23O0E55e8rrGGrZLhPbwIDAQAB
-----END PUBLIC KEY-----

Note: This table was built using version 7 of the MITRE ATT&CK framework.

Tactic | ID | Name | Description
Defense Evasion | T1027 | Obfuscated Files or Information | Linux/CDRThief contains obfuscated strings in the payload.
Defense Evasion | T1027.002 | Obfuscated Files or Information: Software Packing | Some Linux/CDRThief samples are packed with UPX.
Credential Access | T1552.001 | Unsecured Credentials: Credentials In Files | Linux/CDRThief reads credentials for the MySQL database from a configuration file.
Discovery | T1082 | System Information Discovery | Linux/CDRThief obtains detailed information about the compromised computer.
Collection | T1560.003 | Archive Collected Data: Archive via Custom Method | Linux/CDRThief compresses stolen data with gzip before exfiltration.
Command and Control | T1071.001 | Application Layer Protocol: Web Protocols | Linux/CDRThief uses HTTP for communication with the C&C server.
Exfiltration | T1041 | Exfiltration Over C2 Channel | Linux/CDRThief exfiltrates data to the C&C server.

UK University suffers cyberattack, ransomware gang claims responsibility 

The cyber-incident takes most of the university’s systems offline and officials estimate that the institution will take weeks to recover

While students are slowly preparing to return to their universities and colleges after a prolonged absence due to the COVID-19 pandemic, Newcastle University in England has been left reeling from a cybersecurity incident that has affected almost all of its systems.

The university first became aware of the cyber-incident disrupting its networks and IT systems on Sunday, August 30th, and deployed a full incident response plan to evaluate the extent of the issue and stabilize the situation. 

Newcastle University only stated that it suffered a cyberattack, without identifying a culprit; however, according to BleepingComputer, the DoppelPaymer ransomware gang is claiming credit for the attack, sharing 750Kb (sic) of stolen data on its website as proof.

Due to the early stage of the investigation, officials did not disclose whether any personal information was compromised. However, they insisted that the university takes the security of its systems seriously and that it responded quickly to the situation.

Moreover, they confirmed that there was no evidence that university payroll data had been compromised, adding that the online payment system was not affected either, since it is managed offsite by the university’s payment provider.

The incident response itself also brings issues. “All University systems – with the exceptions of those listed in the communications (Office365 – including email and Teams, Canvas and Zoom) are either unavailable or available but with limitations. Access may cease at any point,” officials said on the dedicated incident webpage.

University officials also warned that many of the institution’s IT systems will not be working, that those currently operational may be taken offline without prior notice, that staff may lose access to their accounts without notice, and that devices may be removed if they have been impacted by the incident. The university went on to recommend that students and staff transfer any essential or critical data to their OneDrive accounts.

An update from the University Executive Board to the staff has revealed that the ongoing IT issues have forced teams at the Faculty of Medical Sciences to register over 1,000 returning medical students manually over the weekend, before they were set to return on Monday. 

Newcastle University’s IT service (NUIT) is working to recover its systems while aiding the police and the National Crime Agency in their investigation. The UK’s Information Commissioner’s Office has been notified as well.

Universities falling victim to cyberattacks is not an unusual occurrence since, besides handling the personal data of employees and students, they tend to work on highly valuable research. In 2019, a malware infestation led to a curious password retrieval process, where 38,000 people were forced to pick up their passwords in person.

Photo caption: Newcastle University

Lead‑offering business booming as usual!

…but there are no conferences or exhibitions???

Being a regular presenter and visitor at conferences and exhibitions, I am no stranger to unsolicited emails offering to sell the “verified” list of visitors or attendees, complete with job functions and contact details – even for conferences and exhibitions I do not attend and often do not even know exist!

Let’s not revisit the GDPR issues where private data has been sold, and over the last two years you must have read enough articles about GDPR non-compliance. The phenomenon of these offerings continues during the COVID-19 period, despite conference after conference and exhibition after exhibition postponed, cancelled or going virtual. Most likely the lists of contact details now being offered are from past events – at best from the previous year.

For example, the world’s largest mobile phone showcase, the Mobile World Congress Barcelona, better known by its abbreviation MWC, was scheduled for 24–27 February 2020.

However, on 12 February 2020 the GSMA, the organizer of MWC, decided to cancel the show: “With due regard to the safe and healthy environment in Barcelona and the host country today, the GSMA has cancelled MWC Barcelona 2020 because the global concern regarding the coronavirus outbreak, travel concern and other circumstances, make it impossible for the GSMA to hold the event.”

An event of this magnitude – typically attended by over 100,000 people in recent years – being cancelled for a valid reason at such short notice creates a logistical nightmare, not only for the organizers but also for exhibitors, presenters and delegates. However, one thing was obvious: nobody would be there. Nevertheless, the fine folk in the “leads” business continued as if nothing had happened: for example, nine days after the event was canceled, and just three days before its originally scheduled start, I received spam offering 95,890 lies… ummm, I mean supposed contact records, for those who would be attending.


 

One could still argue that – as the event had not happened yet – this was an oversight.

However, two months after the event had been scheduled to take place, and almost three months after it had been cancelled, a version of the visitors list was still being offered.


 

Besides the “follow-up” coming from someone I had never heard from before, the mentioned discount must also have applied to the number of lies, uhhh… visitors (16,579).

InfoSec World 2020, rather than cancelling, went virtual, as seen in these tweets:


 

Again, despite the in-person conference being canceled, I received several offers of attendee lists. If we look at all the different messages I received about InfoSec World 2020, some interesting artifacts are obvious:


 

One interesting question remaining is how these scammers and leeches – this business is clearly far from clean, as it depends on sending email spam as its main sales method – obtain their information. Besides sharing data amongst themselves in a self-perpetuating business, the truth is that we give away a lot of information about ourselves. Now I can hear many of you say: “I don’t do that! I am careful with my details!” But all it takes for these people is an email address to start. How many of you habitually click “Yes” to these requests from Outlook?


 

To be honest, one wonders why Outlook does not disable this feature by default. First of all, it can be considered a tracking feature (invading your privacy), confirming that you have received and/or read a message. It also confirms to the sender that the email address is active and monitored, which may invite more spam. And it can be misleading too: the recipient may not actually have read the message, may be in a hurry and click Yes when the above window pops up, or may already be typing and hit the Enter key. Note that the default selected answer is “Yes”.

It is not difficult at all to disable that “problem”. For Outlook, it is done via the Options:


 

But consider also how many of my regular contacts set Out-Of-Office status messages that include details on their function, alternative contacts and more – here’s a real-life example, obfuscated for obvious reasons:

I’m currently out of the office until [%DATE%], 2020 with limited or delayed email possibilities.

If you have urgent press or media related questions please try to contact me on my mobile phone ([%CELLPHONE_NUMBER%]) or via Social Media or alternatively try to contact [%OTHER_CONTACT_NAME%]([%OTHER_CONTACT_EMAIL_ADDRESS%]).

All emails received on this account will always be kept confidential for security reasons.

Regards,
[%FIRST_NAME%]


[%FULL_NAME%]
[%TITLE/FUNCTION%] – [%COMPANYNAME%]
[%CELLPHONE_NUMBER%] – [%SOCIAL_MEDIA_HANDLE%]

[%COMPANYNAME%] – [%COMPANY_ADDRESS%]

[%COMPANY_URLS%]

Despite the best of intentions, this gives away a lot of information: not only yours, but also starting information on an alternative contact – a gold mine for the aforementioned scammers.

One thing you should never do is take the scammers’ “advice”, typically presented as a footnote to their emails. The top 25 recommendations these scammers have suggested to me in the last year are:

  • If this is not relevant, please reply with “Not Relevant” in the subject line
  • If you are not interested in receiving our mails reply in subject line leave out or remove.
  • If you do not wish to hear from us again, please respond back with “opt out” and we will honour your request.
  • If you do not wish to receive further mail please reply with “Unsubscribe” in your subject line
  • If you do not wish to receive future emails from us, please reply as “opt-out”
  • If you do not wish to receive future emails from us, please reply as opt-out
  • If you don’t want further emails, please Unsub
  • If you don’t want future correspondence type “NR” in subject line
  • If you don’t want to receive further emails please revert with “Take Out” in the subject
  • If you don’t wish to receive email from us please reply back with Opt Out
  • If you don’t wish to receive emails from us reply back with LEAVE OUT
  • If you don’t wish to receive further any email please reply us with sub line ‘leave out
  • If you don’t wish to receive emails from us reply back with “Unsubscribe”.
  • If you don’t wish to receive our newsletters, reply back with “UN-SUBSCRIBE“ in subject line.
  • If you’re not interested in mailing please reply with “Leave Out” in the subject line.
  • If you’re not interested please reply subject line as “Take OFF”.
  • Instead of reporting this email as spam, kindly reply “Leave-Out” or “Unsubscribe” and we will make sure that you do not receive another email from our company.
  • Note: you were specifically sent this email based upon your company profile, if you do not wish to receive future emails from us, please reply as “No Requirements”.
  • Note: You were specifically sent this email based upon your company profile. If for some reason this was sent in error or you wish not to receive any further messages from us please reply with subject line as “Exclude”
  • To discontinue receiving email from us, reply as “Exclude”
  • To remove from this mailing: reply with subject line as “leave out.”
  • To remove, kindly respond with “Abolish”.
  • To remove, kindly respond with “Cancel”.
  • To unsubscribe from receiving future emails please send LEAVE OUT
  • To unsubscribe, send us an email with the subject ‘unsubscribe’

Note the sometimes extreme similarity; we suspect this is a pathetic attempt to avoid spam filters.

By replying, you confirm your email address is valid and you may end up in more “verified email address” databases and add to the problem. And, of course, that is, if you can even read those lines at the bottom of these email messages, as rather often they are in an extremely small font or in a color very similar to, or the same  as, the background color.

Sometimes, for your “convenience”, the advice includes hyperlinked text. For example:


 

Needless to say, you should never click on such links. In these specific cases, they are tracking links, not only confirming the validity of your email address, but also exposing more details.


 

Other links in scammers’ emails may point to their websites, which could be another method of tracking valid email addresses.


 

With so many conferences going virtual (and many for free), the lead-offering business has adapted and now offers what are presumably fake virtual attendee lists.


 

Interestingly, ESET was not a (virtual) exhibitor at the Black Hat 2020 Virtual Conference, but even if we had been, we would have collected information from the visitors to our “booth” ourselves. In any event, it is very unlikely that conference organizers, in the days of GDPR, are willing to share so many details.

We actually tested that with a conference ESET has a long relationship with and that is going virtual this year, asking if we could get (even just) the attendees’ email addresses.


 

’nuff said!

The lead-offering business keeps itself booming, and as long as it can find data freely (or we even hand data over, making it (semi-)verified), this will never stop. We should stop supplying these people with free information (or, better, stop validating information) by volunteering data in our Out-Of-Office notifications. Most, if not all, email clients have options to use different OOO messages for contacts inside and outside your organization, and for contacts in and not in your address book. It makes sense to have as little information as possible in the OOO messages sent to the latter groups.

It is utopian to imagine we can put a halt to this business. Even if we become more sensitive and stricter about sharing (or confirming) our identifiable details, the people in the lead-offering business can continue making up their lists or (re)using old(er) details. Nevertheless, it is never too late to start taking this more seriously, so why not start right now?

TikTok Family Pairing: Curate your children’s content and more

With TikTok being all the rage especially with teens, we look at a feature that gives parents greater control over how their children interact with the app

In our previous article, we looked at how TikTok users can protect themselves using the available security and privacy options. However, besides adults, the platform is very popular with a younger audience, especially teenagers and tweens. This may cause some parents to worry about what their progeny are up to on the app, especially since dangers may sometimes be lurking in the shadows in the form of unsavory characters or dubious content that shouldn’t be viewed until their children reach a certain age.

That’s why earlier this year TikTok introduced a feature called Family Pairing. Although previously parents were able to set restrictions as well, they had to do it directly from their children’s devices. The new feature is more convenient and gives parents a larger degree of control and oversight.

How can parents control how their kids use TikTok?

Setting up Family Pairing is quite straightforward. You have to tap on “Me” and then on the three dots, which takes you to the Settings and Privacy menu. If you scroll down, you should see the Family Pairing section. After tapping on it, you’re able to choose whether you are the parent or the teen.


 

Teens’ options are limited: they can only scan the QR code from their parents’ app, thereby handing some control of the app over to their parents. Parents, on the other hand, can now curate the app and restrict or allow certain features.


 

Using Screen Time Management, parents can set a limit on how much time their kids spend engaging with the app. This is quite handy, since many teens already spend an overwhelming amount of time on their phones, browsing through social media, playing games, and incessantly sharing every little detail of their lives.


 

Another thing parents should be wary of is the content – not everything that appears on TikTok is suitable for the eyes of kids and teenagers. That’s why, in a bid to keep the platform family-friendly, the app allows parents to set up Restricted Mode. This will filter away content “that may not be suitable for all audiences”, as the social media app puts it. If you’re not sure what that means, an example would be adult performers using the platform to reach a wider audience. Both Screen Time Management and Restricted Mode are available for everyone in the Digital Wellbeing menu.

RELATED READING: 3 things to discuss with your kids before they join social media

The third option afforded to parents is restricting who can communicate with their children via direct messaging. They can prevent certain people from messaging their children or opt for turning messaging off altogether. As far as inappropriate messages go, TikTok doesn’t allow anyone to send images or videos via direct message, and additionally, earlier this year, the platform started automatically disabling direct messages to registered users who are under the age of 16.

Final thoughts

While some may argue that this infringes on children’s privacy, Gen Z is the first generation to be brought up in the modern digital world with different types of risks, unlike those their parents and grandparents have faced. So, easing them into social media with a bit of guidance is better than leaving them to their own devices. This way they can learn about the risks associated with social media and be raised to become responsible, upstanding netizens.

On that note, if you want to teach them about the privacy and security settings on TikTok you can refer to our previous article.

To learn more about the dangers children face online, as well as how more than just technology can help, head over to Safer Kids Online.

Announcing new reward amounts for abuse risk researchers


It has been two years since we officially expanded the scope of Google’s Vulnerability Reward Program (VRP) to include the identification of product abuse risks.
Thanks to your work, we have identified more than 750 previously unknown product abuse risks, preventing abuse in Google products and protecting our users. Collaboration to address abuse is important, and we are committed to supporting research on this growing challenge. To take it one step further, and as of today, we are announcing increased reward amounts for reports focusing on potential attacks in the product abuse space.
The nature of product abuse is constantly changing. Why? The technology (product and protection) is changing, the actors are changing, and the field is growing. Within this dynamic environment, we are particularly interested in research that protects users’ privacy, ensures the integrity of our technologies, as well as prevents financial fraud or other harms at scale.
Research in the product abuse space helps us deliver trusted and safe experiences to our users. Martin Vigo’s research on Google Meet’s dial-in feature is one great example of an 31337 report that allowed us to better protect users against bad actors. His research provided insight on how an attacker could attempt to find Meet Phone Numbers/Pin, which enabled us to launch further protections to ensure that Meet would provide a secure technology connecting us while we’re apart.
New Reward Amounts for Abuse Risks
What’s new? Based on the great submissions that we received in the past as well as feedback from our Bug Hunters, we increased the highest reward by 166% from $5,000 to $13,337. Research with medium to high impact and probability will now be eligible for payment up to $5,000.
What did not change? Identification of new product abuse risks remains the primary goal of the program. Reports that qualify for a reward are those that will result in changes to the product code, as opposed to removal of individual pieces of abusive content. The final reward amount for a given abuse risk report also remains  at the discretion of the reward panel. When evaluating the impact of an abuse risk, the panels look at both the severity of the issue as well as the number of impacted users.
What’s next? We plan to expand the scope of Vulnerability Research Grants to support research preventing abuse risks. Stay tuned for more information!
Starting today the new rewards take effect. Any reports that were submitted before September 1, 2020 will be rewarded based on the previous rewards table.
We look forward to working closely together with the researcher community to prevent abuse of Google products and ensure user safety.
Happy bug hunting!

Pixel 4a is the first device to go through ioXt at launch

Trust is very important when it comes to the relationship between a user and their smartphone. While phone functionality and design can enhance the user experience, security is fundamental and foundational to our relationship with our phones. There are multiple ways to build trust around the security capabilities that a device provides, and we continue to invest in verifiable ways to do just that.

Pixel 4a ioXt certification

Today we are happy to announce that the Pixel 4/4 XL and the newly launched Pixel 4a are the first Android smartphones to go through ioXt certification against the Android Profile.

The Internet of Secure Things Alliance (ioXt) manages a security compliance assessment program for connected devices. ioXt has over 200 members across various industries, including Google, Amazon, Facebook, T-Mobile, Comcast, Zigbee Alliance, Z-Wave Alliance, Legrand, Resideo, Schneider Electric, and many others. With so many companies involved, ioXt covers a wide range of device types, including smart lighting, smart speakers, webcams, and Android smartphones.

The core focus of ioXt is “to set security standards that bring security, upgradability and transparency to the market and directly into the hands of consumers.” This is accomplished by assessing devices against a baseline set of requirements and relying on publicly available evidence. The goal of ioXt’s approach is to enable users, enterprises, regulators, and other stakeholders to understand the security in connected products to drive better awareness towards how these products are protecting the security and privacy of users.

ioXt’s baseline security requirements are tailored for product classes, and the ioXt Android Profile enables smartphone manufacturers to differentiate security capabilities, including biometric authentication strength, security update frequency, length of security support lifetime commitment, vulnerability disclosure program quality, and preloaded app risk minimization.

We believe that using a widely known industry consortium standard for Pixel certification provides increased trust in the security claims we make to our users. NCC Group has published an audit report that can be downloaded here. The report documents the evaluation of Pixel 4/4 XL and Pixel 4a against the ioXt Android Profile.

Security by Default is one of the most important criteria used in the ioXt Android profile. Security by Default rates devices by cumulatively scoring the risk for all preloads on a particular device. For this particular measurement, we worked with a team of university experts from the University of Cambridge, University of Strathclyde, and Johannes Kepler University in Linz to create a formula that considers the risk of platform signed apps, pregranted permissions on preloaded apps, and apps communicating using cleartext traffic.

Screenshot of the presentation of the Android Device Security Database at the Android Security Symposium 2020

In partnership with those teams, Google created Uraniborg, an open source tool that collects necessary attributes from the device and runs them through this formula to come up with a raw score. NCC Group leveraged Uraniborg to conduct the assessment for the ioXt Security by Default category.

As part of our ongoing certification efforts, we look forward to submitting future Pixel smartphones through the ioXt standard, and we encourage the Android device ecosystem to participate in similar transparency efforts for their devices.

Acknowledgements: This post leveraged contributions from Sudhi Herle, Billy Lau and Sam Schumacher

Towards native security defenses for the web ecosystem

With the recent launch of Chrome 83, and the upcoming release of Mozilla Firefox 79, web developers are gaining powerful new security mechanisms to protect their applications from common web vulnerabilities. In this post we share how our Information Security Engineering team is deploying Trusted Types, Content Security Policy, Fetch Metadata Request Headers and the Cross-Origin Opener Policy across Google to help guide and inspire other developers to similarly adopt these features to protect their applications.

History

Since the advent of modern web applications, such as email clients or document editors accessible in your browser, developers have been dealing with common web vulnerabilities which may allow user data to fall prey to attackers. While the web platform provides robust isolation for the underlying operating system, the isolation between web applications themselves is a different story. Issues such as XSS, CSRF and cross-site leaks have become unfortunate facets of web development, affecting almost every website at some point in time.

These vulnerabilities are unintended consequences of some of the web’s most wonderful characteristics: composability, openness, and ease of development. Simply put, the original vision of the web as a mesh of interconnected documents did not anticipate the creation of a vibrant ecosystem of web applications handling private data for billions of people across the globe. Consequently, the security capabilities of the web platform meant to help developers safeguard their users’ data have evolved slowly and provided only partial protections from common flaws.

Web developers have traditionally compensated for the platform’s shortcomings by building additional security engineering tools and processes to protect their applications from common flaws; such infrastructure has often proven costly to develop and maintain. As the web continues to change to offer developers more impressive capabilities, and web applications become more critical to our lives, we find ourselves in increasing need of more powerful, all-encompassing security mechanisms built directly into the web platform.

Over the past two years, browser makers and security engineers from Google and other companies have collaborated on the design and implementation of several major security features to defend against common web flaws. These mechanisms, which we focus on in this post, protect against injections and offer isolation capabilities, addressing two major, long-standing sources of insecurity on the web.

Injection Vulnerabilities

In the design of systems, mixing code and data is one of the canonical security anti-patterns, causing software vulnerabilities as far back as in the 1980s. It is the root cause of vulnerabilities such as SQL injection and command injection, allowing the compromise of databases and application servers.

On the web, application code has historically been intertwined with page data. HTML markup such as <script> elements or event handler attributes (onclick or onload) allow JavaScript execution; even the familiar URL can carry code and result in script execution when navigating to a javascript: link. While sometimes convenient, the upshot of this design is that – unless the application takes care to protect itself – data used to compose an HTML page can easily inject unwanted scripts and take control of the application in the user’s browser.

Addressing this problem in a principled manner requires allowing the application to separate its data from code; this can be done by enabling two new security features: Trusted Types and Content Security Policy based on script nonces.

Trusted Types
Main article: web.dev/trusted-types by Krzysztof Kotowicz

JavaScript functions used by developers to build web applications often rely on parsing arbitrary structure out of strings. A string which seems to contain data can be turned directly into code when passed to a common API, such as innerHTML. This is the root cause of most DOM-based XSS vulnerabilities.

Trusted Types make JavaScript code safe-by-default by restricting risky operations, such as generating HTML or creating scripts, to require a special object – a Trusted Type. The browser will ensure that any use of dangerous DOM functions is allowed only if the right object is provided to the function. As long as an application produces these objects safely in a central Trusted Types policy, it will be free of DOM-based XSS bugs.

You can enable Trusted Types by setting the following response header:
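A typical enforcement policy looks like the following (illustrative):

Content-Security-Policy: require-trusted-types-for 'script'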

We have recently launched Trusted Types for all users of My Google Activity and are working with dozens of product teams across Google as well as JavaScript framework owners to make their code support this important safety mechanism.

Trusted Types are supported in Chrome 83 and other Chromium-based browsers, and a polyfill is available for other user agents.

Content Security Policy based on script nonces
Main article: Reshaping web defenses with strict Content Security Policy

Content Security Policy (CSP) allows developers to require every <script> on the page to contain a secret value unknown to attackers. The script nonce attribute, set to an unpredictable number for every page load, acts as a guarantee that a given script is under the control of the application: even if part of the page is injected by an attacker, the browser will refuse to execute any injected script which doesn’t identify itself with the correct nonce. This mitigates the impact of any server-side injection bugs, such as reflected XSS and stored XSS.

CSP can be enabled by setting the following HTTP response header:
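A nonce-based policy is roughly of this shape (illustrative; production policies typically add fallbacks for older browsers):

Content-Security-Policy: script-src 'nonce-{RANDOM}' 'strict-dynamic'; object-src 'none'; base-uri 'none'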

This header requires all scripts in your HTML templating system to include a nonce attribute with a value matching the one in the response header:
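For example (illustrative):

<script nonce="{RANDOM}" src="https://example.com/app.js"></script>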

Our CSP Evaluator tool can help you configure a strong policy. To help deploy a production-quality CSP in your application, check out this presentation and the documentation on csp.withgoogle.com.

Since the initial launch of CSP at Google, we have deployed strong policies on 75% of outgoing traffic from our applications, including in our flagship products such as GMail and Google Docs & Drive. CSP has mitigated the exploitation of over 30 high-risk XSS flaws across Google in the past two years.

Nonce-based CSP is supported in Chrome, Firefox, Microsoft Edge and other Chromium-based browsers. Partial support for this variant of CSP is also available in Safari.

Isolation Capabilities


Many kinds of web flaws are exploited by an attacker’s site forcing an unwanted interaction with another web application. Preventing these issues requires browsers to offer new mechanisms to allow applications to restrict such behaviors. Fetch Metadata Request Headers enable building server-side restrictions when processing incoming HTTP requests; the Cross-Origin Opener Policy is a client-side mechanism which protects the application’s windows from unwanted DOM interactions.

Fetch Metadata Request Headers
Main article: web.dev/fetch-metadata by Lukas Weichselbaum

A common cause of web security problems is that applications don’t receive information about the source of a given HTTP request, and thus aren’t able to distinguish benign self-initiated web traffic from unwanted requests sent by other websites. This leads to vulnerabilities such as cross-site request forgery (CSRF) and web-based information leaks (XS-leaks).

Fetch Metadata headers, which the browser attaches to outgoing HTTP requests, solve this problem by providing the application with trustworthy information about the provenance of requests sent to the server: the source of the request, its type (for example, whether it’s a navigation or resource request), and other security-relevant metadata.

By checking the values of these new HTTP headers (Sec-Fetch-Site, Sec-Fetch-Mode and Sec-Fetch-Dest), applications can build flexible server-side logic to reject untrusted requests, similar to the following:
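As a rough sketch of what such server-side logic could look like, here is a Python (Flask) example; the header names are standard, while the specific allow/deny decisions are assumptions to be adapted to your own application:

from flask import Flask, abort, request

app = Flask(__name__)

@app.before_request
def enforce_resource_isolation():
    site = request.headers.get("Sec-Fetch-Site")
    mode = request.headers.get("Sec-Fetch-Mode")
    dest = request.headers.get("Sec-Fetch-Dest")

    # Older browsers don't send Fetch Metadata headers; let those requests through.
    if site is None:
        return None
    # Requests from our own origin or site, or browser-initiated ones (address bar,
    # bookmarks), are considered trustworthy.
    if site in ("same-origin", "same-site", "none"):
        return None
    # Allow simple cross-site top-level navigations, but reject cross-site requests
    # that try to load our pages as objects, embeds or other subresources.
    if mode == "navigate" and request.method == "GET" and dest not in ("object", "embed"):
        return None
    abort(403)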

We provided a detailed explanation of this logic and adoption considerations at web.dev/fetch-metadata. Importantly, Fetch Metadata can both complement and facilitate the adoption of Cross-Origin Resource Policy which offers client-side protection against unexpected subresource loads; this header is described in detail at resourcepolicy.fyi.

At Google, we’ve enabled restrictions using Fetch Metadata headers in several major products such as Google Photos, and are following up with a large-scale rollout across our application ecosystem.

Fetch Metadata headers are currently sent by Chrome and Chromium-based browsers and are available in development versions of Firefox.

Cross-Origin Opener Policy
Main article: web.dev/coop-coep by Eiji Kitamura

By default, the web permits some interactions with browser windows belonging to another application: any site can open a pop-up to your webmail client and send it messages via the postMessage API, navigate it to another URL, or obtain information about its frames. All of these capabilities can lead to information leak vulnerabilities.

Cross-Origin Opener Policy (COOP) allows you to lock down your application to prevent such interactions. To enable COOP in your application, set the following HTTP response header:
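In its strictest form, the header looks like this:

Cross-Origin-Opener-Policy: same-origin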

If your application opens other sites as pop-ups, you may need to set the header value to same-origin-allow-popups instead; see this document for details.

We are currently testing Cross-Origin Opener Policy in several Google applications, and we’re looking forward to enabling it broadly in the coming months.

COOP is available starting in Chrome 83 and in Firefox 79.

The Future


Creating a strong and vibrant web requires developers to be able to guarantee the safety of their users’ data. Adding security mechanisms to the web platform – building them directly into browsers – is an important step forward for the ecosystem: browsers can help developers understand and control aspects of their sites which affect their security posture. As users update to recent versions of their favorite browsers, they will gain protections from many of the security flaws that have affected web applications in the past.

While the security features described in this post are not a panacea, they offer fundamental building blocks that help developers build secure web applications. We’re excited about the continued deployment of these mechanisms across Google, and we’re looking forward to collaborating with browser makers and the web standards community to improve them in the future.

For more information about web security mechanisms and the bugs they prevent, see the Securing Web Apps with Modern Platform Features Google I/O talk (video).



With the recent launch of Chrome 83, and the upcoming release of Mozilla Firefox 79, web developers are gaining powerful new security mechanisms to protect their applications from common web vulnerabilities. In this post we share how our Information Security Engineering team is deploying Trusted Types, Content Security Policy, Fetch Metadata Request Headers and the Cross-Origin Opener Policy across Google to help guide and inspire other developers to similarly adopt these features to protect their applications.

History

Since the advent of modern web applications, such as email clients or document editors accessible in your browser, developers have been dealing with common web vulnerabilities which may allow user data to fall prey to attackers. While the web platform provides robust isolation for the underlying operating system, the isolation between web applications themselves is a different story. Issues such as XSS, CSRF and cross-site leaks have become unfortunate facets of web development, affecting almost every website at some point in time.

These vulnerabilities are unintended consequences of some of the web’s most wonderful characteristics: composability, openness, and ease of development. Simply put, the original vision of the web as a mesh of interconnected documents did not anticipate the creation of a vibrant ecosystem of web applications handling private data for billions of people across the globe. Consequently, the security capabilities of the web platform meant to help developers safeguard their users’ data have evolved slowly and provided only partial protections from common flaws.

Web developers have traditionally compensated for the platform’s shortcomings by building additional security engineering tools and processes to protect their applications from common flaws; such infrastructure has often proven costly to develop and maintain. As the web continues to change to offer developers more impressive capabilities, and web applications become more critical to our lives, we find ourselves in increasing need of more powerful, all-encompassing security mechanisms built directly into the web platform.

Over the past two years, browser makers and security engineers from Google and other companies have collaborated on the design and implementation of several major security features to defend against common web flaws. These mechanisms, which we focus on in this post, protect against injections and offer isolation capabilities, addressing two major, long-standing sources of insecurity on the web.

Injection Vulnerabilities

In the design of systems, mixing code and data is one of the canonical security anti-patterns, causing software vulnerabilities as far back as the 1980s. It is the root cause of vulnerabilities such as SQL injection and command injection, allowing the compromise of databases and application servers.

On the web, application code has historically been intertwined with page data. HTML markup such as <script> elements or event handler attributes (onclick or onload) allows JavaScript execution; even the familiar URL can carry code and result in script execution when navigating to a javascript: link. While sometimes convenient, the upshot of this design is that – unless the application takes care to protect itself – untrusted data used to compose an HTML page can easily inject unwanted scripts and take control of the application in the user’s browser.

Addressing this problem in a principled manner requires allowing the application to separate its data from code; this can be done by enabling two new security features: Trusted Types and Content Security Policy based on script nonces.

Trusted Types
Main article: web.dev/trusted-types by Krzysztof Kotowicz

JavaScript functions used by developers to build web applications often rely on parsing arbitrary structure out of strings. A string which seems to contain data can be turned directly into code when passed to a common API, such as innerHTML. This is the root cause of most DOM-based XSS vulnerabilities.
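For illustration (the element ID and query parameter below are hypothetical), a value read from the URL and assigned to innerHTML is parsed as markup, so attacker-controlled input can run script:

    // DOM-based XSS sketch: "q" looks like plain data, but innerHTML parses it
    // as HTML, so a value such as <img src=x onerror=alert(1)> executes code.
    const q = new URLSearchParams(location.search).get('q') || '';
    document.getElementById('results').innerHTML = 'You searched for: ' + q;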

Trusted Types make JavaScript code safe-by-default by restricting risky operations, such as generating HTML or creating scripts, to require a special object – a Trusted Type. The browser will ensure that any use of dangerous DOM functions is allowed only if the right object is provided to the function. As long as an application produces these objects safely in a central Trusted Types policy, it will be free of DOM-based XSS bugs.
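As a minimal sketch (the policy name, element ID and sanitization logic are illustrative, and the trustedTypes API requires a supporting browser or the polyfill), an application routes all HTML creation through a single policy and lets the browser reject everything else:

    // With enforcement enabled, sinks such as innerHTML accept only
    // TrustedHTML objects produced by a registered policy.
    const escapePolicy = trustedTypes.createPolicy('app-escape', {
      createHTML: (input) => input.replace(/</g, '&lt;'),  // illustrative sanitizer
    });

    const results = document.getElementById('results');
    const userInput = new URLSearchParams(location.search).get('q') || '';

    results.innerHTML = escapePolicy.createHTML(userInput);  // accepted
    results.innerHTML = userInput;                           // rejected by the browser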

You can enable Trusted Types by setting the following response header:
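    Content-Security-Policy: require-trusted-types-for 'script'

During rollout, the same directive can be sent via the Content-Security-Policy-Report-Only header to surface violations without breaking the application.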

We have recently launched Trusted Types for all users of My Google Activity and are working with dozens of product teams across Google as well as JavaScript framework owners to make their code support this important safety mechanism.

Trusted Types are supported in Chrome 83 and other Chromium-based browsers, and a polyfill is available for other user agents.

Content Security Policy based on script nonces
Main article: Reshaping web defenses with strict Content Security Policy

Content Security Policy (CSP) allows developers to require every <script> on the page to contain a secret value unknown to attackers. The script nonce attribute, set to an unpredictable number for every page load, acts as a guarantee that a given script is under the control of the application: even if part of the page is injected by an attacker, the browser will refuse to execute any injected script which doesn’t identify itself with the correct nonce. This mitigates the impact of any server-side injection bugs, such as reflected XSS and stored XSS.

CSP can be enabled by setting the following HTTP response header:
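    Content-Security-Policy: script-src 'nonce-{RANDOM}' 'strict-dynamic'; object-src 'none'; base-uri 'none'

Here {RANDOM} is a placeholder for an unpredictable value generated anew for every response; the exact policy will vary per application, and this is a representative strict policy in the spirit of the main article. The 'strict-dynamic' keyword lets nonced scripts load their own dependencies, while object-src 'none' and base-uri 'none' close common bypass routes.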

This header requires all scripts in your HTML templating system to include a nonce attribute with a value matching the one in the response header:
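    <script nonce="{RANDOM}" src="https://example.com/app.js"></script>

(The script URL is illustrative; the nonce attribute must carry the same {RANDOM} value that appears in the Content-Security-Policy header for that response.)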

Our CSP Evaluator tool can help you configure a strong policy. To help deploy a production-quality CSP in your application, check out this presentation and the documentation on csp.withgoogle.com.

Since the initial launch of CSP at Google, we have deployed strong policies on 75% of outgoing traffic from our applications, including in our flagship products such as Gmail and Google Docs & Drive. CSP has mitigated the exploitation of over 30 high-risk XSS flaws across Google in the past two years.

Nonce-based CSP is supported in Chrome, Firefox, Microsoft Edge and other Chromium-based browsers. Partial support for this variant of CSP is also available in Safari.

Isolation Capabilities

Many kinds of web flaws are exploited by an attacker’s site forcing an unwanted interaction with another web application. Preventing these issues requires browsers to offer new mechanisms to allow applications to restrict such behaviors. Fetch Metadata Request Headers enable building server-side restrictions when processing incoming HTTP requests; the Cross-Origin Opener Policy is a client-side mechanism which protects the application’s windows from unwanted DOM interactions.

Fetch Metadata Request Headers
Main article: web.dev/fetch-metadata by Lukas Weichselbaum

A common cause of web security problems is that applications don’t receive information about the source of a given HTTP request, and thus aren’t able to distinguish benign self-initiated web traffic from unwanted requests sent by other websites. This leads to vulnerabilities such as cross-site request forgery (CSRF) and web-based information leaks (XS-leaks).

Fetch Metadata headers, which the browser attaches to outgoing HTTP requests, solve this problem by providing the application with trustworthy information about the provenance of requests sent to the server: the source of the request, its type (for example, whether it’s a navigation or resource request), and other security-relevant metadata.

By checking the values of these new HTTP headers (Sec-Fetch-Site, Sec-Fetch-Mode and Sec-Fetch-Dest), applications can build flexible server-side logic to reject untrusted requests, similar to the following:
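A minimal sketch in JavaScript, assuming a server framework that exposes request headers as a lower-cased map (the function and variable names are illustrative), along the lines of the resource isolation policy described at web.dev/fetch-metadata:

    // Decide whether to serve a request, based on Fetch Metadata headers.
    function isRequestAllowed(headers, method) {
      const site = headers['sec-fetch-site'];

      // Allow requests from browsers that don't send Fetch Metadata yet.
      if (!site) return true;

      // Allow same-origin, same-site and browser-initiated requests
      // (e.g. typed into the address bar or opened from a bookmark).
      if (site === 'same-origin' || site === 'same-site' || site === 'none') return true;

      // Allow simple top-level navigations, except <object> and <embed> loads.
      if (headers['sec-fetch-mode'] === 'navigate' && method === 'GET' &&
          headers['sec-fetch-dest'] !== 'object' && headers['sec-fetch-dest'] !== 'embed') {
        return true;
      }

      // Everything else is an unexpected cross-site request: reject it
      // (e.g. respond with 403) to protect against CSRF and XS-leaks.
      return false;
    }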

We provided a detailed explanation of this logic and adoption considerations at web.dev/fetch-metadata. Importantly, Fetch Metadata can both complement and facilitate the adoption of Cross-Origin Resource Policy which offers client-side protection against unexpected subresource loads; this header is described in detail at resourcepolicy.fyi.
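For reference, Cross-Origin Resource Policy is itself a single response header; for example, the following value asks the browser not to deliver the resource to cross-site subresource requests:

    Cross-Origin-Resource-Policy: same-site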

At Google, we’ve enabled restrictions using Fetch Metadata headers in several major products such as Google Photos, and are following up with a large-scale rollout across our application ecosystem.

Fetch Metadata headers are currently sent by Chrome and Chromium-based browsers and are available in development versions of Firefox.

Cross-Origin Opener Policy
Main article: web.dev/coop-coep by Eiji Kitamura

By default, the web permits some interactions with browser windows belonging to another application: any site can open a pop-up to your webmail client and send it messages via the postMessage API, navigate it to another URL, or obtain information about its frames. All of these capabilities can lead to information leak vulnerabilities: for example, by counting the frames in a window it has opened, or by detecting its navigations, an attacker’s page can infer the state of the victim application and leak information about the signed-in user.

Cross-Origin Opener Policy (COOP) allows you to lock down your application to prevent such interactions. To enable COOP in your application, set the following HTTP response header:
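    Cross-Origin-Opener-Policy: same-origin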

If your application opens other sites as pop-ups, you may need to set the header value to same-origin-allow-popups instead; see this document for details.

We are currently testing Cross-Origin Opener Policy in several Google applications, and we’re looking forward to enabling it broadly in the coming months.

COOP is available starting in Chrome 83 and in Firefox 79.

The Future

Creating a strong and vibrant web requires developers to be able to guarantee the safety of their users’ data. Adding security mechanisms to the web platform – building them directly into browsers – is an important step forward for the ecosystem: browsers can help developers understand and control aspects of their sites which affect their security posture. As users update to recent versions of their favorite browsers, they will gain protections from many of the security flaws that have affected web applications in the past.

While the security features described in this post are not a panacea, they offer fundamental building blocks that help developers build secure web applications. We’re excited about the continued deployment of these mechanisms across Google, and we’re looking forward to collaborating with browser makers and the web standards community to improve them in the future.

For more information about web security mechanisms and the bugs they prevent, see the Securing Web Apps with Modern Platform Features Google I/O talk (video).