The Multipath TCP standard is useful, but it can bring security issues
July 31, 2014
The popular Multipath TCP (MPTCP) standard does improve connection speed and throughput, but security researcher Catherine Pearce has now shown that it can also break security tools, including intrusion detection and data leak prevention systems.
To be sure, MPTCP is an update to the core communications backbone of the internet: it extends TCP (Transmission Control Protocol) so that a single connection can use multiple paths and network providers at once, improving speed, redundancy and resource utilization.
MPTCP is already used by Apple's Siri on iOS, but it is unlikely to be more widely deployed on mobile devices anytime soon, since it breaks some current network designs and could lead to expensive data bills.
So far, no traffic inspection system has been found that can correlate and reassemble data across paths in the way needed to analyse MPTCP traffic.
With MPTCP, it becomes much harder for a single network provider to undetectably alter or sniff your traffic unless they collaborate with the other providers you are using for that connection.
MPTCP also blurs the distinction between incoming and outgoing connections, meaning a confused firewall may classify an incoming connection as outbound.
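The inspection problem described above can be illustrated with a toy Python model. This is not real MPTCP (which is implemented in the kernel); it only simulates striping one byte stream across two subflows, and the signature string is a made-up example. The point is that a middlebox watching a single path never sees the full stream:

```python
# Toy model of the MPTCP inspection problem: a byte stream is striped
# across two subflows (paths), so a middlebox watching only one path
# never sees the full stream. Real MPTCP runs in the kernel; this only
# simulates the traffic split.

SIGNATURE = b"EVIL_PAYLOAD"  # hypothetical IDS signature

def stripe(data: bytes, npaths: int, chunk: int = 4) -> list[bytes]:
    """Split data into npaths subflows, round-robin by chunk bytes."""
    paths = [bytearray() for _ in range(npaths)]
    for i in range(0, len(data), chunk):
        paths[(i // chunk) % npaths] += data[i:i + chunk]
    return [bytes(p) for p in paths]

def reassemble(paths: list[bytes], chunk: int = 4) -> bytes:
    """Undo stripe(): merge chunks back in round-robin order."""
    out, offsets, turn = bytearray(), [0] * len(paths), 0
    while any(o < len(p) for o, p in zip(offsets, paths)):
        j = turn % len(paths)
        out += paths[j][offsets[j]:offsets[j] + chunk]
        offsets[j] += chunk
        turn += 1
    return bytes(out)

payload = b"GET /x HTTP/1.1\r\n" + SIGNATURE + b"\r\n\r\n"
subflows = stripe(payload, 2)

# Per-path inspection misses the signature; full reassembly finds it.
per_path_hits = [SIGNATURE in flow for flow in subflows]
print(per_path_hits)                        # [False, False]
print(SIGNATURE in reassemble(subflows))    # True
```

An IDS that inspects each path independently sees only clean-looking fragments; detecting the signature requires correlating and reassembling all subflows, which is exactly what current inspection systems cannot do.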
In September of 2013, Cisco noted that MPTCP could create some disruptions of service to protocol inspection products including firewalls and intrusion prevention systems.
And a month earlier, researchers at Michigan State University and IBM found inherent side-channel design flaws in MPTCP that could allow internet service providers to infer sensitive data about competitors, including traffic congestion, end-to-end TCP throughput, packet loss, and some network latency.
"Specifically, an attacker can infer the throughput of unmonitored paths with up to 90 percent accuracy and within measurements interval of less than two minutes by exploiting coupling among MPTCP subflows," four researchers wrote in the paper Cross-Path Inference Attacks on Multipath TCP.
"These attacks are fundamental to MPTCP and may constitute a barrier to its wide-scale adoption," the paper added.
In other internet security news
Google is trying as best as it possibly can to migrate its Chrome browser away from the buggy OpenSSL cryptography library toward 'BoringSSL', its homegrown fork, but swapping out the crypto code is proving a lot more complex than it would seem at first glance.
And Google isn't alone in trying to patch things up with OpenSSL. Google engineer David Benjamin posted a revision to the Chromium source code version control system this week with a seemingly simple aim: "Switch to BoringSSL."
But in his revision notes, he observed that this wasn't the first time this particular change has been tried.
The first time the BoringSSL code was added to the Chromium tree was on July 16, but it was quickly pulled out again after developers found it broke the build of the WebView component for Android.
The next attempt was on July 17, but that one caused issues with WebRTC on Android, so the Chromium code was reverted to its former state again.
The third time wasn't a charm, either, to say the least. This time, the BoringSSL code was added back in on July 18 but was reverted after just a few hours, again because of build issues.
So yes, OpenSSL is buggy, and it appears to be getting worse. Google would like to replace OpenSSL with its own BoringSSL in its Chrome browser for some obvious reasons, not least of which is Heartbleed and the other security vulnerabilities that have recently cropped up in OpenSSL's infamously hard-to-maintain code base.
In response to the Heartbleed debacle, a group of OpenBSD developers created an OpenSSL fork called LibreSSL that's designed to be a drop-in replacement for the OpenSSL library.
However, that project is still in its early phases, and Google says it has long relied on its own, custom patches to the code, many of which are "a little too experimental" to be included in LibreSSL.
So that's why Google developed BoringSSL, a new fork announced by one of the company's developers, Adam Langley.
BoringSSL incorporates the security patches that the Chocolate Factory has been adding to the OpenSSL code for its own use, while drawing from the latest clean-up work the LibreSSL crew has been doing to make the source code tree more manageable.
But unfortunately, it seems that even Google's own custom code base can be difficult to incorporate into other projects – which just goes to illustrate how hard it can be for developers to work with the original OpenSSL code.
In other internet security news
Amazon Web Services' share of cloud-hosted malware attacks has more than doubled in the last six months, taking the IT industry by surprise. The general perception is that AWS isn't ready for prime time.
That's according to NTT subsidiary Solutionary, which presented its findings in its Q2 2014 Security Engineering Research Team (SERT) report, published July 15.
Internet security researchers said that, out of the top ten ISPs and hosting providers surveyed, the proportion of malware-hosting websites served from Amazon infrastructure more than doubled from 16 percent in Q4 2013 to 41 percent in Q2 2014.
During the same period, hacker attacks on some European hosting companies grew from 10 to 13 percent; from 9 to 12 percent on Akamai; and from 6 to 9 percent on Google.
And this isn't the first time that Amazon's cloud has been used by miscreants to host large amounts of malware: Solutionary made the same claims in its Q4 2013 SERT report, and Kaspersky researchers discovered in 2011 that Amazon Web Services was playing host to the notorious SpyEye malware.
Part of the reason must be Amazon's scale and popularity as a cloud service, along with its Bezos-backed low prices. This means any wannabe hacker can buy server images from crooks and deploy them on AWS to build a network of malware-spreading websites.
"Overall, cloud instances of web services are extremely simple to provision on Amazon, GoDaddy, and all the majors," noted Solutionary security manager Chad Kahl.
"When you start going into the underground forums, they don't just sell a Zeus malware package, they'll sell you an entire command-and-control infrastructure and a phishing website to set up, and a drive-by-download website to set up.
"You go to them and it's CaaS (crime-as-a-service)" he explained. "It's truly script kiddies on a major scale."
Another reason why large providers may be having trouble stomping out amateur hackers on their service is that the criminals are moving rapidly between different clouds, Kahl said. "A lot of the malware operators bounce in between hosting providers, internet service providers and proxy hosts in different countries, and that's only part of the issue."
On the other hand, the digital fingerprints of the viruses, Trojans and other malware hosted in public clouds are known and circulated in the infosec world, and can be used to identify malicious binaries, Kahl added.
"The question is, can these providers put the infrastructure in to scan everything?" he asked. Amazon and Google may be scrimping when it comes to investing in the tools needed to efficiently check the signatures of hosted files against databases of known evil binaries, he said.
"When we're talking about someone as big as Amazon or Google it would be a significant investment both in architecture and in time to go through and monitor everything as it's being put up, regular scans – to detect everything and take down these groups," the researcher said.
However, some companies are making good moves, such as Microsoft, which has a number of malware-splatting initiatives.
Similarly, Google's new Project Zero team is tasked with hunting down security vulnerabilities in software before they are discovered and capitalized on by crooks.
As for Amazon, a spokesperson told us: "AWS employs a number of mitigation techniques, both manual and automated, to prevent such misuse of these services.
"We have also added automatic systems in place that detect and block some attacks before they leave our infrastructure. Our terms of usage are clear and when we find misuse we take action quickly and shut it down. Companies that do see malicious activity originating from AWS should contact us immediately," he added.
In other internet security news
The U.S. National Institute of Standards and Technology (NIST) has been asked to hire more cryptography experts so that it can confidently push back against the NSA's advice when warranted.
A report from NIST's Visiting Committee on Advanced Technology (VCAT), which scrutinizes and advises the institute, has criticized NIST for being too dependent on the NSA's cryptography expertise (or lack thereof).
VCAT cited NIST's adoption and backing of Dual EC DRBG, an NSA-sanctioned random number generator that was later found to be flawed.
To be sure, random numbers are crucial in cryptography: keys and nonces are built from them, and a predictable generator lets an eavesdropper reconstruct those secrets and decrypt intercepted data.
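The danger of a weak generator can be shown with a small Python sketch. This uses Python's seedable Mersenne Twister as a stand-in for a predictable RNG (Dual EC DRBG's flaw was different in mechanism, a suspected backdoored constant, but the consequence is analogous): anyone who knows the generator's internal state can regenerate the "random" key.

```python
# A deterministic, seedable RNG as a stand-in for a predictable
# generator: anyone who learns (or can guess) the seed regenerates
# the exact same "random" key. `secrets` draws from the OS CSPRNG
# and has no recoverable seed.
import random
import secrets

def keygen_weak(seed: int) -> int:
    """Derive a 128-bit key from a deterministic, seed-driven RNG."""
    rng = random.Random(seed)      # state fully determined by seed
    return rng.getrandbits(128)

victim_key = keygen_weak(seed=1234)
attacker_key = keygen_weak(seed=1234)   # attacker knows the seed
print(victim_key == attacker_key)       # True -- total key recovery

strong_key = secrets.randbits(128)      # unpredictable to outsiders
```

In the Dual EC DRBG case, the allegation was that whoever chose the generator's elliptic-curve constants could predict its output stream, which amounts to the same attacker advantage as knowing the seed above.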
The report was launched in the wake of allegations from whistleblower Edward Snowden that the NSA deliberately weakened Dual EC DRBG and other algorithms for surveillance purposes.
The report also reveals that NIST – which is part of the U.S. Department of Commerce – relied heavily on input from the NSA in maintaining the security standard, despite having been warned about those weaknesses several years earlier.
VCAT members believe that to guard itself from such scandals in the future, NIST will need to become more transparent and better engage with the security community as a whole.
According to the VCAT report, a lack of qualified personnel was a key shortfall for NIST. Without enough experts on hand, the institute was unable to spot and address the security vulnerabilities in Dual EC DRBG and the SP 800-90 standard.
To remedy the problem, the steering committee is recommending that NIST hire additional staff versed in cryptography and reach out to academic institutions and security vendors when building and analyzing encryption standards.
Additionally, the committee determined that NIST needs greater independence from the NSA. "NIST may seek the advice of the NSA on cryptographic matters but it must be in a position to assess it and reject it when warranted," the report suggests.
"This may be accomplished by NIST itself or by engaging the cryptographic community during the development and review of any particular standard," the report added.
And the report goes on to suggest other transparency measures as well, including the utilization of open competitions to build new standards and maintaining better documentation on how standards are developed.
NIST added that it would also continue to study the advisory board's findings ahead of releasing a new cryptographic standards report and some new guidelines regarding the development process by the end of 2014.
In other internet security news
Google is warning its users that bogus SSL certificates have been issued by India's National Information Centre (NIC).
Those certificates can be used by servers to masquerade as legitimate Google websites, letting attackers eavesdrop on or tamper with users' encrypted communications.
The internet connection would appear to be secure when in fact it's not. According to Google's security team, it noticed unauthorized certificates for several Google domains last Wednesday and traced them back to India's NIC.
What's troubling about this is that the issuer holds several intermediate CA certificates chained to the Indian Controller of Certifying Authorities (India CCA), whose root certificates are in turn trusted by some Western software vendors.
"The India CCA certificates are included in the Microsoft Root Store and thus are trusted by the vast majority of programs running on Windows, including Internet Explorer and Chrome. Firefox is not affected because it uses its own root store that doesn't include these certificates," said Google security engineer Adam Langley.
"However, we are not aware of any other root stores that include the India CCA certificates, thus Chrome on other operating systems, Chrome OS, Android, iOS and OS X are not affected. Additionally, Chrome on Windows would not have accepted the certificates for Google sites because of public-key pinning, although mis-issued certificates for other sites may exist," Langley added.
Google engineers alerted both the Indian agencies and Microsoft about the security issue, and the bogus certificates were revoked a day later. In the meantime, Google has revoked all the certificates using Chrome's CRLSet mechanism and says its products are in the clear.
Source: Michigan State University.