Oracle admits there's a bug in its new in-memory database option
July 31, 2014
It sure took a while, but Oracle has finally admitted that there is a bug in its new in-memory database option.
The bug causes the database to report the In-Memory option as being in use when in fact it isn't, although the actual risk it poses remains unclear at this time. Nevertheless, system admins and DB managers are treating it cautiously.
Database professional Kevin Closson was the first to notice that executing a simple set of PL/SQL commands can seemingly activate Oracle 12c's In-Memory feature, even when that shouldn't be possible.
Yesterday, Oracle product manager Maria Colgan acknowledged that Closson's results could be reproduced, that he had in fact located a bug, and that it will be patched soon. But what does that really mean for Oracle customers?
Here's a recap of the problem, as briefly as we can put it. With the Oracle 12.1.0.2 patch release installed, the database's INMEMORY_QUERY configuration parameter is enabled by default, just as new features have been enabled by default since Oracle 11g.
But the INMEMORY_SIZE parameter is set to zero, meaning that no space has been allocated to store the new in-memory tables.
According to Oracle, when it's configured like that, the In-Memory Option is considered disabled, simply because it's not actually usable.
The in-memory tables can't actually be created because there's nowhere to put them in the first place.
Additionally, changing the INMEMORY_SIZE parameter to anything other than zero requires restarting the database instance, making it extremely unlikely that a database administrator could enable the feature accidentally.
But that's not what Closson found. In his various tests, all he needed to do was to create a new table specifying the INMEMORY property and his database reported that the In-Memory Option had been enabled and was in use, even though INMEMORY_SIZE was still set to the default of zero.
Furthermore, Closson observed that merely setting INMEMORY_SIZE to some amount of memory isn't enough to make the In-Memory feature report as being in use. You have to assign the INMEMORY property to a table – and doing so apparently marks the feature as "in use" regardless of the value of INMEMORY_SIZE.
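Based on the article's description, the reproduction boils down to a couple of statements along these lines (a hedged sketch: the table and column names are illustrative, and the exact output of the feature-usage view depends on the database version):

```sql
-- INMEMORY_SIZE is still at its default of zero: no column store allocated.
SHOW PARAMETER inmemory_size;

-- Create a table flagged with the INMEMORY attribute (illustrative names).
CREATE TABLE im_test (id NUMBER) INMEMORY;

-- Oracle tracks option usage in DBA_FEATURE_USAGE_STATISTICS. After the
-- CREATE TABLE above, the In-Memory feature reportedly shows as in use,
-- even though no in-memory segment could ever have been populated.
SELECT name, currently_used
  FROM dba_feature_usage_statistics
 WHERE name LIKE 'In-Memory%';
```

If the feature-usage view flags the option after nothing more than a DDL statement, that is exactly the accidental "usage" Closson warned about.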
That's not good, Closson figures, because the In-Memory option isn't free. Far from it, in fact: at a reported $23,000 per Sparc CPU, "enabling" it by accident could be a serious mistake.
Closson's series of blog posts on the subject sparked a real firestorm of online comment over the next few days, with Oracle mostly denying his allegations.
But when another Oracle user managed to reproduce Closson's results, this time with an ALTER TABLE command, Colgan finally admitted that the observed behavior from the Oracle database wasn't normal.
"Recording that the In-Memory option is in use in this case is a bug and we will fix it in the first patchset update coming in October 2014," Colgan wrote in a comment yesterday.
Note, however, that Colgan didn't say that there was a bug that allowed the In-Memory feature to be activated by mistake.
Instead, she said only that, when a DBA performs Closson's series of steps, the database recording the In-Memory feature as in use is considered a bug.
That means the In-Memory Option isn't really in use, even when you repeat Closson's procedure. And whether Oracle would really try to charge a customer for the In-Memory Option based on that erroneous reporting is anybody's guess.
Oracle did not respond to our request for clarification on the matter. We will keep you updated.
In other internet security news
The popular Multipath TCP (MPTCP) standard does improve internet speed and throughput, but it could also break security tools including intrusion detection and data leak prevention systems, says security researcher Catherine Pearce.
MPTCP is an extension to TCP (Transmission Control Protocol), part of the core communications backbone of the internet, that allows a connection to use multiple paths and network providers to improve speed, redundancy and resource utilization.
MPTCP is already used by Apple's Siri for iOS, but it isn't expected to be more widely deployed on mobile devices anytime soon, since it breaks current network designs and could lead to expensive data bills.
So far, no traffic inspection system has been found that can correlate and reassemble data in the way needed to analyse MPTCP.
With MPTCP, it becomes much harder for a single network provider to undetectably alter or sniff your traffic unless they collaborate with the other providers you are using for that connection.
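To see why single-path inspection fails, consider this toy sketch. It is plain Python, not a real MPTCP implementation: a payload is striped byte-by-byte across two "subflows", so a signature matcher watching only one path never sees the complete pattern, while a system that correlates both paths can reconstruct it.

```python
# Toy illustration of MPTCP-style striping (NOT real MPTCP): a sender splits
# one byte stream across multiple network paths, so a monitor sitting on a
# single path never observes the complete payload.

def stripe(payload: bytes, n_paths: int = 2) -> list[bytes]:
    """Round-robin the payload's bytes across n_paths subflows."""
    return [payload[i::n_paths] for i in range(n_paths)]

def reassemble(subflows: list[bytes]) -> bytes:
    """Interleave the subflows back into the original byte stream."""
    out = bytearray()
    for i in range(max(len(s) for s in subflows)):
        for s in subflows:
            if i < len(s):
                out.append(s[i])
    return bytes(out)

payload = b"GET /secret-exfil HTTP/1.1"
paths = stripe(payload)

# A signature matcher on any single path misses the pattern...
assert all(b"secret" not in p for p in paths)
# ...but correlating and reassembling both paths reveals it.
assert b"secret" in reassemble(paths)
```

Real MPTCP splits traffic at the segment level with its own sequence space, but the inspection problem is the same: an intrusion detection or data leak prevention box sees only fragments unless it can observe and correlate every path.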
There is also some ambiguity between incoming and outgoing connections, meaning firewalls confused by MPTCP may treat incoming connections as outbound.
In September of 2013, Cisco noted that MPTCP could create some disruptions of service to protocol inspection products including firewalls and intrusion prevention systems.
And a month earlier, Michigan State University and IBM researchers found inherent side-channel design flaws in MPTCP that could allow internet service providers to infer sensitive data about competitors, including traffic congestion, end-to-end TCP throughput, packet loss, and some network latency.
"Specifically, an attacker can infer the throughput of unmonitored paths with up to 90 percent accuracy and within measurements interval of less than two minutes by exploiting coupling among MPTCP subflows," four researchers wrote in the paper Cross-Path Inference Attacks on Multipath TCP.
"These attacks are fundamental to MPTCP and may constitute a barrier to its wide-scale adoption," the paper added.
In other internet security news
Google is trying as best it can to migrate its Chrome browser away from the buggy OpenSSL cryptography library toward BoringSSL, its homegrown fork, but swapping out the crypto code is proving a lot more complex than it would seem at first glance.
And Google isn't alone in trying to patch things up with OpenSSL. Google engineer David Benjamin posted a revision to the Chromium source code version control system this week with a seemingly simple aim: "Switch to BoringSSL."
But in his revision notes, he observed that this wasn't the first time this particular change has been tried.
The BoringSSL code was first added to the Chromium tree on July 16, but it was quickly pulled out again after developers found it broke the build of the WebView component for Android.
The next attempt was on July 17, but that one caused issues with WebRTC on Android, so the Chromium code was reverted to its former state again.
The third time wasn't a charm, either, to say the least. This time, the BoringSSL code was added back in on July 18 but was reverted after just a few hours, again because of build issues.
So yes, OpenSSL is buggy, and it appears to be getting worse. Google would like to replace OpenSSL with its own BoringSSL in its Chrome browser for some obvious reasons, not least of which Heartbleed and the other security vulnerabilities that have recently cropped up in OpenSSL's infamously hard-to-maintain code base.
In response to the Heartbleed debacle, a group of OpenBSD developers created an OpenSSL fork called LibreSSL that's designed to be a drop-in replacement for the OpenSSL library.
However, that project is still in its early phases, and Google says it has long relied on its own, custom patches to the code, many of which are "a little too experimental" to be included in LibreSSL.
So that's why Google developed BoringSSL, a new fork announced by one of the company's developers, Adam Langley.
BoringSSL incorporates the security patches that the Chocolate Factory has been adding to the OpenSSL code for its own use, while drawing from the latest clean-up work the LibreSSL crew has been doing to make the source code tree more manageable.
But unfortunately, it seems that even Google's own custom code base can be difficult to incorporate into other projects – which just goes to illustrate how hard it can be for developers to work with the original OpenSSL code.
In other internet security news
Amazon Web Services' share of cloud-hosted malware attacks has more than doubled in the last six months, taking the IT industry by surprise. The general perception is that AWS isn't ready for prime time.
That's according to NTT subsidiary Solutionary, which demonstrated its findings in its Q2 2014 Security Engineering Research Team (SERT) report, published July 15.
Internet security researchers said that, out of the top ten ISPs and hosting providers surveyed, the proportion of malware-hosting websites served from Amazon infrastructure more than doubled from 16 percent in Q4 2013 to 41 percent in Q2 2014.
During the same period, hacker attacks on some European hosting companies grew from 10 to 13 percent; from 9 to 12 percent on Akamai; and from 6 to 9 percent on Google.
And this isn't the first time that Amazon's cloud has been used by miscreants to host large amounts of malware: Solutionary made the same claims in its Q4 2013 SERT report, and Kaspersky researchers discovered in 2011 that Amazon Web Services was playing host to the notorious SpyEye malware.
Part of the reason must be Amazon's scale and popularity as a cloud service, along with its Bezos-backed low prices. This means any wannabe hacker can buy server images from crooks and deploy them on AWS to build a network of malware-spreading websites.
"Overall, cloud instances of web services are extremely simple to provision on Amazon, GoDaddy, and all the majors," noted Solutionary security manager Chad Kahl.
"When you start going into the underground forums, they don't just sell a Zeus malware package, they'll sell you an entire command-and-control infrastructure and a phishing website to set up, and a drive-by-download website to set up.
"You go to them and it's CaaS (crime-as-a-service)" he explained. "It's truly script kiddies on a major scale."
Another reason why large providers may be having trouble stomping out amateur hackers on their service is that the criminals are moving rapidly between different clouds, Kahl said. "A lot of the malware operators bounce in between hosting providers, internet service providers and proxy hosts in different countries, and that's only part of the issue."
Meanwhile, digital fingerprints of the viruses, Trojans and other malware hosted in public clouds are known and circulated in the infosec world, and can be used to identify malicious binaries, Kahl added.
"The question is, can these providers put the infrastructure in to scan everything?" he asked. Amazon and Google may be scrimping when it comes to investing in the tools needed to efficiently check the signatures of hosted files against databases of known evil binaries, he said.
"When we're talking about someone as big as Amazon or Google it would be a significant investment both in architecture and in time to go through and monitor everything as it's being put up, regular scans – to detect everything and take down these groups," the researcher said.
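The kind of fingerprint scanning Kahl describes can be as simple as hashing each hosted object and checking the digest against a feed of known-bad hashes. A minimal sketch (the sample content and digest feed here are made up for illustration; real feeds come from threat-intelligence providers):

```python
import hashlib

# Hypothetical feed of SHA-256 digests of known malware samples
# (illustrative values only, derived here for the sake of the example).
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"EICAR-like-test-sample").hexdigest(),
}

def is_known_malware(blob: bytes) -> bool:
    """Return True if the object's SHA-256 digest matches a known-bad entry."""
    return hashlib.sha256(blob).hexdigest() in KNOWN_BAD_SHA256

assert is_known_malware(b"EICAR-like-test-sample")        # flagged
assert not is_known_malware(b"harmless customer upload")  # passes
```

The check itself is cheap; the expense Kahl points to is running it continuously across everything customers upload, and exact-hash matching is trivially defeated by repacking the binary, which is part of why providers weigh the investment carefully.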
However, some companies are making good moves, such as Microsoft which has a number of malware-splatting initiatives.
Similarly, Google's new Project Zero team is tasked with hunting down security vulnerabilities in software before they are discovered and capitalized on by crooks.
As for Amazon, a spokesperson told us: "AWS employs a number of mitigation techniques, both manual and automated, to prevent such misuse of these services.
"We have also added automatic systems in place that detect and block some attacks before they leave our infrastructure. Our terms of usage are clear and when we find misuse we take action quickly and shut it down. Companies that do see malicious activity originating from AWS should contact us immediately," the spokesperson added.
In other internet security news
The U.S. National Institute of Standards and Technology (NIST) has been advised to hire more cryptography experts so that it can confidently stand up to the NSA on cryptographic standards.
A report from NIST's Visiting Committee on Advanced Technology (VCAT), which scrutinizes and advises the institute, criticized NIST for being too dependent on the NSA's cryptography expertise (or lack thereof).
VCAT cited NIST's adoption and backing of the Dual EC DRBG algorithm, an NSA-sanctioned random number generator that was later found to be flawed.
Random numbers are crucial in cryptography: unpredictable values are what thwart an eavesdropper attempting to decrypt intercepted enciphered data.
The report was launched in the wake of allegations from whistleblower Edward Snowden that the NSA deliberately weakened Dual EC DRBG and other algorithms for surveillance purposes.