Edward Snowden used a common web crawler app to scrape NSA's systems
February 9, 2014
Unnamed officials have told The New York Times that Edward Snowden used a common web crawler program to scrape the NSA's systems and steal secret and classified documents.
It's a revelation that raises even more questions about the efficiency of the agency's internal security measures. The software in question was not named by the officials, but it's apparently similar to Googlebot, the program the search giant created several years ago to index new Web pages.
Snowden also used a program called "wget" which Chelsea Manning used to download the batches of secret files that were published by WikiLeaks several years back.
The crawler can be programmed with various search phrases. It then travels automatically from web page to web page, following links, and going ever deeper in search of relevant and secret documents.
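The Times doesn't name the tool, but the behavior it describes — seeded with search phrases, then following links from page to page — can be sketched in a few lines of Python. This is an illustrative toy only, not the software Snowden used: the `SITE` dictionary stands in for real network fetches, and the phrases are hypothetical.

```python
import re
from collections import deque
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects href targets from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start, fetch, phrases):
    """Breadth-first crawl from `start`, following every link and
    recording pages whose text matches any of the search phrases."""
    pattern = re.compile("|".join(re.escape(p) for p in phrases), re.I)
    seen, hits = {start}, []
    queue = deque([start])
    while queue:
        url = queue.popleft()
        html = fetch(url)
        if html is None:
            continue
        if pattern.search(html):
            hits.append(url)
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return hits

# Toy "site" standing in for real HTTP fetches.
SITE = {
    "/index": '<a href="/a">a</a><a href="/b">b</a>',
    "/a": '<p>routine memo</p><a href="/c">c</a>',
    "/b": '<p>report marked classified</p>',
    "/c": '<p>budget, also classified</p>',
}

print(crawl("/index", SITE.get, ["classified", "secret"]))  # → ['/b', '/c']
```

The point the officials were making is visible even in this toy: nothing in the crawl loop itself limits how many matching documents it collects, which is why monitoring for bulk downloads matters.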
"One of the many questions I have is, while people can access individual messages related to their specific job, shouldn't this system have caught someone downloading 500,000 messages and asked him, 'What are you doing?'" said Senator-Elect Mark Kirk (R-Ill.).
Similar questions are currently being asked of the NSA's overall systems as well. And it's a weighty issue given that, as the Times notes, the NSA is also charged with maintaining U.S. cybersecurity against foreign adversaries that are supposedly using far more sophisticated methods than Snowden apparently did.
A presidential directive issued in response to the 2010 Manning/WikiLeaks incident required U.S. government facilities to install updated anti-leak software.
But the facility in Hawaii where Snowden worked as an NSA contractor reportedly hadn't updated the "insider threat" program simply because the outpost's network didn't yet have enough power to run it properly.
Today's Times story says it's not known if Snowden got lucky in landing at the Hawaii facility, or if he sought it out. NSA officials told the Times that Snowden would've been caught if he'd been working at the agency's headquarters in Fort Meade, Md.
Agency culture was a factor as well, the Times reports. "Once you are inside, the assumption is that you are supposed to be there, like in most organizations," Richard Bejtlich, chief security strategist for Silicon Valley computer security firm FireEye, told the paper.
"But that doesn't explain why they weren't more vigilant about excessive activity in the system," he added. The Times said, "The NSA declined to comment on its investigation or the security changes it has made since the Snowden disclosures. Other intelligence officials familiar with the findings of the investigations under way -- there are at least four -- were granted anonymity to discuss the investigations."
And Snowden told the paper in a statement: "It's ironic that officials are giving classified information to journalists in an effort to discredit me for giving classified information to journalists. The difference is that I did so to inform the public about the government's actions, and they're doing so to misinform the public about mine."
The Times reported earlier that the CIA suspected Snowden of trying to get his hands on classified files when he worked for the agency in 2009, but Snowden says that report was inaccurate.
In other internet security news
Ask any good network manager or IT system administrator who fully understands the risks to their business, and they will tell you that one of the surest ways to open the proverbial 'can of worms' is to first publish an innocuous app that works well, and then push out an update that significantly changes the app's initial permissions, says Ryan Smith.
And Smith should know: he's the lead internet security and threat engineer at Mojave Networks, a company that provides advanced mobile security for customers around the globe, ranging from medium-size enterprises to Fortune 500 companies and the U.S. Army.
To protect against this sort of threat, system admins need to look more closely at the permissions set on mobile app updates. "It makes sense, if you are not comfortable with the permissions, to simply uninstall the app," added Smith.
Laptops, smartphones and tablets have become so ubiquitous and so convenient that users often download apps and approve permissions automatically, without giving them any thought.
Such routine behavior exposes personal and other sensitive data that users store on their prized devices to increasing risk.
Such a blind trust is just what app makers count on. Android users, especially, are complacent about synchronizing apps on multiple devices.
Even worse is the practice of linking bank and social networking accounts with cloud storage so that a conduit is always open that connects our data to phones, tablets and computers.
Personal and sensitive corporate information can leak from mobile phones and tablets through the apps we install. Many of the apps we use mine our contact lists, locations and personal information, which the app makers sell or use for marketing campaigns.
"For some mobile app developers, gathering and selling user information is half of the business model," says Rick Sizemore, Director of the cloud computing practice at Alsbridge, a benchmarking firm that analyzes complex enterprise IT systems.
The potential for hacking sensitive corporate information is much higher when portable devices such as iPhones, BlackBerrys and tablets are used in enterprise settings.
Overall, encrypting the data is helpful, but many workplaces lack adequate IT support to make smartphones and tablets more secure, according to Sizemore.
In any case, "those measures will not prevent individual apps from collecting personal information and sending it to the vendors' server. That situation is what we call the Wild, Wild West of mobile devices. With Google Android devices, it's even more of a Wild West situation," he added.
Users need to understand that with many of the apps on their devices, all the information they store there, including every place they go, is collected and transmitted by more than one app, warned Sizemore.
That is often the real motivation behind free apps that otherwise have no revenue stream: the app designers gather all that information and then sell it to the highest bidders and marketing-list wholesalers across the globe.
And one big difference between Android and other mobile operating systems is the trust factor with installed applications. Android trusts users to accept what they install. It is up to the user to decide which permissions to give to each application, explained Ryan Smith at Mojave Networks.
"Android also gives users the option of downloading from third-party sites. Unlike Apple, Google is not the arbiter deciding which applications you can or cannot install," Smith said, and that's a potential security issue.
This isn't necessarily a security vulnerability in Android, but it is something that the user has to be aware of and look out for, he added.
Depending on the mobile apps that users download, the risks of malware and virus attacks can run the whole spectrum. There is some Android malware, but it tends to be more prevalent in regions outside the U.S. and Europe, Smith said.
The security issues are more concentrated in China, other parts of Asia, and Russia. Attacks are not exclusive to those regions, but Android malware is not as prevalent elsewhere, Smith explained.
Similar to attacks on computer systems, some mobile operating systems pose more complex targets than others. This brings into play the popularity factor.
To be sure, Android is a popular target in the mobile world, just like Microsoft is an easy target in the PC world. They both are the predominant operating systems in their categories, according to Jack Walsh, mobility program manager at ICSA Labs, an independent division of Verizon Wireless.
"The malware writers are going to spend most of their time where they can get the best returns. Attackers are going to be able to exploit any mobile operating system. It's just that right now, they are concentrating more on Android," Walsh said.
With Microsoft's platform being the latest entry in the mobile market, the jury is still out on whether apps that run on it will be safer or more prone to attacks, offered Smith.
Apple carefully analyzes the app files submitted to its popular App Store; it is easier to fool the system with an Android APK (application package file), Sizemore added.
With smartphones and tablets, even more risk comes from insecure apps than from vulnerabilities in the mobile operating system itself. In the case of Android, its granular permission structure puts each application in a sandbox, keeping every app separate from all other running apps in terms of privileges, explained Smith.
"In some newer versions, some mobile apps provide encryption, so they provide somewhat better security. It simply boils down to trusting the applications you want to install," he said.
The majority of attacks on mobile devices involve fraudulent banking apps. Once they get slipped into app stores, you cannot tell them apart from the real apps, according to Walsh, and that's a real security concern.
"The ultimate goal is to get these apps into consumers' hands. When users input account information, instead of being transmitted to the proper bank, it goes to fraudulent servers located in foreign countries such as China, Russia and Brazil," he said.
Beyond malware, there are additional security risks that fall into a gray area. These risks do not result from bad intentions or from apps written by bad people, Smith added.
"Nevertheless, those app weaknesses do pose a level of risk to businesses. It may not be as great a risk as malware, but it is still a risk, and system admins need to look out for them," he said.
And adware is becoming far more prevalent on mobile devices because of the information apps collect every day. This can include the user's location, the phone the person is using, and some of the information stored on the phone about the user, warned Smith.
And it's important to mention that the overzealous mining of users' data is a matter of vendor ethics. Legitimate vendors are less likely to broker mined personal information inappropriately, noted Smith.
One tool mobile device users can employ is a first-rate security scanner application, suggested Mojave Networks' Smith, who recommended looking for a security app that provides a fine-grained approach to classifying threats, especially on Android devices.
"You need more than a security app that just says that the scan showed no malware. You need one that categorizes the level of the risks and where it lies on the device," he said.
At the end of the day, when it comes to security, due diligence goes a very long way when selecting a mobile application, added ICSA Labs' Walsh.
You need to be certain that the app developer is legitimate, trustworthy and has a good history of quality app development, he urged.
Another good pro-active and prevention step would be to check if the mobile app developer follows testing and certification industry best practices. This is particularly critical when an employer deploys mobile devices to a workforce.
Perhaps the most significant piece of advice for mobile device users is not to download apps from lesser-known app stores, added Walsh. Measure the risks and decide if you want to take on that level of risk.
"I'm not trying to say that every single source is disreputable. I'm just saying that it is probably in your best interest to just look on Amazon or Google. But even when you do that, be aware of what you are downloading," he cautioned.
Making it harder on the bad guys is also a critical self-protection step. For example, use a PIN of more than six digits; having a PIN is better than not having one at all, obviously.
Even better is having a drawn figure or pattern, added Sizemore. "Users should encrypt their data so a password is needed to decrypt it," he said. "And whatever you do, do *not* use the same decryption code for every instance," he added.
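Sizemore's advice — encrypt your data behind a password, and never reuse the same decryption code everywhere — can be illustrated with Python's standard library. This is a hedged sketch of the key-derivation step only (a real deployment would feed the derived key into an authenticated cipher from a library such as `cryptography`); the passphrase and iteration count here are illustrative, not recommendations from the article.

```python
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Stretch a password into a 256-bit key with PBKDF2-HMAC-SHA256.
    The iteration count makes brute-force guessing expensive."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = os.urandom(16)  # unique per device or per file ("instance")
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32  # 256-bit key

# Same password + same salt reproduces the key for decryption...
assert hmac.compare_digest(key, derive_key("correct horse battery staple", salt))

# ...but a different salt yields a completely different key, which is
# how you avoid using the same decryption code for every instance.
other = derive_key("correct horse battery staple", os.urandom(16))
assert not hmac.compare_digest(key, other)
```

The salt is the mechanism behind "don't reuse the same code": it is stored alongside the ciphertext in the clear, yet forces a distinct key per device or file even when the password is the same.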
In other internet security news
The recent simulation of a cyber attack sponsored by the Bank of England was designed to test how well financial companies in London and elsewhere in Britain could handle major hacking attempts. We now learn that the simulation uncovered serious communication problems in the infrastructure.
The program was dubbed Waking Shark II and took place in November 2013. It was meant to test how investment banks and financial institutions held up together under a sustained assault by several hackers.
The overall results were an improvement on those from the original Waking Shark I exercise, which took place in 2011, while still offering plenty of scope for improvement, according to an official report on the exercise from the Bank of England.
"The exercise successfully demonstrated cross-sector communications and coordination through the CMBCG (Cross Market Business Continuity Group), information sharing through the use of the CISP (Cyber Security Information Sharing Partnership) platform and enabled all participants to better understand the requirements of the Financial Authorities in Britain," the report concludes.
The report added that the banks' overall communication systems were severely hampered by the lack of an overall clearing house for cyber threat information.
"Consideration will be given to the identification of a single coordination body from the industry to manage communications across the segment during an incident," the report recommends.
The simulated attacks took place over four hours but were designed to reflect a three-day assault involving DoS (denial of service) and malware elements. Other problems were identified as well, including some confusion about the role of the Financial Services Authority.
For example, attacked banks were criticised for not calling the police, a breach of agreed procedures. The Bank of England outlined the scenario played out during the simulated attacks which, contrary to earlier reports, didn't test the cyber resilience of high street banks.
The simulation was based on a concerted cyber-attack against the U.K. financial sector by a hostile nation state with the aim of causing significant disruption and dislocation within the wholesale market and supporting infrastructure.
Although the impacts caused by the cyber-attacks would have had an international as well as a U.K. dimension, for the purposes of the exercise, the scope of the simulated attacks was restricted to the management of the impacts in the U.K.
The program was set over a three-day period the last day of which happened to coincide with “Triple Witching” (when options and futures contracts for stock index and stock options all expire on the same Friday).
The three-day period was broken into different phases, playing out various technical and business impacts from the scenario. The simulation examined how financial companies and banks would manage their response to the cyber-attacks both on a technical level (in particular information-sharing amongst the firms via the CISP tool), and from a business perspective.
Elements of the cyberwar exercise included DDoS (distributed denial of service attacks) causing the firms’ global websites and certain other internet-facing systems to be unresponsive or intermittently available, as well as APT and PC wipe attacks that penetrated the firms’ networks for disruptive and destructive purposes.
All this had knock-on effects on trading and reconciliation systems. At least on paper, this looks to be fairly challenging, yet the exercise was criticised by some banks as not being challenging enough.
Some participants wanted a greater emphasis on cyber-espionage and malware in future exercises. There were also calls to involve telecom service providers such as British Telecom in the exercise.
Adrian Culley, technical consultant at anti-botnet firm Damballa and formerly of Scotland Yard’s Computer Crime Unit, said that banks had a long way to go before their malware protections were up to scratch.
"Financial institutions in the United Kingdom have real, active infections inside their networks right now," Culley said. "Caphaw is one example of a very prevalent advanced attack, but there are many others as well, and that's disturbing."
"Despite Waking Shark II, there appears to be a real disconnect between Business Secretary Vince Cable's very timely warning and banks actually holding accessible, actionable intelligence," he added.
"How are they planning to ever respond decisively without such intelligence? Those institutions are part of the U.K.'s Critical National Infrastructure, and both active attacks and the threat of attack are very real."
Source: The New York Times.