Hacker Says iPhone 3GS Encryption Is ‘Useless’ for Businesses


(By: Wired.com)
* By Brian X. Chen
* July 23, 2009, 3:20 pm
* Categories: Phones

Apple claims that hundreds of thousands of iPhones are being used by corporations and government agencies. What it won’t tell you is that the supposedly enterprise-friendly encryption included with the iPhone 3GS is so weak it can be cracked in two minutes with a few pieces of readily available freeware.

“It is kind of like storing all your secret messages right next to the secret decoder ring,” said Jonathan Zdziarski, an iPhone developer and a hacker who teaches forensics courses on recovering data from iPhones. “I don’t think any of us [developers] have ever seen encryption implemented so poorly before, which is why it’s hard to describe why it’s such a big threat to security.”

With its easy-to-use interface and wealth of applications available for download, the iPhone may be the most attractive smartphone yet for business use. Many companies seem to agree: In Apple’s quarterly earnings conference call Tuesday, Apple chief operating officer Tim Cook said almost 20 percent of Fortune 100 companies have purchased 10,000 or more iPhones apiece; multiple corporations and government organizations have purchased 25,000 iPhones each; and the iPhone has been approved in more than 300 higher education institutions.

But contrary to Apple’s claim that the new iPhone 3GS is more enterprise friendly, the handset’s encryption feature is “broken” when it comes to protecting sensitive information such as credit card and Social Security numbers, Zdziarski said.

Zdziarski said it’s just as easy to access a user’s private information on an iPhone 3GS as it was on the previous-generation iPhone 3G or the first-generation iPhone, neither of which featured encryption. If a thief got his hands on an iPhone, a little bit of free software is all that’s needed to tap into all of the user’s content. Live data can be extracted in as little as two minutes, and an entire raw disk image can be made in about 45 minutes, Zdziarski said.

Wondering where the encryption comes into play? It doesn’t. Strangely, once one begins extracting data from an iPhone 3GS, the iPhone begins to decrypt the data on its own, he said.

To steal an iPhone’s disk image, hackers can use popular jailbreaking tools such as Red Sn0w and Purple Ra1n to install a custom kernel on the phone. Then, the thief can install a Secure Shell (SSH) server on the phone and copy the iPhone’s raw disk image over SSH onto a computer.

To demonstrate the technique, Zdziarski established a screenshare with Wired.com, and he was able to tap into an iPhone 3GS’ data with a few easy steps. The encryption did not pose any hindrance.

Nonetheless, professionals using the iPhone for business don’t seem to care about, or even know of, the device’s encryption weakness.

“We’re seeing growing interest with the release of iPhone 3.0 and the iPhone 3GS due in part to the new hardware encryption and improved security policies,” Cook said during Apple’s earnings call. “The phone is particularly doing well with small businesses and large organizations.”

Clearly, the gigantic offering of iPhone applications is luring these business groups. Quickoffice Mobile, for example, enables users to access and edit Microsoft Word or Excel files on their iPhone. For handling transactions, merchants can use apps such as Accept Credit Cards to process a credit card on an iPhone anywhere with a Wi-Fi or cellular connection.

Several employees of Halton Company, an industrial equipment provider, are using iPhones for work, according to Lance Kidd, chief information officer of the company. He said the large number of applications available for the iPhone makes it worthy of risk-taking.

“Your organization has to be culturally ready to accept a certain degree of risk,” Kidd said. “I can say we’ve secured everything as tight as a button, but that won’t be true…. Our culture is such that our general manager is saying, ‘I’m willing to take the risk for the value of the applications.’”

Kidd noted that Halton employees are not using iPhones for holding confidential customer information, but rather for basic tasks such as e-mailing and engaging with clients via social networking sites such as Facebook and Twitter. Halton also plans to code apps strictly for use at the company, Kidd said.

According to Kidd, a security expert who evaluated Halton concluded that a hacker could find a way in no matter the level of security. Halton therefore has measures in place to respond to an information security breach rather than trying only to prevent one.

“It’s like business continuity,” Kidd said. “You prepare for disasters. You prepare for if there’s an earthquake and the building breaks down, and you prepare for if there’s a crack in [information] security.”

But Zdziarski stands firm that the iPhone’s software versatility isn’t worth the risk for use in the workforce. He said sensitive information is bound to appear in e-mails or anything that can be contained on the iPhone’s disk, which can be easily extracted by thieves thanks to the new handset’s shoddy encryption.

Zdziarski said it’s up to the app developers to add an extra level of security to their apps because Apple’s encryption feature is so poor.

“If they’re relying on Apple’s security, then their application is going to be terribly insecure,” he said. “Apple may be technically correct that [the iPhone 3GS] has an encryption piece in it, but it’s entirely useless toward security.”
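Zdziarski’s advice boils down to apps encrypting sensitive fields themselves rather than trusting the handset’s disk encryption. As a minimal illustration of that encrypt-before-storing pattern — a toy sketch, not production cryptography, and every name in it is an assumption; a real iPhone app would call a vetted library such as the platform’s CommonCrypto — an app could seal a credit card number under its own key before writing it to disk:

```python
# Sketch: app-layer encrypt-then-MAC, so data on disk is opaque even if the
# handset's own encryption is bypassed. The SHA-256 counter-mode keystream
# here is illustrative only; use a vetted cipher (e.g. AES) in real code.
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key + nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Return nonce || ciphertext || HMAC tag for storage on disk."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key: bytes, blob: bytes) -> bytes:
    """Verify the tag, then decrypt; raise if tampered or wrong key."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("tampered data or wrong key")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

The point of the sketch is the architecture, not the cipher: because the key lives with the app (or better, with the user) rather than with the phone, a raw disk image copied off a stolen handset yields only opaque blobs.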

He added that the iPhone’s ability to erase itself remotely using Apple’s MobileMe service isn’t very helpful, either: Any reasonably intelligent criminal would remove the SIM card to prevent the remote-wipe command from coming through. (In a past Wired.com report, Zdziarski said the iPhone’s remote-wiping ability pales in comparison to Research In Motion’s BlackBerry, which can self-delete automatically after the phone has been inactive on the network for a preset amount of time.)

On top of that, the iPhone isn’t well protected in everyday use, said John Casasanta, founder of iPhone development company Tap Tap Tap. He said that though Apple’s approval process scans for malicious code, a developer could easily tweak an app to send a user’s personal data, such as his contacts list, over the network without his knowing.

“Apple can see if something is blatantly doing something malicious in the approval process, but it wouldn’t be very hard to do something behind the scenes,” Casasanta said.

Evidently, it isn’t difficult to sneak unauthorized content into the App Store. In May, Wired.com reported on an exploit demonstrated by the iPhone app Lyrics. Apple initially rejected the app because it contained profane words, and then Lyrics’ developer snuck the profanity into the app with a hidden Easter egg. Apple then approved the application.

Zdziarski added that there are other weaknesses with the iPhone: Pressing the Home button, and even zooming in on a screen, automatically creates a screenshot temporarily stored in the iPhone’s memory, which can be accessed later. And then there’s the keyboard cache: keystrokes logged in a file on the phone, which can contain information such as credit card numbers or confidential messages typed in Safari. Cached keyboard text can be recovered from a device dating back a year or more, Zdziarski said.

Though Apple has declined to comment on iPhone security issues, the company has more or less admitted iPhones are vulnerable to security threats, because an emergency measure exists. In August 2008, Apple CEO Steve Jobs acknowledged the existence of a remote kill switch for iPhone apps, meaning if a malicious app made its way onto iPhones, Apple could trigger a command to delete the app from users’ devices. There is no evidence that the kill switch has ever been used.

So, what kind of business should you do with an iPhone if the device is not very secure? Zdziarski said there are some business-savvy apps that have managed to integrate better security (such as secure data fields to prevent key-stroke logging of credit card numbers, for example), but he warned companies to be cautious about investing too much trust in the iPhone and the apps available for it.

“We’re going to have to go with the old imperative of ‘Trust no one,’” he said. “And unfortunately part of that is, don’t trust Apple.”

A Question of Time

Like many things (though many things are not), it is only a question of time before the waters begin to clear. And not just the waters of the sea, but the professional, emotional and personal waters as well.

While there is no guarantee that everything will fall into place with the passing years, when you have a system for doing things and you are aiming at goals you have already set, you must be patient, keep walking in that direction, and wait for the fruit of your work and constancy to begin to bloom. Great projects above all take time to bear fruit, since a certain sequence of events must first run its course.

Although it is important to keep in mind that every day we must put in work and a dose of unique value, the most important thing is not to give up, to measure our progress, and to stay focused on the goal.

So with a goal, a well-defined plan of action, and constant measurement of where we are heading, it is only a question of time.

3 Reasons Why U.S. Cybersecurity Sucks


* By Michael Tanji
* July 14, 2009, 8:44 am
* Categories: Info War

Good news, cybersecurity nerds: You ain’t running out of work anytime soon. As last week’s cyber panic about North Korea showed, when there isn’t a teenager-simple denial-of-service attack delaying your access to a government website, there is a voracious hype machine that feeds on the tiniest slivers of data – both significant and trivial – and expels massive quantities of fear and misinformation. And where there’s cyber fear, there’s cybersecurity work to be done.

It’s sad that this sham is allowed to continue unabated. But worse still, it’s dangerous. Despite the expenditure of tens of billions of dollars and countless studies on what needs to happen (not to mention all the offices, centers and commands that are supposed to implement those reports), we’re still largely screwed when it comes to threats of the online variety.

The problem is multifaceted, but can be broken down into three meta-categories:

1. Bullshit. It’s the North Koreans! It’s the Chinese! It’s the Russkies out to steal our essence! The one thing you can be sure of is that very few people know who is behind any cyberattack. Code analysis helps to a degree (“Hey, there are some Chinese characters in here!”) but code reuse is not exactly an unknown phenomenon online. There is no serious attribution methodology, so to some extent everyone is guessing.

2. Ineptitude. There are a lot of people working on cybersecurity issues, a lot of people “managing” these issues, but not a lot of people leading on these issues. Cybersecurity doesn’t lack for brainpower; it lacks the vision, the juice and the intestinal fortitude to realize the vision. When your focus is billets and resources and dollars and org charts (read: management) it’s easy to see why cybersecurity fails. Why? Cyber doesn’t kill, it doesn’t maim, it rarely has a negative impact on any scale, and when it does, the event is almost always readily recoverable. Managers don’t deal well with the nebulous, the intangible, or anything that involves “maybe.”

3. Complexity. The people at Verizon look on bemused when the military talks of achieving information-space dominance, when with the flick of a switch, a technician in overalls and a tool belt can render our digital military might inert. Attack and defense tools are built for computer-based warfare, but planetwide more people access the net with phones than desktops. There has yet to be a study that has looked at these problems in a truly comprehensive manner (read: not dominated by geezers who have other people read and respond to their e-mail). Mostly they’re focused on legacy futures, which is cool if you’re not interested in forward progress.

Cybersecurity is a real problem. It has been since computers were invented and connected to one another, but we’re no better off today than we were then. It is not as if we don’t have any lessons learned to draw from. We are in fact worse off because of the extent of our interconnectedness, and that says a lot more about those who purport to be enhancing cybersecurity than it does about those who are out to subvert it.

[Photo: USAF]

(By: Wired.com)

Afraid to Speak

So much responsibility falls on our words that we may reach a point where we prefer to stay silent. It is not just a matter of not knowing what to say; rather, we know exactly what we want to say, but the possible consequences of saying it drive us to self-censorship.

Despite the official freedom we enjoy today, the unofficial limits on freedom of expression can prove practically as stifling as those once used by totalitarian regimes. The inevitable, sad consequence is that individuals become padlocks on themselves, and many ideas stay trapped in the sick entrails of a person who prefers to keep things inside.

What would it mean for a father to say openly that his son frustrated his dreams of being a musician, or for a priest to admit he feels homosexual desire, or for a little girl to confess she wants to kill her baby brother out of jealousy? These are acts as human as the explosive emotions themselves; but without judging whether a desire is right or wrong, the need for expression becomes so pressing that intellectual or expressive constipation can turn into a terrible migraine in the brains of the heart.

Let us try to understand words as the vehicle of human emotions, and not only as given facts that must be made concrete; let that vehicle also carry out the garbage that lives and rots inside us.

Let us shed the excess baggage and open our mouths wider.

The Next Hacking Frontier: Your Brain?

(By: Wired.com)


* By Hadley Leggett
* July 9, 2009, 12:59 pm
* Categories: Biotech, Brains and Behavior, Ethics


Hackers who commandeer your computer are bad enough. Now scientists worry that someday, they’ll try to take over your brain.

In the past year, researchers have developed technology that makes it possible to use thoughts to operate a computer, maneuver a wheelchair or even use Twitter — all without lifting a finger. But as neural devices become more complicated — and go wireless — some scientists say the risks of “brain hacking” should be taken seriously.

“Neural devices are innovating at an extremely rapid rate and hold tremendous promise for the future,” said computer security expert Tadayoshi Kohno of the University of Washington. “But if we don’t start paying attention to security, we’re worried that we might find ourselves in five or 10 years saying we’ve made a big mistake.”

Hackers tap into personal computers all the time — but what would happen if they focused their nefarious energy on neural devices, such as the deep-brain stimulators currently used to treat Parkinson’s and depression, or electrode systems for controlling prosthetic limbs? According to Kohno and his colleagues, who published their concerns July 1 in Neurosurgical Focus, most current devices carry few security risks. But as neural engineering becomes more complex and more widespread, the potential for security breaches will mushroom.

For example, the next generation of implantable devices to control prosthetic limbs will likely include wireless controls that allow physicians to remotely adjust settings on the machine. If neural engineers don’t build in security features such as encryption and access control, an attacker could hijack the device and take over the robotic limb.
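The access control the researchers call for could be as simple as requiring every wireless command to carry a message-authentication code under a key shared with the clinician’s programmer, plus a counter so replayed packets are rejected. The sketch below illustrates that idea; the packet layout, key provisioning, and names are all assumptions for the example, not anything from the paper:

```python
# Sketch: HMAC-authenticated, replay-protected commands for a hypothetical
# wireless implant. Only packets signed with the shared key and carrying a
# fresh counter value change the device's settings.
import hashlib
import hmac
import struct

KEY = b"clinician-shared-secret-32bytes!"  # assumed provisioned at implant time

def sign_command(counter: int, setting: int) -> bytes:
    """Build a packet: 8-byte counter, 4-byte setting, 32-byte HMAC tag."""
    msg = struct.pack(">QI", counter, setting)
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

class Implant:
    def __init__(self):
        self.last_counter = 0
        self.setting = 0

    def receive(self, packet: bytes) -> bool:
        """Apply a settings change only if the packet is authentic and fresh."""
        msg, tag = packet[:-32], packet[-32:]
        if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
            return False  # forged or corrupted packet: reject
        counter, setting = struct.unpack(">QI", msg)
        if counter <= self.last_counter:
            return False  # replayed packet: reject
        self.last_counter = counter
        self.setting = setting
        return True
```

An attacker who records a legitimate adjustment and retransmits it gets nowhere, because the counter has already been consumed; an attacker without the key cannot forge a valid tag at all.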

“It’s very hard to design complex systems that don’t have bugs,” Kohno said. “As these medical devices start to become more and more complicated, it gets easier and easier for people to overlook a bug that could become a very serious risk. It might border on science fiction today, but so did going to the moon 50 years ago.”

Some might question why anyone would want to hack into someone else’s brain, but the researchers say there’s a precedent for using computers to cause neurological harm. In November 2007 and March 2008, malicious programmers vandalized epilepsy support websites by putting up flashing animations, which caused seizures in some photosensitive patients.

“It happened on two separate occasions,” said computer science graduate student Tamara Denning, a co-author on the paper. “It’s evidence that people will be malicious and try to compromise people’s health using computers, especially if neural devices become more widespread.”

In some cases, patients might even want to hack into their own neural device. Unlike devices to control prosthetic limbs, which still use wires, many deep brain stimulators already rely on wireless signals. Hacking into these devices could enable patients to “self-prescribe” elevated moods or pain relief by increasing the activity of the brain’s reward centers.

Despite the risks, Kohno said, most new devices aren’t created with security in mind. Neural engineers carefully consider the safety and reliability of new equipment, and neuroethicists focus on whether a new device fits ethical guidelines. But until now, few groups have considered how neural devices might be hijacked to perform unintended actions. This is the first time an academic paper has addressed the topic of “neurosecurity,” a term the group coined to describe their field.

“The security and privacy issues somehow seem to slip by,” Kohno said. “I would not be surprised if most people working in this space have never thought about security.”

Kevin Otto, a bioengineer who studies brain-machine interfaces at Purdue University, said he was initially skeptical of the research. “When I first picked up the paper, I don’t know if I agreed that it was an issue. But the paper gives a very compelling argument that this is important, and that this is the time to have neural engineers collaborate with security developers.”

It’s never too early to start thinking about security issues, said neural engineer Justin Williams of the University of Wisconsin, who was not involved in the research. But he stressed that the kinds of devices available today are not susceptible to attack, and that fear of future risks shouldn’t impede progress in the field. “These kinds of security issues have to proceed in lockstep with the technology,” Williams said.

History provides plenty of examples of why it’s important to think about security before it becomes a problem, Kohno said. Perhaps the best example is the internet, which was originally conceived as a research project and didn’t take security into account.

“Because the internet was not originally designed with security in mind,” the researchers wrote, “it is incredibly challenging — if not impossible — to retrofit the existing internet infrastructure to meet all of today’s security goals.” Kohno and his colleagues hope to avoid such problems in the neural device world, by getting the community to discuss potential security problems before they become a reality.

“The first thing to ask ourselves is, ‘Could there be a security and privacy problem?’” Kohno said. “Asking ‘Is there a problem?’ gets you 90 percent there, and that’s the most important thing.”

Via Mind Hacks