How to Access Pandora From Anywhere in the World

(By: Wired How-To Wiki)

Pandora and other U.S.-based streaming music services have long since slammed the doors on their international listeners. It’s not as if the internet radio stations had a choice. Rather, the xenophobic restrictions are the result of U.S. and international copyright law.

Pandora is reportedly working out international licensing deals. As Tim Westergren writes in a message to international users, “the pace of global licensing is hard to predict, but we have the ultimate goal of being able to offer our service everywhere.”

So how are intercontinental rockers going to dance to Pandora’s sweet tunes in the meantime?

Well, one solution is to mask your identity by using a proxy server. Pandora blocks users with non-U.S. IP addresses. If you connect to a server in the U.S. and use it as a middleman between your PC and Pandora, Pandora won’t know the difference: to Pandora’s servers, you look like the middleman. Proxies make it possible to rock out to Pandora from anywhere in the world.

Using a proxy to reach geo-restricted web applications from foreign soil isn’t just for Pandora. There are web proxies for every state in the United States and almost every country in the world. Hiding behind a proxy’s IP means access to foreign music stores and other websites normally blocked in your area. You can even simulate foreign, or out-of-LAN, access to your own web projects by using proxies.

So how do you use a proxy? Put on your dancing shoes and let’s dig into the wonderful world of proxies.
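For the technically inclined, the middleman idea can be sketched in a few lines of Python using only the standard library. The proxy address below is a placeholder from a reserved documentation range, not a working server; swap in a real U.S. web proxy’s host and port from any public proxy list.

```python
# A minimal sketch of the "middleman" setup using Python's standard
# library. The proxy address is a placeholder, not a real server --
# substitute a working U.S. proxy's host:port before using it.
import urllib.request

def make_us_proxy_opener(proxy_addr):
    """Build an opener that tunnels HTTP(S) traffic through proxy_addr.

    To the destination server (Pandora, in this case), requests made
    with the opener appear to come from the proxy's U.S. IP, not yours.
    """
    handler = urllib.request.ProxyHandler({
        "http": "http://" + proxy_addr,
        "https": "http://" + proxy_addr,
    })
    return urllib.request.build_opener(handler)

opener = make_us_proxy_opener("203.0.113.10:8080")  # placeholder address
# opener.open("http://www.pandora.com/") would now route via the proxy.
```

The same opener works for any geo-blocked site, not just Pandora; only the URL you open changes.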

This article is a wiki. If you have a behind-the-scenes way to access Pandora, log in and add instructions.

Before the World Cup, El Tri’s Roster Will See Two Cuts – Futbol – mediotiempo.com

He wouldn’t say what percentage of the roster of players who will represent Mexico at the 2010 World Cup in South Africa he has already settled, but this Sunday Javier Aguirre did venture to explain how he will pare down the final list, which will involve two rounds of cuts.

At the start of the third call-up of the year, in which the Mexican national team prepares for its friendly against Iceland, “El Vasco” Aguirre also detailed the traits that would rule players out of consideration for Mexico’s World Cup squad.

“The 18 players I call up next week aren’t guaranteed a trip to the World Cup — no, none of those 18. I want to call up 18 because there are three matches, on May 7, 10 and 13, and I have to bring 18 players.

“The Europe-based players, whose seasons end on May 16, will join those 18 (…) I expect to have some of the Europeans by the 16th, so on the trip to Europe on the 18th I will leave three or maybe four behind in Mexico. That’s how it works.

“Before that we’ll play England, then Holland; the third match is there (Portugal), and Italy at the end. Before Italy I have to submit the 23-man list, which means one or two will be left out,” the coach explained.

That said, he noted it will not be until June 1 that El Tri can be said to have a defined roster for the World Cup in South Africa.


Lev Yilmaz

The “Tales of Mere Existence” series began in 2002 as a series of animated comics that were shown at film festivals. Each video in the “Tales of Mere Existence” series shows a series of static cartoons, which appear gradually as if being drawn by an invisible hand. Lev got the idea for the technique from the movie “The Mystery of Picasso” (1956), which similarly showed Picasso’s paintings appearing from the other side. Lev writes, draws, films, edits, and narrates the “Tales” videos, which touch on mostly mundane aspects of life and have been described as “bleakly hilarious.” In 2003, Lev began to sell DVDs that contained some of his short comic films. Along with the DVDs came the first print version of “Tales of Mere Existence.” Over the next six years, Lev would publish three more books and build a fan base of thousands. His first official book, “Sunny Side Down,” was published by Simon & Schuster in 2009.

Make Your Office Eco-Friendly

(By: Wired.com)

According to a 2009 Gallup poll, the average employed adult works between 35 and 44 hours a week.

That’s a huge amount of time to spend in the office, and yet the same people who change all their light bulbs to CFLs rarely give a second thought to getting their to-go coffee in a paper cup.

It only takes minutes to make your workspace a little more environmentally friendly. Here are a few tips to get you started.

This article is part of a wiki anyone can edit. Got extra advice for greening up your desk at work? Log in and contribute.

1. Turn off the equipment when you leave for the day. Yes, even though Billy from accounting likes to stay late. An action as simple as turning off a 75-watt desktop monitor when you leave can save as much as 750 pounds of carbon emissions a year. A power strip can make turning off all the equipment easier at the end of the day, including coffee makers and microwaves. Just make sure the printer is powered down properly, as printers need to seal their cartridges before shutting off.

2. Buy green materials. Switching to recycled printer paper could save thousands of innocent trees a year — and that’s not including paper towels, toilet paper, water cups and all the other products that make working in an office a comfortable enterprise. Many offices have already stopped buying the formerly ubiquitous bottled water. Talk to your office manager about stocking recycled printer paper or replacing the break room cookies with locally grown fruit. And nix printing out separate agendas for everyone at the morning meeting — slides or e-mailed agendas work just fine.

3. Green your duds. If you’re not lucky enough to work in an office where jeans and Chuck Taylors are de rigueur, you already know that the great thing about office clothes is that they’re not supposed to be particularly trendy. Consider buying your crisply pressed trousers and blouses from thrift or consignment stores. Also, avoid dry-cleaning. Most dry cleaners use a chemical known as perchloroethylene, which is dangerous for both you and the environment. “Perc” is a known carcinogen that erodes the ozone layer and can easily contaminate groundwater. Most materials, like silk and wool, can be hand-washed. If you must go to a dry cleaner, look for one that uses green cleaning techniques, such as liquid carbon dioxide.

4. Telecommute. E-mailing, instant-messaging and videoconferencing have made working from home easier than ever before. Take advantage of it! Getting off the road even one day a week significantly reduces the amount of gasoline you burn, and you can even use the time you save on the trip to have an extra cup of coffee in your reusable ceramic mug. If telecommuting isn’t a possibility for you, consider asking your boss about instituting a commuter credit program for use on public transportation, or putting up a bulletin board for carpooling.

5. Use reusable cups. Avoid using Styrofoam cups for anything. Use a mug for coffee and a water bottle for water. If you recycle at home, you can reduce, reuse and recycle at the office too.

6. Recycle everything. You can recycle everything from the plastics that come out of the vending machines to paper that might otherwise get thrown away. Some companies will even take your old office furniture and recycle the desks and chairs. You could also donate the furniture to a nearby school, which helps the community and increases the company’s tax write-off. Here is a site to reference: Planet Green.
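For the curious, the carbon arithmetic behind tip No. 1 can be sketched in a few lines. Every number here is an assumption for illustration: a 75-watt monitor that would otherwise stay on around the clock, and a rough U.S. grid average of 1.2 pounds of CO2 per kilowatt-hour (the real factor varies widely by region and year).

```python
# Back-of-envelope arithmetic for powering a device off overnight and
# on weekends. All figures are illustrative assumptions.

def annual_co2_savings_lbs(watts, off_hours_per_week, lbs_co2_per_kwh=1.2):
    """Pounds of CO2 avoided per year by switching a device off."""
    kwh_per_year = watts * off_hours_per_week * 52 / 1000
    return kwh_per_year * lbs_co2_per_kwh

# 15 idle hours each weekday night plus all 48 weekend hours:
off_hours = 15 * 5 + 48  # = 123 hours per week
print(round(annual_co2_savings_lbs(75, off_hours)))  # prints 576
```

Dialing the assumptions up or down (a hungrier monitor, a dirtier grid) moves the result toward or past the 750-pound figure quoted above, which is why such estimates vary so much.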

I, Google

(By: Wired.com)

Google’s announcement that it intends to build and test superfast fiber-optic broadband networks in a few communities around the United States has some locations pulling out all the stops to be chosen, with attention-getting stunts that scream to the search giant: “Pick me! Pick me!”

Some cities have (temporarily) renamed themselves with some sort of Google-ish name. Others have seen that bet and raised it, promising to include “Google” in the name of every newborn. And the negative ads are starting to come out, with some candidates exposing the shortcomings of others.

I understand the high temperature that this particular fever brings. Ultrahigh-speed broadband from a company such as Google would be massive. The thought of having internet speeds one hundred times faster than my current service is pretty damn appealing. So it didn’t really surprise me when my own hometown of Sarasota, Florida, joined the ranks of Duluth, Minnesota, or Topeka, Kansas, or Buffalo, New York, and others in some sort of publicity stunt to draw the attention of Google.

It’s not unfounded. Sarasota has a lot to offer and of course, I’m biased. The whole thing is a great use of social media, with hundreds of Facebook groups popping up and towns making their own viral videos. Trust me, if you’ve ever been to Central Florida along the coast you wouldn’t think that the populace here would even know what a viral video is, much less be able to comprehend making one.

So far, the most popular stunts have been Topeka renaming itself Google for one month, and Duluth making a set of tongue-in-cheek videos in which its mayor proclaims that every first-born child will be named either Google Fiber or Googlette Fiber. Sarasota has made its own video, showing that Duluth is very cold and Topeka doesn’t have much of a view, while Sarasota is paradise (Tip to Sarasota: Put the video on Google-owned YouTube instead of only Facebook. Just sayin’).

Sarasota currently ranks fifth on the list, while Grand Rapids, Michigan, leads the pack so far with over 20,000 votes. Sarasota’s stunt is renaming the popular park, City Island, to Google Island. Unoriginal at best, but the point is the same.

What is the point? The point is that Google has most likely already chosen a destination for their ultrahigh-speed broadband testing grounds. It’s going to be a town with fiber already in the ground, and it’s going to be a town that has something that will really test the broadband. Google is going to need the right kind of technology and industry in order to truly test their network. A hundred times faster than our current internet speeds is fast, really fast. As individuals, do we really need that type of speed in our homes? Well, of course we do. What kind of question is that?

A great side effect of Google Fiber even thinking about coming into a market such as Sarasota, where Comcast or Verizon are the only options, is that it will prompt both of those companies to adjust their current behavior when it comes to high-speed broadband. Comcast has mentioned restricting broadband, while Verizon is still limited to certain areas. Both these activities will have to change in order to compete. Comcast will have to keep unrestricted broadband, while Verizon might want to think about expanding past the highway, not to mention competitive pricing. Google plans on creating an open-access provider network; that is, they’ll provide their service to independent ISPs who will then sell to you, similar to how the phone system operates.

Which, again, filters back to price point. Ultrahigh-speed broadband? The future is here, and its name is Google Fiber. Of course, this could be just the start of Google Skynet for all we know. I, for one, welcome my new Google overlords to Sarasota, and I’ll let you all know how superfast and awesome their broadband is.

Towns have until March 26 to nominate themselves through Google’s RFI site.

Read more: http://www.wired.com/geekdad/2010/03/i-have-renamed-my-house-google/

The Top 10 Movies That Should Never, Ever Be Converted to 3-D

(By: Wired.com)

Photo by Alan Levine; used under Creative Commons Attribution license.


Why, really, did the 3-D movie trend start? Does anybody remember, before the trend began, thinking “You know the problem with movies? They’re too two-dimensional?” We didn’t think so. 3-D is so entrenched in the movie industry now that commercials for the upcoming remake of Clash of the Titans actually point out that it is “also in 2-D” — as though that wasn’t the norm.

Now there’s talk of re-releasing classic films, converted to 3-D. You really would think people would have learned a lesson from the hue and cry over the colorization of old black-and-white films in the 1980s, but apparently you’d be wrong.

Here, then, are the top 10 movies that, for one reason or another, we at GeekDad fervently hope are never … what would the word be? “3-D-ized?” “Depth-ized?” We need a word that evokes the concept of things that looked fine to begin with getting alterations for superficial, faux-cosmetic reasons in order to earn more money. Perhaps something involving Cher.

10. Alien — The chest-burst scene is quite scary and gory enough, without the baby coming out of the screen towards the audience, thank you very much.

9. The Pirates of the Caribbean films — Orlando Bloom is wooden enough in two dimensions. And besides, with the exception of Jack, virtually all of the characters are one-dimensional, so displaying them in three really seems like overkill.

8. The Evil Dead films — Honestly, we’re just afraid someone might injure himself running away for fear of losing an eye to Bruce Campbell’s chin.

7. The Big Lebowski — While the bowling scenes might look pretty cool in 3-D, consider the scene where the thug pees on The Dude’s rug. Or the scene where Walter bites off a guy’s ear. Some things we’re better off not seeing in 3-D.

6. Die Hard — We’re pretty sure we’re better off not being any closer to the bloodied, sweaty John McClane. We’re afraid that people with overactive imaginations might start to think they can smell him, which is certainly not something to be wished for.

5. Star Trek IV: The Voyage Home — With all due respect to the late, great James Doohan, nobody wants Scotty’s stomach any closer to them than absolutely necessary. Plus, in 3-D, it would probably be pretty obvious that the closeups of the whales were done with models.

4. E.T.: The Extra-Terrestrial — If you or someone you love is the sort who gets emotional at movies, consider how much more powerful the emotions would be if E.T. weren’t just reaching out to Elliott, but to you.

3. The Lord of the Rings trilogy — It would be far too likely that all the careful perspective shots director Peter Jackson used to establish the differences in characters’ size would be lost, or at least badly screwed up, by the 3-D conversion process.

2. The Muppet Movie — This is a near-perfect movie, with, at most, a few sour notes in an otherwise symphonic masterpiece. It works, as does anything involving Muppets, because it was meticulously filmed so the Muppets were utterly believable as characters. Converting it to 3-D would be bound to make the Muppets look more like they do in real life — that is to say, less like living beings.

1. The Star Wars saga — As though he hadn’t tinkered with the Star Wars films enough already, George Lucas has publicly stated his intentions to release 3-D versions of them. It wasn’t bad enough that he made Greedo shoot first; now he wants to mess around with the whole look and feel of the movies. If we haven’t made our case yet, we have but three more words for you: 3-D Jar Jar.

There are of course plenty more where those came from — feel free to add your own ideas in the comments. Interestingly, while compiling this list we noticed a few films that might actually be improved by 3-D conversion — look for a list of those next week.

Read more: http://www.wired.com/geekdad/2010/03/the-top-10-movies-that-should-never-ever-be-converted-to-3d/

Marcial Maciel Also Abused His Own Children

 
On Carmen Aristegui’s program, two young men revealed how the founder of the Legionaries of Christ invented an identity to deceive their mother.
Maciel’s sons say they always had a good image of him. (Photo: Archive)

A private detective, sometimes a CIA agent, named “Raúl Rivas,” who wished to start a family after being widowed: that is how the priest Marcial Maciel presented himself out of his habit. And that is how he sexually abused his children.

The “secret” identity of the founder of the Legionaries of Christ helped him get close to Blanca Estela Lara Gutiérrez, with whom he fathered two sons and adopted a third from a previous marriage of hers. Maciel registered the children as “González Lara.”

In an interview with journalist Carmen Aristegui, the Gutiérrez family spoke for the first time about life with “Raúl Rivas” and about the sexual abuse he committed against his sons.

Blanca Estela met Maciel at the end of the 1970s in Tijuana, Baja California. “I idolized him; I once told him, ‘you are my god.’” They never married; he traveled constantly to spend time with his family in Cuernavaca, Morelos (in central Mexico), and sometimes took them along.

“To get the passport, he put me through difficulties, because one day he was Rivas, another day González, but I always believed him; I never doubted him, because he was a good person.”

His sons Omar, José Raúl and Cristian thought the same. “We always saw him as the patriarch of the family; he told us not to smoke, not to have a girlfriend until we were 25… we never had a bad image of him.”

Not even the way strangers addressed him raised any doubts.

“When we would eat breakfast out,” Omar recounts, “some people would say ‘good morning, Father,’ and we had orders to step away. We never asked ourselves why they called him ‘Father’; we assumed it was because he had so many children.”

This Wednesday, Omar and Raúl recounted that their own stories are also among those cases.

“In 1997, I was out doing sports when I saw the magazine Contenido at a newsstand; I saw his photo (Marcial Maciel’s) and his name. I couldn’t believe it. He was in New York and I called him: ‘Why do they say you are this person?’ ‘Don’t believe them,’ he told me.”

From there, he ordered his son Omar to wait for a man named Antonio, who would take him to Cuernavaca, and to buy every copy of the magazine. And that is what he did.

In the “González Lara” household the subject was hardly discussed until two years later, when Raúl began to feel “strange — I doubted my sexuality.”

“First I told my mom; I told her my father had abused me and my brother Omar.”

The first abuse, he recounted, took place in Colombia when he was 7 years old.

“I was lying in bed with him, like any son would, without malice. He pulled down my underwear and tried to rape me. He realized what he was doing and didn’t force me. That moment was so shocking that to this day I remember what I had for breakfast.”

In Raúl’s case, it happened in Madrid. “We were in Madrid, in the same room. He pretended to be asleep, but he asked us to masturbate him. He took photos and kept them. He told us his uncle used to do the same to him, and that we should practice on him.”
More information at CNNMéxico.

Cyberwar Hype Intended to Destroy the Open Internet

(By: Wired.com)

The biggest threat to the open internet is not Chinese government hackers or greedy anti-net-neutrality ISPs, it’s Michael McConnell, the former director of national intelligence.
McConnell’s not dangerous because he knows anything about SQL injection hacks, but because he knows about social engineering. He’s the nice-seeming guy who’s willing and able to use fear-mongering to manipulate the federal bureaucracy for his own ends, while coming off like a straight shooter to those who are not in the know.
When he was head of the country’s national intelligence, he scared President Bush with visions of e-doom, prompting the president to sign a comprehensive secret order that unleashed tens of billions of dollars into the military’s black budget so they could start making firewalls and building malware into military equipment.
And now McConnell is back in civilian life as a vice president at the secretive defense contracting giant Booz Allen Hamilton. He’s out in front of Congress and the media, peddling the same Cybaremaggedon! gloom.
And now he says we need to re-engineer the internet.

We need to develop an early-warning system to monitor cyberspace, identify intrusions and locate the source of attacks with a trail of evidence that can support diplomatic, military and legal options — and we must be able to do this in milliseconds. More specifically, we need to re-engineer the Internet to make attribution, geo-location, intelligence analysis and impact assessment — who did it, from where, why and what was the result — more manageable. The technologies are already available from public and private sources and can be further developed if we have the will to build them into our systems and to work with our allies and trading partners so they will do the same.

Re-read that sentence. He’s talking about changing the internet to make everything anyone does on the net traceable and geo-located so the National Security Agency can pinpoint users and their computers for retaliation if the U.S. government doesn’t like what’s written in an e-mail, what search terms were used, what movies were downloaded. Or the tech could be useful if a computer got hijacked without your knowledge and used as part of a botnet.
The Washington Post gave McConnell free space to declare that we are losing some sort of cyberwar. He argues that the country needs to get a Cold War strategy, one complete with the online equivalent of ICBMs and Eisenhower-era, secret-codenamed projects. Google’s allegation that Chinese hackers infiltrated its Gmail servers and targeted Chinese dissidents proves the United States is “losing” the cyberwar, according to McConnell.
But that’s not warfare. That’s espionage.
McConnell’s op-ed then pointed to breathless stories in The Washington Post and The Wall Street Journal about thousands of malware infections from the well-known Zeus virus. He intimated that the nation’s citizens and corporations were under unstoppable attack by this so-called new breed of hacker malware.
Despite the masterful PR about the Zeus infections from security company NetWitness (run by former Bush administration cyberczar Amit Yoran), the world’s largest security companies, McAfee and Symantec, downplayed the story. But the message had already gotten out — the net was under attack.
Brian Krebs, one of the country’s most respected cybercrime journalists and occasional Threat Level contributor, described that report: “Sadly, this botnet documented by NetWitness is neither unusual nor new.”
Those enamored with the idea of “cyberwar” aren’t dissuaded by fact-checking.
They like to point to Estonia, where a number of the government’s websites were rendered temporarily inaccessible by angry Russian citizens. They used a crude, remediable denial-of-service attack to temporarily keep users from viewing government websites. (This attack is akin to sending an army of robots to board a bus, so regular riders can’t get on. A website fixes this the same way a bus company would: by identifying the difference between robots and humans and keeping the robots off.) Some like to say this was an act of cyberwar, but if that was cyberwar, it’s pretty clear the net will be just fine.
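The bus analogy maps onto a standard defense. Below is a toy sketch — not any real product’s implementation, and the thresholds are arbitrary — of a per-client rate limiter that lets human-paced traffic through while refusing a flood:

```python
# Toy "keep the robots off the bus" defense: refuse traffic from any
# client sending far more requests than a human plausibly would.
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client ip -> recent timestamps

    def allow(self, client_ip, now):
        q = self.hits[client_ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # looks like a flood; refuse the request
        q.append(now)
        return True

limiter = RateLimiter(max_requests=5, window_seconds=1.0)
# A human clicking a few links gets through; a bot hammering the
# server a hundred times a second does not.
human_ok = all(limiter.allow("humans.ip", t * 0.3) for t in range(4))
bot_blocked = not all(limiter.allow("bots.ip", t * 0.01) for t in range(100))
print(human_ok, bot_blocked)  # prints: True True
```

Real mitigations layer smarter signals (CAPTCHAs, traffic fingerprinting) on top of this idea, but the principle is the same — and it requires no re-engineering of the internet.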
In fact, none of these examples demonstrate the existence of a cyberwar, let alone that we are losing it.
But this battle isn’t about truth. It’s about power.
For years, McConnell has wanted the NSA (the ultra-secretive government spy agency responsible for listening in on other countries and for defending classified government computer systems) to take the lead in guarding all government and private networks. Not surprisingly, the contractor he works for has massive, secret contracts with the NSA in that very area. In fact, the company, owned by the shadowy Carlyle Group, is reported to pull in $5 billion a year in government contracts, many of them Top Secret.
Now the problem with developing cyberweapons — say, a virus or a massive botnet for denial-of-service attacks — is that you need to know where to point them. In the Cold War, it wasn’t that hard. In theory, you’d use radar to figure out where a nuclear attack was coming from and then you’d shoot your missiles in that general direction. But online, it’s extremely difficult to tell if an attack traced to a server in China was launched by someone Chinese, or whether it was actually a teenager in Iowa who used a proxy.
That’s why McConnell and others want to change the internet. The military needs targets.
But McConnell isn’t the only threat to the open internet.
Just last week the National Telecommunications and Information Administration — the portion of the Commerce Department that has long overseen the Internet Corporation for Assigned Names and Numbers — said it was time for it to revoke its hands-off-the-internet policy.
That’s according to a February 24 speech by Assistant Commerce Secretary Lawrence E. Strickling.

In fact, “leaving the Internet alone” has been the nation’s internet policy since the internet was first commercialized in the mid-1990s. The primary government imperative then was just to get out of the way to encourage its growth. And the policy set forth in the Telecommunications Act of 1996 was: “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”
This was the right policy for the United States in the early stages of the Internet, and the right message to send to the rest of the world. But that was then and this is now.

Now the NTIA needs to start being active to prevent cyberattacks, privacy intrusions and copyright violations, according to Strickling. And since NTIA serves as one of the top advisers to the president on the internet, that stance should not be underestimated.
Add to that a bill looming in the Senate that would hand the president emergency powers over the internet, and you can see where all this is headed. And let the past be our guide.
Following years of the NSA illegally spying on Americans’ e-mails and phone calls as part of a secret anti-terrorism project, Congress voted to legalize the program in July 2008. That vote allowed the NSA to legally turn America’s portion of the internet into a giant listening device for the nation’s intelligence services. The new law also gave legal immunity to the telecoms like AT&T that helped the government illegally spy on Americans’ e-mails and internet use. Then-Senator Barack Obama voted for this legislation, despite earlier campaign promises to oppose it.
As anyone slightly versed in the internet knows, the net has flourished because no government has control over it.
But there are creeping signs of danger.
Where can this lead? Well, consider England, where a new bill targeting online file sharing will outlaw open internet connections at cafes or at home, in a bid to track piracy.
To be sure, we could see more demands by the government for surveillance capabilities and backdoors in routers and operating systems. Already, the feds successfully turned the Communications Assistance for Law Enforcement Act (a law mandating surveillance capabilities in telephone switches) into a tool requiring ISPs to build similar government-specified eavesdropping capabilities into their networks.
The NSA dreams of “living in the network,” and that’s what McConnell is calling for in his editorial/advertisement for his company. The NSA lost any credibility it had when it secretly violated American law and its most central tenet: “We don’t spy on Americans.”
Unfortunately, the private sector is ignoring that tenet and is helping the NSA and contractors like Booz Allen Hamilton worm their way into the innards of the net. Security companies make no fuss, since a scared populace and fear-induced federal spending means big bucks in bloated contracts. Google is no help either, recently turning to the NSA for help with its rather routine infiltration by hackers.
Make no mistake, the military industrial complex now has its eye on the internet. Generals want to train crack squads of hackers and have wet dreams of cyberwarfare. Never shy of extending its power, the military industrial complex wants to turn the internet into yet another venue for an arms race.
And it’s waging a psychological warfare campaign on the American people to make that so. The military industrial complex is backed by sensationalism, and a gullible and pageview-hungry media. Notable examples include the New York Times’ John “We Need a New Internet” Markoff, 60 Minutes’ “Hackers Took Down Brazilian Power Grid,” and the WSJ’s Siobhan Gorman, who ominously warned, in a piece lacking any verifiable evidence, that Chinese and Russian hackers are already hiding inside the U.S. electrical grid.
Now the question is: Which of these events can be turned into a Gulf of Tonkin-like fakery that can create enough fear to let the military and the government turn the open internet into a controlled, surveillance-friendly net?
What do they dream of? Think of the internet turning into a tightly monitored AOL circa the early ’90s, run by CEO Big Brother and COO Dr. Strangelove.
That’s what McConnell has in mind, and shame on The Washington Post and the Senate Commerce, Science and Transportation Committee for giving McConnell venues to try to make that happen — without highlighting that McConnell has a serious financial stake in the outcome of this debate.
Of course, the net has security problems, and there are pirated movies and spam and botnets trying to steal credit card information.
But the online world mimics real life. Just as I know where online to buy a replica of a Coach handbag or watch a new release, I know exactly where I can go to find the same things in the city I live in. There are cons and rip-offs in the real world, just as there are online. I’m more likely to get ripped off by a restaurant server copying down the information on my credit card than I am having my card stolen and used for fraud while shopping online. “Top Secret” information is more likely to end up in the hands of a foreign government through an employee-turned-spy than from a hacker.
But cyber-anything is much scarier than the real world.
The NSA can help private companies and networks tighten up their security systems, as McConnell argues. In fact, they already do, and they should continue passing along advice and creating guides to locking down servers and releasing their own secure version of Linux. But companies like Google and AT&T have no business letting the NSA into their networks or giving the NSA information that they won’t share with the American people.
Security companies have long relied on creating fear in internet users by hyping the latest threat, whether that be Conficker or the latest PDF flaw. And now they are reaping billions of dollars in security contracts from the federal government for their PR efforts. But the industry and its most influential voices need to take a hard look at the consequences of that strategy and start talking truth to power’s claims that we are losing some non-existent cyberwar.
The internet is a hack that seems forever on the edge of falling apart. For awhile, spam looked like it was going to kill e-mail, the net’s first killer app. But smart filters have reduced the problem to a minor nuisance as anyone with a Gmail account can tell you. That’s how the internet survives. The apocalypse looks like it’s coming and it never does, but meanwhile, it becomes more and more useful to our everyday lives, spreading innovation, weird culture, news, commerce and healthy dissent.
But one thing it hasn’t spread is “cyberwar.” There is no cyberwar and we are not losing it. The only war going on is one for the soul of the internet. But if journalists, bloggers and the security industry continue to let self-interested exaggerators dominate our nation’s discourse about online security, we will lose that war — and the open internet will be its biggest casualty.
UPDATE: In an interesting coincidence, the Obama administration on Tuesday declassified portions of the secret Comprehensive National Cybersecurity Initiative it inherited from President Bush, including unclassified summaries of all 12 initiatives. Note the veiled references to deterrence.
Photo: Michael McConnell, then Director of National Intelligence, looks on in 2008 as President Bush announces the Protect America Act. White House file photo.

How Google’s Algorithm Rules the Web

(By: Wired.com)

Want to know how Google is about to change your life? Stop by the Ouagadougou conference room on a Thursday morning. It is here, at the Mountain View, California, headquarters of the world’s most powerful Internet company, that a room filled with three dozen engineers, product managers, and executives figures out how to make their search engine even smarter. This year, Google will introduce 550 or so improvements to its fabled algorithm, and each will be determined at a gathering just like this one. The decisions made at the weekly Search Quality Launch Meeting will wind up affecting the results you get when you use Google’s search engine to look for anything — “Samsung SF-755p printer,” “Ed Hardy MySpace layouts,” or maybe even “capital Burkina Faso,” which just happens to share its name with this conference room. Udi Manber, Google’s head of search since 2006, leads the proceedings. One by one, potential modifications are introduced, along with the results of months of testing in various countries and multiple languages. A screen displays side-by-side results of sample queries before and after the change. Following one example — a search for “guitar center wah-wah” — Manber cries out, “I did that search!”

You might think that after a solid decade of search-market dominance, Google could relax. After all, it holds a commanding 65 percent market share and is still the only company whose name is synonymous with the verb search. But just as Google isn’t ready to rest on its laurels, its competitors aren’t ready to concede defeat. For years, the Silicon Valley monolith has used its mysterious, seemingly omniscient algorithm to, as its mission statement puts it, “organize the world’s information.” But over the past five years, a slew of companies have challenged Google’s central premise: that a single search engine, through technological wizardry and constant refinement, can satisfy any possible query. Facebook launched an early attack with its implication that some people would rather get information from their friends than from an anonymous formula. Twitter’s ability to parse its constant stream of updates introduced the concept of real-time search, a way of tapping into the latest chatter and conversation as it unfolds. Yelp helps people find restaurants, dry cleaners, and babysitters by crowdsourcing the ratings. None of these upstarts individually presents much of a threat, but together they hint at a wide-open, messier future of search — one that isn’t dominated by a single engine but rather incorporates a grab bag of services.
Still, the biggest threat to Google can be found 850 miles to the north: Bing. Microsoft’s revamped and rebranded search engine — with a name that evokes discovery, a famous crooner, or Tony Soprano’s strip joint — launched last June to surprisingly upbeat reviews. (The Wall Street Journal called it “more inviting than Google.”) The new look, along with a $100 million ad campaign, helped boost Microsoft’s share of the US search market from 8 percent to about 11 — a number that will more than double once regulators approve a deal to make Bing the search provider for Yahoo.
Team Bing has been focusing on unique instances where Google’s algorithms don’t always satisfy. For example, while Google does a great job of searching the public Web, it doesn’t have real-time access to the byzantine and constantly changing array of flight schedules and fares. So Microsoft purchased Farecast — a Web site that tracks airline fares over time and uses the data to predict when ticket prices will rise or fall — and incorporated its findings into Bing’s results. Microsoft made similar acquisitions in the health, reference, and shopping sectors, areas where it felt Google’s algorithm fell short.
Even the Bingers confess that, when it comes to the simple task of taking a search term and returning relevant results, Google is still miles ahead. But they also think that if they can come up with a few areas where Bing excels, people will get used to tapping a different search engine for some kinds of queries. “The algorithm is extremely important in search, but it’s not the only thing,” says Brian MacDonald, Microsoft’s VP of core search. “You buy a car for reasons beyond just the engine.”
Google’s response can be summed up in four words: mike siwek lawyer mi.
Amit Singhal types that koan into his company’s search box. Singhal, a gentle man in his forties, is a Google Fellow, an honorific bestowed upon him four years ago to reward his rewrite of the search engine in 2001. He jabs the Enter key. In a time span best measured in a hummingbird’s wing-flaps, a page of links appears. The top result connects to a listing for an attorney named Michael Siwek in Grand Rapids, Michigan. It’s a fairly innocuous search — the kind that Google’s servers handle billions of times a day — but it is deceptively complicated. Type those same words into Bing, for instance, and the first result is a page about the NFL draft that includes safety Lawyer Milloy. Several pages into the results, there’s no direct referral to Siwek.
The comparison demonstrates the power, even intelligence, of Google’s algorithm, honed over countless iterations. It possesses the seemingly magical ability to interpret searchers’ requests — no matter how awkward or misspelled. Google refers to that ability as search quality, and for years the company has closely guarded the process by which it delivers such accurate results. But now I am sitting with Singhal in the search giant’s Building 43, where the core search team works, because Google has offered to give me an unprecedented look at just how it attains search quality. The subtext is clear: You may think the algorithm is little more than an engine, but wait until you get under the hood and see what this baby can really do.

Key Advances in Google Search
Google’s search algorithm is a work in progress — constantly tweaked and refined to return higher-quality results. Here are some of the most significant additions and adaptations since the dawn of PageRank. — Steven Levy

Backrub
[September 1997]

This search engine, which had run on Stanford’s servers for almost two years, is renamed Google. Its breakthrough innovation: ranking search results based on the number and quality of incoming links.

New algorithm
[August 2001]

The search algorithm is completely revamped to incorporate additional ranking criteria more easily.

Local connectivity analysis
[February 2003]

Google’s first patent is granted for this feature, which gives more weight to links from authoritative sites.

Fritz
[Summer 2003]

This initiative allows Google to update its index constantly, instead of in big batches.

Personalized results
[June 2005]

Users can choose to let Google mine their own search behavior to provide individualized results.

Bigdaddy
[December 2005]

Engine update allows for more-comprehensive Web crawling.

Universal search
[May 2007]

Building on Image Search, Google News, and Book Search, the new Universal Search allows users to get links to any medium on the same results page.

Real-Time Search
[December 2009]

Displays results from Twitter and blogs as they are published.

The story of Google’s algorithm begins with PageRank, the system invented in 1997 by cofounder Larry Page while he was a grad student at Stanford. Page’s now legendary insight was to rate pages based on the number and importance of links that pointed to them — to use the collective intelligence of the Web itself to determine which sites were most relevant. It was a simple and powerful concept, and — as Google quickly became the most successful search engine on the Web — Page and cofounder Sergey Brin credited PageRank as their company’s fundamental innovation.
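The link-voting idea can be sketched in a few lines of code. This is a toy power-iteration version of the PageRank concept described above, not Google’s actual implementation; the four-page graph and the 0.85 damping factor are illustrative assumptions.

```python
# Toy PageRank: a page's score is built from the scores of the pages
# linking to it, damped so rank is shared rather than copied outright.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # dangling page: its rank simply evaporates here
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Everyone links to "c" directly or indirectly, so it ends up on top,
# while "d", which nothing links to, sinks to the bottom.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
```

A real web graph needs sparse matrices, dangling-node handling, and convergence checks, but the collective-intelligence intuition is exactly this voting loop.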
But that wasn’t the whole story. “People hold on to PageRank because it’s recognizable,” Manber says. “But there were many other things that improved the relevancy.” These involve the exploitation of certain signals, contextual clues that help the search engine rank the millions of possible results to any query, ensuring that the most useful ones float to the top.
Web search is a multipart process. First, Google crawls the Web to collect the contents of every accessible site. This data is broken down into an index (organized by word, just like the index of a textbook), a way of finding any page based on its content. Every time a user types a query, the index is combed for relevant pages, returning a list that commonly numbers in the hundreds of thousands, or millions. The trickiest part, though, is the ranking process — determining which of those pages belong at the top of the list.
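The crawl-index-lookup pipeline in that paragraph can be miniaturized. The pages and URLs below are invented; this sketch covers only the inverted-index step and a bare conjunctive lookup, leaving out the ranking stage entirely.

```python
# Toy inverted index: break each crawled page into words, map each word
# to the set of pages containing it, and answer a query by intersecting
# the postings of its terms.
from collections import defaultdict

def build_index(pages):
    """pages: dict of url -> page text."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return the pages containing every query term (unranked)."""
    results = None
    for word in query.lower().split():
        postings = index.get(word, set())
        results = postings if results is None else results & postings
    return results or set()

pages = {
    "a.com": "guitar center wah wah pedal",
    "b.com": "wah wah sound effects history",
    "c.com": "guitar lessons online",
}
index = build_index(pages)
```

The hard part the article goes on to describe, ordering the hundreds of thousands of matching pages, happens after this lookup returns.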
That’s where the contextual signals come in. All search engines incorporate them, but none has added as many or made use of them as skillfully as Google has. PageRank itself is a signal, an attribute of a Web page (in this case, its importance relative to the rest of the Web) that can be used to help determine relevance. Some of the signals now seem obvious. Early on, Google’s algorithm gave special consideration to the title on a Web page — clearly an important signal for determining relevance. Another key technique exploited anchor text, the words that make up the actual hyperlink connecting one page to another. As a result, “when you did a search, the right page would come up, even if the page didn’t include the actual words you were searching for,” says Scott Hassan, an early Google architect who worked with Page and Brin at Stanford. “That was pretty cool.” Later signals included attributes like freshness (for certain queries, pages created more recently may be more valuable than older ones) and location (Google knows the rough geographic coordinates of searchers and favors local results). The search engine currently uses more than 200 signals to help rank its results.
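One common way to combine many ranking signals is a weighted sum, sketched below. The signal names echo the ones the paragraph mentions (PageRank, title, anchor text, freshness, location), but the weights and values are invented for illustration; nothing here reflects Google’s actual formula.

```python
# Toy signal combination: each candidate page carries numeric signals,
# and a weighted sum orders the results. Names and weights are made up.

SIGNAL_WEIGHTS = {
    "pagerank": 2.0,      # importance relative to the rest of the web
    "title_match": 1.5,   # query terms appearing in the page title
    "anchor_match": 1.2,  # query terms in anchor text of inbound links
    "freshness": 0.5,     # newer pages favored for some queries
    "proximity": 0.8,     # geographic closeness to the searcher
}

def score(page_signals):
    return sum(SIGNAL_WEIGHTS[name] * value
               for name, value in page_signals.items())

def rank(candidates):
    """candidates: dict of url -> {signal: value}; returns urls best-first."""
    return sorted(candidates, key=lambda url: score(candidates[url]),
                  reverse=True)

candidates = {
    "printer-review.example": {"pagerank": 0.9, "title_match": 0.1,
                               "anchor_match": 0.2, "freshness": 0.1,
                               "proximity": 0.0},
    "printer-store.example":  {"pagerank": 0.3, "title_match": 1.0,
                               "anchor_match": 0.8, "freshness": 0.6,
                               "proximity": 0.9},
}
```

With 200-plus signals, the interesting engineering is in learning the weights and per-query adjustments rather than in the sum itself.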
Google’s engineers have discovered that some of the most important signals can come from Google itself. PageRank has been celebrated as instituting a measure of populism into search engines: the democracy of millions of people deciding what to link to on the Web. But Singhal notes that the engineers in Building 43 are exploiting another democracy — the hundreds of millions who search on Google. The data people generate when they search — what results they click on, what words they replace in the query when they’re unsatisfied, how their queries match with their physical locations — turns out to be an invaluable resource in discovering new signals and improving the relevance of results. The most direct example of this process is what Google calls personalized search — an opt-in feature that uses someone’s search history and location as signals to determine what kind of results they’ll find useful. (This applies only to those who sign into Google before they search.) But more generally, Google has used its huge mass of collected data to bolster its algorithm with an amazingly deep knowledge base that helps interpret the complex intent of cryptic queries.
Take, for instance, the way Google’s engine learns which words are synonyms. “We discovered a nifty thing very early on,” Singhal says. “People change words in their queries. So someone would say, ‘pictures of dogs,’ and then they’d say, ‘pictures of puppies.’ So that told us that maybe ‘dogs’ and ‘puppies’ were interchangeable. We also learned that when you boil water, it’s hot water. We were relearning semantics from humans, and that was a great advance.”
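Singhal’s “dogs”/“puppies” observation suggests a simple mining rule: when two consecutive queries from the same user differ in exactly one word, the swapped pair is a synonym candidate. A toy version of that rule, with invented session data:

```python
# Toy synonym mining from query refinements: count word pairs that users
# swap between successive queries in a session.
from collections import Counter

def synonym_candidates(sessions):
    """sessions: list of lists of successive queries from one user."""
    pairs = Counter()
    for session in sessions:
        for q1, q2 in zip(session, session[1:]):
            w1, w2 = q1.lower().split(), q2.lower().split()
            if len(w1) == len(w2):
                diff = [(a, b) for a, b in zip(w1, w2) if a != b]
                if len(diff) == 1:  # exactly one word was changed
                    pairs[frozenset(diff[0])] += 1
    return pairs

sessions = [
    ["pictures of dogs", "pictures of puppies"],
    ["photos of dogs", "photos of puppies"],
    ["cheap flights", "cheap tickets"],
]
cands = synonym_candidates(sessions)
```

At Google’s scale the counts come from billions of sessions, and frequent pairs become evidence that the two words are interchangeable in some contexts.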
But there were obstacles. Google’s synonym system understood that a dog was similar to a puppy and that boiling water was hot. But it also concluded that a hot dog was the same as a boiling puppy. The problem was fixed in late 2002 by a breakthrough based on philosopher Ludwig Wittgenstein’s theories about how words are defined by context. As Google crawled and archived billions of documents and Web pages, it analyzed what words were close to each other. “Hot dog” would be found in searches that also contained “bread” and “mustard” and “baseball games” — not poached pooches. That helped the algorithm understand what “hot dog” — and millions of other terms — meant. “Today, if you type ‘Gandhi bio,’ we know that bio means biography,” Singhal says. “And if you type ‘bio warfare,’ it means biological.”
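The context fix can be sketched as co-occurrence counting: represent a term by the words found near it across documents, then compare terms by the cosine similarity of those counts. The four-line “corpus” is invented, and real systems use far richer statistics, but the principle is the same one that separates hot dogs from boiling puppies.

```python
# Toy distributional similarity: a term's meaning is approximated by the
# words that co-occur with it, compared via cosine similarity.
import math
from collections import Counter

def context_vector(term, documents):
    """Count the words co-occurring with `term` in the same document."""
    vec = Counter()
    for doc in documents:
        words = doc.lower().split()
        if term in words:
            vec.update(w for w in words if w != term)
    return vec

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "hotdog mustard bread baseball",
    "hotdog ketchup bread stadium",
    "puppy leash bark fetch",
    "puppy bark collar fetch",
]
# "hotdog" keeps company with bread and mustard, "puppy" with bark and
# fetch, so the two terms come out dissimilar despite the shared "dog".
hot = context_vector("hotdog", docs)
pup = context_vector("puppy", docs)
```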
Throughout its history, Google has devised ways of adding more signals, all without disrupting its users’ core experience. Every couple of years there’s a major change in the system — sort of equivalent to a new version of Windows — that’s a big deal in Mountain View but not discussed publicly. “Our job is to basically change the engines on a plane that is flying at 1,000 kilometers an hour, 30,000 feet above Earth,” Singhal says. In 2001, to accommodate the rapid growth of the Web, Singhal essentially revised Page and Brin’s original algorithm completely, enabling the system to incorporate new signals quickly. (One of the first signals on the new system distinguished between commercial and noncommercial pages, providing better results for searchers who want to shop.) That same year, an engineer named Krishna Bharat, figuring that links from recognized authorities should carry more weight, devised a powerful signal that confers extra credibility to references from experts’ sites. (It would become Google’s first patent.) The most recent major change, codenamed Caffeine, revamped the entire indexing system to make it even easier for engineers to add signals.
Google is famously creative at encouraging these breakthroughs; every year, it holds an internal demo fair called CSI — Crazy Search Ideas — in an attempt to spark offbeat but productive approaches. But for the most part, the improvement process is a relentless slog, grinding through bad results to determine what isn’t working. One unsuccessful search became a legend: Sometime in 2001, Singhal learned of poor results when people typed the name “audrey fino” into the search box. Google kept returning Italian sites praising Audrey Hepburn. (Fino means fine in Italian.) “We realized that this is actually a person’s name,” Singhal says. “But we didn’t have the smarts in the system.”
The Audrey Fino failure led Singhal on a multiyear quest to improve the way the system deals with names — which account for 8 percent of all searches. To crack it, he had to master the black art of “bi-gram breakage” — that is, separating multiple words into discrete units. For instance, “new york” represents two words that go together (a bi-gram). But so would the three words in “new york times,” which clearly indicate a different kind of search. And everything changes when the query is “new york times square.” Humans can make these distinctions instantly, but Google does not have a Brazil-like back room with hundreds of thousands of cubicle jockeys. It relies on algorithms.
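One way to sketch bi-gram breakage is dictionary-driven segmentation: score every way to split the query into known phrases and keep the best split. The phrase scores below are invented stand-ins for the statistics Google would learn from its query and document data.

```python
# Toy query segmentation via dynamic programming: best[i] holds the best
# scoring segmentation of the first i words. Phrase scores are made up.

PHRASE_SCORE = {
    "new": 1.0, "york": 1.0, "times": 1.0, "square": 1.0,
    "new york": 5.0,
    "new york times": 8.0,
    "times square": 6.0,
}

def segment(query):
    words = query.lower().split()
    n = len(words)
    best = [(0.0, [])] + [None] * n
    for i in range(1, n + 1):
        for j in range(i):
            phrase = " ".join(words[j:i])
            if phrase in PHRASE_SCORE and best[j] is not None:
                cand = (best[j][0] + PHRASE_SCORE[phrase],
                        best[j][1] + [phrase])
                if best[i] is None or cand[0] > best[i][0]:
                    best[i] = cand
    return best[n][1] if best[n] else words
```

With these scores, “new york times” stays one unit, but adding “square” flips the split to “new york” plus “times square”, exactly the distinction the paragraph describes.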

Voila — when a hot dog is not a boiling puppy.
Photo: Mauricio Alejo
The Mike Siwek query illustrates how Google accomplishes this. When Singhal types in a command to expose a layer of code underneath each search result, it’s clear which signals determine the selection of the top links: a bi-gram connection to figure it’s a name; a synonym; a geographic location. “Deconstruct this query from an engineer’s point of view,” Singhal explains. “We say, ‘Aha! We can break this here!’ We figure that lawyer is not a last name and Siwek is not a middle name. And by the way, lawyer is not a town in Michigan. A lawyer is an attorney.”

This is the hard-won realization from inside the Google search engine, culled from the data generated by billions of searches: a rock is a rock. It’s also a stone, and it could be a boulder. Spell it “rokc” and it’s still a rock. But put “little” in front of it and it’s the capital of Arkansas. Which is not an ark. Unless Noah is around. “The holy grail of search is to understand what the user wants,” Singhal says. “Then you are not matching words; you are actually trying to match meaning.”
And Google keeps improving. Recently, search engineer Maureen Heymans discovered a problem with “Cindy Louise Greenslade.” The algorithm figured out that it should look for a person — in this case a psychologist in Garden Grove, California — but it failed to place Greenslade’s homepage in the top 10 results. Heymans found that, in essence, Google had downgraded the relevance of her homepage because Greenslade used only her middle initial, not her full middle name as in the query. “We needed to be smarter than that,” Heymans says. So she added a signal that looks for middle initials. Now Greenslade’s homepage is the fifth result.
At any moment, dozens of these changes are going through a well-oiled testing process. Google employs hundreds of people around the world to sit at their home computers and judge results for various queries, marking whether the tweaks return better or worse results than before. But Google also has a larger army of testers — its billions of users, virtually all of whom are unwittingly participating in its constant quality experiments. Every time engineers want to test a tweak, they run the new algorithm on a tiny percentage of random users, letting the rest of the site’s searchers serve as a massive control group. There are so many changes to measure that Google has discarded the traditional scientific dictum that only one experiment should be conducted at a time. “On most Google queries, you’re actually in multiple control or experimental groups simultaneously,” says search quality engineer Patrick Riley. Then he corrects himself. “Essentially,” he says, “all the queries are involved in some test.” In other words, just about every time you search on Google, you’re a lab rat.
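The overlapping-experiments setup Riley describes can be sketched with hash-based bucketing: each user (or query) is hashed independently per experiment, so one search can sit in several treatment and control groups at once. The experiment names and the 1 percent treatment share below are illustrative assumptions.

```python
# Toy overlapping experiments: deterministic, per-experiment hashing
# assigns each user to a treatment or control arm independently.
import hashlib

EXPERIMENTS = ["synonym_tweak", "freshness_boost", "middle_initial_signal"]

def in_treatment(user_id, experiment, fraction=0.01):
    """Place ~`fraction` of users in the treatment arm, deterministically."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000
    return bucket < fraction * 10_000

def groups(user_id):
    return {exp: ("treatment" if in_treatment(user_id, exp) else "control")
            for exp in EXPERIMENTS}
```

Because each experiment salts the hash with its own name, assignments are independent across experiments, which is what lets every query serve in many tests at once.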
This flexibility — the ability to add signals, tweak the underlying code, and instantly test the results — is why Googlers say they can withstand any competition from Bing or Twitter or Facebook. Indeed, in the last six months Google has made more than 200 improvements, some of which seem to mimic — even outdo — the offerings of its competitors. (Google says this is just a coincidence and points out that it has been adding features routinely for years.) One is real-time search, eagerly awaited since Page opined some months ago that Google should be scanning the entire Web every second. When someone queries a subject of current interest, among the 10 blue links Google now puts a “latest results” box: a scrolling set of just-produced posts from news sources, blogs, or tweets. Once again, Google uses signals to ensure that only the most relevant tweets find their way into the real-time stream. “We look at what’s retweeted, how many people follow the person, and whether the tweet is organic or a bot,” Singhal says. “We know how to do this, because we’ve been doing it for a decade.”
Along with real-time search, Google has introduced other new features, including a service called Goggles, which treats images captured by users’ phones as search queries. It’s all part of the company’s relentless march toward search becoming an always-on, ubiquitous presence. With a camera and voice recognition, a smartphone becomes eyes and ears. If the right signals are found, anything can be query fodder.
Google’s massive computing power and bandwidth give the company an undeniable edge. Some observers say it’s an advantage that essentially prohibits startups from trying to compete. But Manber says it’s not infrastructure alone that makes Google the leader: “The very, very, very key ingredient in all of this is that we hired the right people.”
By all accounts, Qi Lu qualifies as one of those people. “I have the highest regard for him,” says Manber, who worked with the 48-year-old computer scientist at Yahoo. But Lu joined Microsoft early last year to lead the Bing team. When asked about his mission, Lu, a diminutive man dressed in jeans and a Bing T-shirt, pauses, then softly recites a measured reply: “It’s extremely important to keep in mind that this is a long-term journey.” He has the same I’m-not-going-away look in his eye that Uma Thurman has in Kill Bill.
Indeed, the company that won last decade’s browser war has a best-served-cold approach to search, an eerie certainty that at some point, people are going to want more than what Google’s algorithm can provide. “If we don’t have a paradigm shift, it’s going to be very, very difficult to compete with the current winners,” says Harry Shum, Microsoft’s head of core search development. “But our view is that there will be a paradigm shift.”
Still, even if there is such a shift, Google’s algorithms will probably be able to incorporate that, too. That’s why Google is such a fearsome competitor; it has built a machine nimble enough to absorb almost any approach that threatens it — all while returning high-quality results that its competitors can’t match. Anyone can come up with a new way to buy plane tickets. But only Google knows how to find Mike Siwek.
Senior writer Steven Levy (steven_levy@wired.com) wrote about Twitter in issue 17.11.

‘Temo’ insists Mexico can win a World Cup – Futbol – mediotiempo.com

Cuauhtémoc Blanco knows no limits, and though he might pass for a madman because every time he goes to a World Cup he believes Mexico could reach the ultimate glory, his dreams are grounded in the results El Tri has achieved against some of the game’s powers. That is why, he insists, in South Africa they will go for everything.

With these words, “Temo” backs up what national coach Javier Aguirre said two days ago: that at South Africa 2010 the Mexican national team will try to earn a historic finish.

“I’m crazy. Whenever I’ve gone to the World Cups, I try to win the world championship, because we have the ability to beat anyone. We’ve already shown it: against Brazil or Italy we’ve been on the verge of winning, though our mistakes have cost us. But we’re going for everything, with the hope of doing things well,” he said.

He said El Tri can beat anyone. (Audio: Héctor Cruz)
Still, Blanco is not getting overconfident. In his view, no player has a guaranteed place in the squad headed to the World Cup, so he will keep trying to win over “El Vasco” day by day with his work, and he will not be satisfied until he is on the plane, where no one can pull him off the flight to South Africa.

“Nobody is safe. Only when we have the little ticket in hand, we’re on the plane, and they don’t take us off are we at the World Cup,” he said ahead of Wednesday’s warm-up match between El Tri and Bolivia.

With his trademark good humor, Mexico’s number 10 attended the press conference at the national team’s hotel, where he played down his veteran status and even joked that in his generation “everyone has already died,” though as long as he is able, he will try to guide the young players on El Tri.

“We’re all just passing through. Time goes by and new young players arrive; you see new faces, and that’s the beauty of soccer, that young players come in hungry to win. Many from my generation are retired, many are no longer with the national team, and my goal is to support the young players; that’s what I’ve done ever since I became a starter at América,” he said.

And finally, regarding the match against the South Americans, who incidentally will bring only 18 players, many of them youngsters, Blanco said the team will have to get everything it can out of the game on the road to the World Cup.

“We arrive a bit tired, but we have to work. These are matches that were already scheduled, so we have to keep working, do things right, and put on a good game tomorrow for the fans in San Francisco, where there are many Mexicans. I’m grateful to the fans for the great affection they have for me; these are people who struggled to reach this country, and I do it for them,” he concluded.

[MEDIOTIEMPO]