CAMBRIDGE, Mass. -- For years, The Amazing Randi sat next to Johnny Carson performing magic tricks on The Tonight Show. But last week, James Randi was holding court for a very different audience -- an invitation-only collection of three dozen computer security experts at MIT's famed Stata Center near Boston. There, in what might be called the hall of fame for hacking, Randi couldn't stop himself from pulling gags. But when he wasn't bending spoons, making things disappear, or stroking his foot-long white beard and wizened chin, Randi revealed secrets about the art of deception.
"Many times," he confessed, "Magicians don't really know why their tricks work. They just work."
Put another way: Charlatans don't bother creating detailed schemes for deception. They just have a feel for what fools people.
On the other hand, the scientists who are working hard to make computers, airports, cities, and everything else safe for us often aren't endowed with this same feeling. They study problems, write papers, review their code, and write sophisticated cryptographic schemes. Then, with heavy hearts, they walk through rows of cubicles at American companies and see Post-It notes tacked onto computer screens with passwords.
At the first-ever "Security and Human Behavior" conference last week, many of the world's top minds in computer science gathered to address this paradox. Their self-assessment was refreshingly honest and direct.
"In a field that has been marked by great human achievement during the past several decades, our branch of it can only be called a failure," conceded Matt Blaze, a computer science professor at the University of Pennsylvania, eliciting nervous laughter.
He wasn't really kidding. Despite remarkable advances in technology, most consumers are using the same clumsy security procedures they have used for decades. And many feel even less secure.
In the meantime, the charlatans have continued to hone their deception skills. And they've enjoyed remarkable success at mucking things up. A trivial trick such as phishing e-mails -- look-alike notes that appear to come from banks and are designed to steal personal information -- has wreaked havoc with companies and consumers alike for years.
That's why this ad hoc geeky group invited a magician, an architect, a photographer, a philosopher, several economists, a few psychologists and about a dozen other experts in behavioral studies to come give them an education in how people think. This high-powered collection of computer scientists humbly arrived at MIT asking for help, in an effort to get a better feel for the people they are trying to protect.
Famed cryptography experts Bruce Schneier, now of British Telecom, and Ross Anderson, a U.K. professor, assembled the small group -- including the magician -- as a way of getting at new answers to old problems.
"Many real attacks on information systems exploit psychology more than technology," Schneier says. "Security design is by nature psychological, yet many systems ignore this."
MIT's Stata Center, designed by Frank Gehry, has impossible towers and absurdly bright colors, and wouldn't look out of place in a Dr. Seuss book. Its hallways are full of plaques memorializing the greatest pranks ever pulled by MIT students -- the security squad car that somehow made it onto the top of the campus rotunda, for example. The car actually sits high up on a ledge in the middle of the building's center hall. (Forget the rotunda stunt; how did it get there?)
This hall of pranks seemed the perfect place to discuss the failures of technology -- and technologists -- in the modern age.
Bad guys have better people skills
Criminals usually don't bother learning all the ins and outs of the technology they exploit -- they simply learn enough to be dangerous. But they spend endless hours understanding the people they plan to fool. Hackers long ago learned a shortcut, what they call social engineering: Why spend years trying to hack into a bank when you can just ask an account holder to hand over their name and password?
The technologists, on the other hand, tend to fight this battle with one hand tied behind their back. They generally spend most of their time studying technology, learning all its nooks and crannies from the ground up. They write careful research papers following the strict rules of scientific method. They must spend endless hours defending their findings against all comers, and they can't hurt anyone while conducting studies. They know the technology well, but they have little time left to study how people actually work.
But all that is starting to change, say some in this group of security researchers turned amateur psychologists. Several years ago, a quiet alliance was formed between behavioral economists -- who study why people make irrational choices -- and security professionals. Scientists and economists began writing papers together and sharing research costs. With last week's MIT meeting, the computer folks cast a much wider net in their search for answers.
Security, Schneier told the gathering, is "both a feeling and a reality," and both are important. Local police, for example, fight both crime and the perception of crime. Failure in either area can have serious consequences. Regardless of actual crime data, crime fighting is useless if residents of a town don't feel safe.
Pedophilia and the "License to Hug"
To that end, researcher Jean Camp of Indiana University points out that people can easily assess risk when there are physical clues. People have a natural aversion to dark, empty parking lots, for example, but there are no equivalent clues online. That tends to keep older users from feeling safe while surfing. Camp studies this trust problem with residents of a nearby nursing home. She has created a large glowing box that sits next to the computer screen and turns green when fellow residents recommend a site as safe, and red when it's risky. Seniors find the large, obvious signal reassuring, she said, and they are more likely to take advantage of the Internet to stay in touch with family.
But the battle to make people feel secure can sometimes feel like a losing cause. Frank Furedi, a noted British author on the subjects of risk and fear, described what he calls a growing "hysteria" over pedophilia in the U.K. By next year, he said, one-third of all British citizens will have been subjected to police background checks. As a result, some parents won't let their children play with the kids of parents who haven't been checked. He describes the problem in a new pamphlet, "License to Hug."
"Now we're not worried about pedophiles, we're worried about people who haven't been police checked," he said. "In response to an insecurity, we've created more sources of insecurity."
Often, Furedi noted, it's much easier for governments to create the appearance of security than the reality of it.
Among the fresh ideas discussed at MIT: computers might be too friendly. Our natural risk sensors do a good job of telling us when something physically dangerous is nearby (like a hungry bear), but a terrible job of warning us about cyber-danger. Meanwhile, software makers have gone to great pains to make computers user-friendly. Perhaps that's a mistake, said Nicholas Humphrey of the London School of Economics. Occasionally, some healthy fear might help online, Humphrey said. Forget small padlocks on e-commerce sites -- how about a large shark abruptly appearing on the screen to stoke primal fears?
Security fire drills called for
Privacy expert Alessandro Acquisti of Carnegie Mellon University brought a similar concept from the field of learning science -- the idea of the "teachable moment." Employees rarely read and digest security memos with much zest, he notes. But giving them the equivalent of a security fire drill can immediately change behavior.
Imagine, for example, if once a month or so your company's IT department sent a legitimate-looking e-mail with a faux virus attached. Employees who "fall" for the e-mail would get a slightly embarrassing reminder not to click on unexpected attachments. In more critical settings, failure in such random tests could affect an employee's annual review or raise. In a controlled test, Acquisti said, computer users were far more likely to learn safe computing behavior from this kind of random testing than from traditional memos and warnings.
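The mechanics of such a drill are simple enough to sketch in a few lines. The following is a hypothetical illustration, not any system described at the conference; the roster, sample size, and function names are all invented for the example. Each month a random slice of the roster receives the faux e-mail, and the results are tallied so the "teachable moment" can be delivered to those who clicked:

```python
import random

def run_phishing_drill(roster, sample_fraction=0.25, seed=None):
    """Pick a random subset of employees to receive this month's
    faux-phishing e-mail. Returns the chosen sample."""
    rng = random.Random(seed)
    k = max(1, int(len(roster) * sample_fraction))
    return rng.sample(roster, k)

def record_results(sample, clicked):
    """Summarize a drill: who clicked the faux attachment (and should
    get the gentle reminder), plus the overall failure rate."""
    failures = [name for name in sample if name in clicked]
    rate = len(failures) / len(sample) if sample else 0.0
    return {"failures": failures, "failure_rate": rate}

# Example: a 20-person roster, drilled at 25 percent.
roster = [f"employee_{i}" for i in range(20)]
sample = run_phishing_drill(roster, sample_fraction=0.25, seed=42)
results = record_results(sample, clicked={"employee_3", "employee_7"})
```

The randomness is the point of the design: because no employee knows when a drill is coming, the safe behavior has to become habitual rather than something switched on for a scheduled audit.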
Not so easy to 'Fix the World'
After two days of 35 intense presentations, each followed by raucous question-and-answer sessions, things got strikingly quiet during the last panel, called "How Do We Fix the World." The topic of security ranges from keeping the family digital photos safe to keeping terrorists off airplanes. It also has no end-point. Terrorism researchers are plagued by the troubling question: "When will we know we've won the war on terror?" Security researchers face the same rhetorical problem.
But Acquisti said he is hopeful this first-ever meeting will spur more interdisciplinary discussions. There was even talk of a "dating service" for researchers from different areas to help them find each other ("I'm an economist studying the cost of antivirus software looking for a psychologist who is an expert in primal fear of predators"). Acquisti was even hopeful a new field of study might be born. He struggled a bit to name it, however.
"Hmm…Perhaps the behavioral psychology of privacy and security," he said.
Or perhaps, they could just call it magic.