Phishing with Rachna Dhamija

The human factor

Interview Federico Biancuzzi interviews Rachna Dhamija, co-author of the paper "Why Phishing Works" and creator of Dynamic Security Skins. They discuss the human factor, how easy it is to recreate a credible browser window using nothing but images, the new anti-phishing features planned for upcoming versions of popular browsers, and the power of letting users personalise their interfaces.

Could you introduce yourself?

I'm a postdoctoral fellow at the Centre for Research on Computation and Society at Harvard University. I teach a computer science course on Privacy and Security Usability, which tackles one of the most challenging problems in computer security: the human factor. Before that I was a PhD student at UC Berkeley, and before that I worked on electronic commerce privacy and security at CyberCash.

Recently you co-authored an experiment to understand how and why phishing works. What did you learn?

We wanted to understand why phishing attacks work. We conducted a usability study where we showed 22 participants 20 websites and asked them to determine which ones were fraudulent, and why. We found that the best phishing website fooled 90 per cent of participants.

We discovered that existing security cues are ineffective, for three reasons:

  1. The indicators are ignored (23 per cent of participants in our study did not look at the address bar, status bar, or any SSL indicators).
  2. The indicators are misunderstood. For example, one regular Firefox user told me that he thought the yellow background in the address bar was an aesthetic design choice of the website designer (he didn't realise that it was a security signal presented by the browser). Other users thought the SSL lock icon indicated whether a website could set cookies.
  3. The security indicators are trivial to spoof. Many users can't distinguish between an actual SSL indicator in the browser frame and a spoofed image of that indicator appearing in the content of a webpage. For example, if you display a popup window with no address bar, then add an image of an address bar at the top (with the correct URL and SSL indicators) and an image of the status bar at the bottom (with all the right indicators), most users will think it is legitimate - as in the sketch after this list. This attack fooled more than 80 per cent of participants.
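
A minimal sketch of that composition, written here as browser-side TypeScript with hypothetical placeholder assets (this is an illustration of the technique, not code from the study):

    // Open a bare popup and paint fake browser "chrome" into its content
    // area with ordinary images. The image files and the cloned page are
    // placeholders invented for this sketch.
    const win = window.open("", "_blank", "width=780,height=520,location=no");
    if (win) {
      win.document.write(`
        <img src="fake-address-bar.png" style="display:block;width:100%">
        <iframe src="cloned-login.html" style="width:100%;height:85%;border:0"></iframe>
        <img src="fake-status-bar-with-lock.png" style="display:block;width:100%">
      `);
    }

Because the fake address bar and padlock are just pixels in the page content, every visual cue the user is trained to check can be reproduced exactly.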

We also found that popup warnings are ineffective. When presented with a browser warning of a self-signed certificate, 15 out of 22 participants proceeded to click OK (to accept the certificate) without reading the warning. Finally, participants were vulnerable across the board - in our study, neither education, age, sex, previous experience, nor hours of computer use showed a statistically significant correlation with vulnerability to phishing.

How does the detection rate of your test compare to that of real users?

Our participant population was highly educated, consisting of staff and students at a university. The minimum level of education was a bachelor's degree. Our population was also more knowledgeable than average, because they were told that spoofed websites were in the test set. They were also more motivated than the average user would be, because their task in the study was to identify websites as legitimate or not. For these reasons, we would expect that the spoof detection rate in our study would be higher than it would be in real life. However, any spoofs that fooled our participants would also be likely to fool real users.

Did these spoofing methods rely on OS- or browser-dependent bugs (or "features")?

In our study, we didn't take advantage of any of the numerous bugs or vulnerabilities that allow spoofing in browsers (such as the IDN spoofing vulnerability). We only used very simple attacks that are easy for attackers to craft today, even if we assume that users are running secure, up-to-date and fully patched browsers. If we had taken advantage of such bugs and vulnerabilities, we expect the spoofing rate would have been even higher.
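
For reference, the IDN vulnerability mentioned above enables homograph attacks: domains built from lookalike Unicode characters that render identically to a trusted name. A small illustration using Node's built-in url module (the domain is the classic textbook example, not one from the study):

    // The first "a" in the spoofed name is Cyrillic U+0430, which most
    // fonts render identically to Latin "a".
    import { domainToASCII } from "node:url";

    const spoofed = "p\u0430ypal.com";
    console.log(spoofed === "paypal.com");  // false - different code points
    console.log(domainToASCII(spoofed));    // prints the punycode ("xn--...") form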

How important are default settings for complex topics such as crypto configuration?

Choosing the appropriate default settings is a critical aspect of privacy and security design, whether it is for cookie policies or crypto configuration. Most users do not change the default settings. In our usability study, we used the default browser settings in Firefox, and we took advantage of some of those defaults in crafting attacks. For example, Firefox forces all popup windows to display only a small portion of the chrome (the status bar) by default. This allowed us to insert a false address bar and false status bar with security indicators, and the majority of participants in our study were fooled into thinking that this was a legitimate webpage, rather than a fraudulent pop-up. The next version of Firefox may force the address bar to also be displayed by default, which should help more users notice this type of spoofing attack.
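
As an illustration of the default in question, a page only had to request a popup with the location bar suppressed; under the defaults described above, Firefox honoured the request (the URL here is hypothetical):

    // Request a popup with minimal chrome. Under the old defaults,
    // "location=no" was honoured, so no genuine address bar appeared and
    // the attacker was free to paint a fake one into the page content.
    window.open("https://example.com/login", "_blank",
                "width=600,height=400,location=no,menubar=no,toolbar=no");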

We also tested the condition where a browser encounters a self-signed certificate. Currently the default setting in most browsers is to pop up a modal warning dialog with some options. We found that most users accepted the default option ("Accept this certificate for this session") and proceeded to visit the website. IE7 will introduce some new warning notice designs to address this problem. They plan to block known phishing pages by default (for example, by showing an inline error page instead; this page displays a warning and allows the user to click a link to proceed). For suspicious sites or sites with certificate errors, they will colour the address bar yellow and drop down a warning from the address bar. Only time (and usability studies!) will tell whether users learn to ignore these warnings just as they have pop-up warnings.

Are you currently working on other tests about how phishing works?

Currently, I'm working on other techniques to prevent phishing in conjunction with security skins. For example, in a security usability class I taught this semester at Harvard, we conducted a usability study showing that simply presenting a user's history information (for example, "you've been to this website many times" or "you've never submitted this form before") can significantly increase the user's ability to detect a spoofed website and reduce their vulnerability to phishing attacks. Another area I've been investigating is techniques that help users recover from errors and identify whether errors are real or simulated. Many attacks rely on users not being able to make this distinction.
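
A rough sketch of the history-cue idea follows. The storage scheme and wording are invented for illustration, and page-level script is shown only for brevity: a real deployment would need a trusted vantage point such as a browser extension, since a phishing page controls its own scripts.

    // Count visits per site and warn on a first-ever form submission.
    const origin = window.location.origin;
    const counts = JSON.parse(localStorage.getItem("visitCounts") ?? "{}");
    counts[origin] = (counts[origin] ?? 0) + 1;
    localStorage.setItem("visitCounts", JSON.stringify(counts));

    document.addEventListener("submit", () => {
      if (counts[origin] === 1) {
        // The cue the study tested: surface the absence of history.
        alert("You have never submitted a form to this site before.");
      }
    });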

You presented the project called Dynamic Security Skins (DSS) nearly one year ago. Do you think the main idea behind it is still valid after your tests?

I think our usability study shows how easy it is to spoof security indicators, and how hard it is for users to distinguish legitimate security indicators from spoofed ones. Dynamic Security Skins is a proposal that starts from the assumption that any static security indicator can easily be copied by an attacker. Instead, we propose that users create their own customised security indicators that are hard for an attacker to predict. Our usability study also shows that indicators placed in the periphery or outside of the user's focus of attention (such as the SSL lock icon in the status bar) may be ignored entirely by some users. DSS places the security indicator (a secret image) at the point of password entry, so the user cannot ignore it.
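
As a toy illustration of the placement idea only (in the actual DSS proposal the secret image is generated and rendered by a trusted browser component, so a spoofed page cannot reproduce it):

    // Attach the user's secret image directly to the password field, so
    // the indicator sits exactly where the user's attention is. The
    // function name and markup are invented for this sketch.
    function renderPasswordEntry(secretImageUrl: string): HTMLDivElement {
      const box = document.createElement("div");
      const secret = document.createElement("img");
      secret.src = secretImageUrl;          // known only to the user
      secret.alt = "personal security image";
      const pw = document.createElement("input");
      pw.type = "password";
      box.append(secret, pw);
      return box;
    }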
