February 20, 2018

The Role of Worst Practices in Insecurity

These days, security advisors talk a lot about Best Practices: established procedures that are generally held to yield good results. Deploy Best Practices in your organization, the advisors say, and your security will improve. That’s true, as far as it goes, but often we can make more progress by working to eliminate Worst Practices.

A Worst Practice is something that most of us do, even though we know it’s a bad idea. One current Worst Practice is the way we use passwords to authenticate ourselves to web sites. Sites’ practices drive users to re-use the same password across many sites, and to expose themselves to phishing and keylogging attacks. We know we shouldn’t be doing this, but we keep doing it anyway.

The key to addressing Worst Practices is to recognize that they persist for a reason. If ignorance is the cause, it’s not a Worst Practice — remember that Worst Practices, by definition, are widely known to be bad. There’s typically some kind of collective action problem that sustains a Worst Practice, some kind of Gordian Knot that must be cut before we can eliminate the practice.

This is clearly true for passwords. If you’re building a new web service, and you’re deciding how to authenticate your users, passwords are the easy and obvious choice. Users understand them; they don’t require coordination with any other company; and there’s probably a password-handling module that plugs right into your development environment. Better authentication will be a “maybe someday” feature. Developers make this perfectly rational choice every day — and so we’re stuck with a Worst Practice.

Solutions to this and other Worst Practices will require leadership by big companies. Google, Microsoft, Facebook and others will have to step up and work together to put better practices in place. In the user authentication space we’re seeing some movement with new technologies such as OpenID, which reduce the number of places users must log into, thereby easing the move to better practices. But on this and other Worst Practices, we have a long way to go.

Which Worst Practices annoy you? And what can be done to address them? Speak up in the comments.


  1. On the one hand, inclusion of widgets and tools from providers around the web is a good cheap way of adding useful functions to pages. On the other, those of us who try to avoid having our browsers hijacked often find that some page won’t work properly because we’re not allowing scripts from a third-party widget/tool we’ve never heard of before. So I just run up the list clicking “temporarily allow” and reloading until something seems to work, or cross my fingers and temporarily allow all. I know I should vet all these little bits more carefully (or there should be a protocol for pages to tell a browser which scripts are necessary for what parts of the page to work). But that would take way too long, and I’d probably get it wrong anyway. So click and pray.

    I don’t know if this is also a worst practice, but at this point I have no idea of what most of my site passwords are. My browser knows, and sometimes I have to ask it.

  2. What do people think of products like LastPass? (http://lastpass.com/) The principle is that they store your password database on their servers, but that database is first “encrypted locally on your PC”. You then have a single password to unlock your password database. They have a Firefox plugin, which lets you access your database from any machine you use. They also allow you to have your password remembered by the plugin. The plugin can also auto-generate a unique and hard to guess password for each new site you add.

    There seem to be a few opportunities for vulnerabilities in this system, but it sure is handy.

    • Jay Libove, CISSP, CIPP says:

      I use LastPass for my personal needs. (I would have sworn that I’d posted a comment on this “Worst Practices” article the day it was published, but now I can’t find it… argh!).
      As only I’m at risk from my personal use of LastPass, I haven’t performed an audit of the technology or the company behind its operations, and while I did read the Ts & Cs, Privacy Policy, etc in full detail, I did it from a personal angle, not a corporate one.
      All that out of the way, the question with LastPass and any other such hosted password vault service is one of trust and compliance – like all outsourcing/ partnership/ “cloud” services.
      Audited (both the tech, to be sure that the client indeed can’t work in a way that allows the server to compromise an individual user’s vault; and the service, to be sure that when you type your vault password into the service’s web page it is used only as advertised; plus all the usual other tech audits), checked more fully from the legal-contract angle (a legal promise not to change those things; auditability; adequate penalties to prompt compliance with these terms), and with ongoing compliance with all of the above, sure, LastPass could be a great corporate tool.
      Given the risk of an unannounced or malicious change to the service or the service’s centrally delivered technology, I’d be much more comfortable buying the technology as a product to run on my own network, if I wanted it for corporate use, but, yes, the concept should be good from a security perspective. It exchanges the known large everyday realized risk of bad password practices for the known much smaller centralized (arguably in single instance larger, but also manageable) risk of compromise of the Public Key Infrastructure – oh, I’m sorry, I meant to say the vault server… but indeed a PKI, internal or external, is a very similar idea to a centralized password vault like LastPass in terms of risk. Do you trust Verisign’s federated / hosted identity services? I do.
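The “encrypted locally on your PC” model described above can be sketched in a few lines. This is a toy illustration, not LastPass’s actual implementation: a real client would use a vetted authenticated cipher such as AES-GCM rather than the HMAC-based stream below, but the shape is the point — a slow key derivation from the master password, and encryption before anything leaves the machine, so the server only ever stores ciphertext.

```python
import hashlib, hmac, os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Deliberately slow KDF, so brute-forcing the master password is expensive
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style stream built from HMAC-SHA256 (a real client would use AES-GCM)
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_vault(plaintext: bytes, key: bytes) -> tuple[bytes, bytes]:
    # Runs entirely on the user's machine; only (nonce, ciphertext) is uploaded
    nonce = os.urandom(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt_vault(nonce: bytes, ciphertext: bytes, key: bytes) -> bytes:
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))
```

The vulnerability the commenters worry about sits outside this sketch: if the vendor’s delivered client code changes maliciously, the “local only” promise evaporates, which is exactly the audit-and-trust question raised above.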

  3. There’s a small number of popular email clients and SMTP services. If these were configured to support a small number of anti-spam conventions (whitelisted senders, whitelisted domains, certificate checking, signature checking, hashcash, automated signing), we ought to be able to shift the cost-benefit equation for spammers and phishers.

    Just for example, if it’s not possible to forge sending from mac/me.com, hotmail.com, aol.com, yahoo.com, and gmail.com, AND if those services adopt a practice of putting some quality-of-authentication metric on their email, then you can combine those two bits of information to infer that high-quality-email from those popular services is probably not spam. Spam could be delayed and evaluated right now, and as we make progress on anti-spam habits, the automated treatment could become more draconian. It’s not news that we could do this, which is why failing to follow through on it (for years, if not decades) is a worst practice.

    This is not foolproof, since one use of a botnet would be to send authenticated spam, but then spam recipients can actually, effectively, complain to someone about it.
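The inference sketched above — trust mail claiming to come from a big provider only when it actually authenticates — can be written as a small scoring rule. This is a toy with assumed names and an assumed policy, not any provider’s actual rule; real receivers record SPF and DKIM verdicts in an Authentication-Results header and feed them into much richer filters.

```python
# Hypothetical whitelist of large providers whose mail should always authenticate
TRUSTED_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "aol.com", "me.com"}

def sender_domain(from_addr: str) -> str:
    # "alice@gmail.com" -> "gmail.com"
    return from_addr.rsplit("@", 1)[-1].lower()

def auth_score(headers: dict) -> int:
    # Crude quality-of-authentication metric: one point per passing check
    results = headers.get("Authentication-Results", "")
    return sum(1 for check in ("spf=pass", "dkim=pass") if check in results)

def likely_forged(from_addr: str, headers: dict) -> bool:
    # Mail claiming to be from a major provider, yet failing every check,
    # is the easy case this comment proposes to reject outright
    return sender_domain(from_addr) in TRUSTED_DOMAINS and auth_score(headers) == 0
```

Note the asymmetry: the rule only ever flags mail from the whitelisted domains, so small independent senders are unaffected — the cost lands on forgers, which is the point of shifting the cost-benefit equation.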

  4. Reusing the same password on multiple sites is a bad idea. If that one password is compromised, then your accounts on all systems are compromised. With OpenID, you provide your OpenID password, and then OpenID logs you in to all your websites. This means that for all practical purposes, your OpenID password is the password to multiple websites. How is this any better than using the same password everywhere? If anything, I would consider this a step backward, since not only is there another site where your information could be compromised, but you’ve also made it crystal clear where an attacker should concentrate the attack effort.

    • OpenID isn’t perfect, but the point is that fewer people are authorized to have access to your login. Using my Google ID to log into a random message board means that I still authenticate through Google, and never give my password to the new site.

      Similarly, if my account IS compromised I might suffer some damage, but I can fix it in one place. However, this doesn’t change the fact that compromise of your OpenID credentials would be devastating. It’s a trade-off I think is worth taking, but others may disagree.
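The property this reply relies on — the message board never handles your password — can be sketched with two toy classes. This is an illustration of the delegation idea, not the OpenID protocol itself: real OpenID verifies signed assertions through discovery and an association with the provider, whereas here a direct verification call stands in for that exchange, and the credentials are made up.

```python
import secrets

class IdentityProvider:
    """Toy identity provider: the only party that ever sees the password."""
    def __init__(self):
        self._passwords = {"alice": "hunter2"}   # stand-in credential store
        self._assertions = {}                    # one-time login tokens

    def login(self, username: str, password: str):
        if self._passwords.get(username) != password:
            return None
        token = secrets.token_hex(16)
        self._assertions[token] = username
        return token

    def verify(self, token: str):
        # Relying parties call back to the provider to check an assertion;
        # pop() makes each token single-use
        return self._assertions.pop(token, None)

class MessageBoard:
    """Toy relying party: accepts logins without ever handling a password."""
    def __init__(self, idp: IdentityProvider):
        self.idp = idp

    def handle_login(self, token: str) -> str:
        username = self.idp.verify(token)
        return f"welcome, {username}" if username else "login rejected"
```

The trade-off both commenters describe is visible here: the board never stores a secret worth stealing, but everything now hinges on the provider’s `_passwords` table staying safe.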

  5. My browser (Firefox) knows most of my passwords and keeps them encrypted. This works very well and puts the security of those passwords under my control. But then there are those sites that think they can improve password security by not using the protocol that Firefox understands. Like the bank sites that have split the userid and password entry across two pages with some dumb picture or other anti-phishing gimmick in between. What’s needed is a standardized protocol that verifies the authenticity of the site. Keeping all my passwords under my control on an encrypted USB flash drive (with backup of course) is what I want.

    Recent research has also shown that most of the password complexity and expiration rules make the problem worse by causing users to write passwords down. Resistance to attack can be achieved in other, better, ways.

    So my vote for worst practice is all the lame attempts that systems administrators are making to “improve” password security.

  6. Clive Robinson says:


    You asked for what people consider the worst of worst practices,

    I nominate “patching”.

    The fact that security and other product patching has only “rework” costs, not “return” costs, for intangible product suppliers has prevented development of “best practice” for security, availability, and assurance of such products.

    If you look at the manufacture of tangible goods such as Fast Moving Consumer Electronics (FMCE) you will see a very marked asymmetric cost between delivery and returns, often by three orders of magnitude.

    Also there is a re-work cost per returned item.

    In software supply almost the opposite is true. The distribution costs of “boxed consumer” products are orders of magnitude more expensive than the cost of defect returns.

    Put simply, the cost comes down to the development of the rework (patch), which is a one-off, and the shared cost of the servers on which it is placed.

    Thus the incentive that gave rise to Quality Assurance in the FMCE manufacturing industry has not given rise to a similar raising of standards in the manufacture of intangibles such as software.

    Now some will disagree with me that the security, availability and assurance development processes are equivalent to the quality process.

    But I think most will, after a little thought, realise that the near-zero cost of patching per customer is not acting as a driver to improve the overall software development process.

    Which makes me ask the question I could get flamed for,

    Should all internet connected hosts pay an “outbound byte tax” ?

    I appreciate that it will destroy a lot of business models (anti-virus, etc.) but the question is what will we gain in return?

  7. Data sharing with unknown entities. Facebook apps give the app owner your user data plus much of your friends’ user data. It only takes two clicks.

  8. Yes, for all the virtues of best practice, I see very few who really embrace it 100%. Like with hand grenades, close, in most cases, is good enough to get the job done.

    IT Security best practices? What does this mean? Hell, there’s no consensus as to what represents a best practice. You’ve got CERT, NIST, ITIL, ISO and on and on. Common bits and pieces are contained in all of them, but with each standards body lording over its progeny, pride in their offspring tends to get in the way of agreeing about what is best. I digress.

    More fundamentally, the idea of applying best practices to Internet-related activities is pure, unadulterated folly. The primary commercial function of the Internet today is advertising and pushing content. There are people and companies making oil-tanker loads of cash leveraging the Web’s mercurial nature and exotic allures. Best practices in insecurity… YOU BETCHA! There’s a sucker born every minute and more than enough Internet schemes already set to take people’s money, identity and, to add insult to injury, even take over their PC.

    From an administrator’s point of view, the Internet is fraught with peril much of which cannot be overcome even with the best IT security practices in-place. It’s one big freaking social experiment, a raw untamed metropolis of action and adventure where every electronic dark alley beckons to the passerby “Hey babe, take a walk on the wild side!” and more than a few people take them up on the offer.

    From a user perspective, who gives a whack about security? They’ve got anti-virus and whatnot. What’s a few Trojans or bots or key loggers as long as one can access their favorite content?

    The only best practice is least practice, whereby one should minimize the amount of time spent delving into Internet cracks and crevices. As momma told us, if someone asks you to stick your hand in a meat grinder, don’t do it. Now that’s best practice if I ever heard it.

  9. Ed, have you seen Cormac’s analysis of a few familiar so-called “worst practices”? He argues, in effect, that a security researcher’s “worst practice” is often a user’s perfectly rational economic decision…

    • Jay Libove, CISSP, CIPP says:

      Dan, thanks very much for the reference to Cormac’s analysis. (Hyperlink on the word “analysis” in Dan’s post above). Excellent paper, everyone should read it.
      However, this discussion thread isn’t about users’ actions which are bad – it’s about *our* actions which produce user actions which produce, in the aggregate, bad results. In short, Cormac’s analysis fits very neatly with the premise of this discussion: that we’re exercising worst practices in many of the things we do, and as a result our users make rational decisions to bypass much of our policy, procedure, standards, training, guidance, advice, edicts, threats, carrots & sticks, …

  10. Too many pages just do something simple like submit a form (where the target can be checked for https: URLs), but instead of just using a single URL action, they use onsubmit, which goes to some buried JavaScript, which somewhere does nothing more than what it would have done had it been an ordinary URL. Or the stupid menus which do precisely what an ordinary href would have done, but again use JavaScript. NoScript generally breaks the site until I allow it.

    I also have a redirect interceptor (NoRedirect). Today when placing an order, I noticed a MySQL error message in one of the intermediate pages, which I doubt anyone would see without the pause. If they are processing something, there is no reason to return some middle page – or pages; sometimes on financial sites it will go through a half-dozen to get to the real page. Is every one of the steps along the ricocheting redirection secure? At least I get a warning when I try going off an insecure page, but what about 30x redirects?

    Finally (extensions: Flashblock and BetterPrivacy), there’s Flash. The wrong answer to ActiveX. Cookies you can’t find, a black box which does who knows what – and sites that just display an overly wide box with my Flashblock icon and nothing more. And Adobe has lots of problems. Fine if I have to use it to view video (until HTML5 becomes more universal), but this over-active content that wants to peek and poke at my hard drive?

    For anything which needs to be secure, it should only use plain HTML, and should go directly to the target page without a single redirect.
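The worry about multi-hop redirects above can be checked mechanically once you have the chain in hand. A minimal sketch — the function name and example URLs are hypothetical; a real check would capture the chain from the browser or from an HTTP client’s redirect history:

```python
from urllib.parse import urlparse

def insecure_hops(redirect_chain: list[str]) -> list[str]:
    """Given the URLs visited while following a chain of 30x redirects,
    return those not served over HTTPS -- each is a hop where the browser
    shows no warning, yet the request went out in the clear."""
    return [url for url in redirect_chain if urlparse(url).scheme != "https"]
```

Any non-empty result means the “half-dozen pages to get to the real page” pattern leaked at least one request, exactly the failure mode the 30x question points at.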