Thursday, May 26, 2005

Making paper passwords more secure

Regarding putting passwords on paper: there is a way to eliminate the additional risk of writing passwords down instead of simply keeping them in your head. Write down a list of passwords on paper, but when you actually create/use the password online, append an extra memorized word, like so:

Your paper sheet:

The actual passwords:

This gives a poor man's two-factor authentication scheme, though of course it is nowhere near the strength of one-time passwords.
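The scheme is simple enough to sketch in a few lines. This is only an illustration; the password fragments and the memorized suffix below are made-up examples:

```python
# Poor man's two-factor: the paper sheet holds only partial passwords;
# a single memorized suffix (never written down) completes each one.
MEMORIZED_SUFFIX = "tango"  # hypothetical -- the word that stays in your head

def actual_password(paper_password: str, suffix: str = MEMORIZED_SUFFIX) -> str:
    """Combine what is on paper with what is in your head."""
    return paper_password + suffix

# What the paper sheet might hold (hypothetical examples):
paper_sheet = {"bank": "7kQz2", "email": "mXp41"}

# What you actually type at each site:
for site, written in paper_sheet.items():
    print(site, actual_password(written))
```

A thief who finds the sheet gets only the "something you have" half; the suffix is the "something you know" half.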

Meta-commentary on passwords

I just ran across Passwords and Security along with the other articles about how Microsoft says it's okay to put passwords on paper.

The one thing missing in all this discussion about how to choose and store reusable passwords is their fatal flaw -- reusability.

The problem with passwords isn't that somebody might write them down; it's that they are static, unchanging for days, weeks, months, years. Once intercepted (by a keylogger, from a buffer, in transit on the network, at the destination, etc.), a "reusable password" is the very definition of being vulnerable to a "replay attack".

One Time Password schemes such as SecurID (and their competition such as Safeword, Cryptocard, etc.) don't gain their vastly improved security by taking the password out of the user's hands and keeping users from writing their passwords down on paper. OTPs are more secure because, while they are still passwords, they are no longer reusable passwords.

In fact, OPIE (née S/Key) is a free and very functional OTP scheme that actually encourages the user to write down (print out) a list of passwords on paper... one-time passwords which you cross off the list as you use them.
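The idea behind such a printed list is just an iterated hash chain. A minimal illustrative sketch follows; it uses SHA-1 instead of the MD4/MD5 of the original, skips the 64-bit truncation and six-word encoding of real S/Key/OPIE, and the secret and seed are made-up examples:

```python
import hashlib

def otp_chain(secret: str, seed: str, count: int) -> list:
    """Build a hash chain: entry i is the hash applied (i+1) times to seed+secret."""
    h = (seed + secret).encode()
    chain = []
    for _ in range(count):
        h = hashlib.sha1(h).digest()
        chain.append(h.hex())
    return chain

def server_check(presented: str, stored: str) -> bool:
    """One more hash of the presented password must equal the stored value.
    On success the server replaces its stored value with the presented one,
    so every password on the list works exactly once."""
    once = hashlib.sha1(bytes.fromhex(presented)).hexdigest()
    return once == stored

chain = otp_chain("squeamish ossifrage", "ot4711", 100)
server_stores = chain[-1]        # the server keeps only the last link
paper_list = chain[:-1][::-1]    # print these out; cross each off after use
```

An eavesdropper who captures one password cannot derive the next one on the list, because that would require inverting the hash.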

Saturday, May 21, 2005

News for OpenBSD nerds?

Amazing. Not just one, but two articles on the release of OpenBSD 3.7, plus one about TOR.

Oddly, the second OpenBSD article isn't tagged as a BSD article.

Friday, May 20, 2005

Disco Stu doesn't advertise

Posted to Slashdot's OpenBSD 3.7 Released story as Disco Stu doesn't advertise:

xbsd wrote: against the testimonies in the OpenBSD website.

Except that perhaps many of the largest users of an OS designed to be "proactively secure" might be paranoid enough about security not to announce their choice on a public web page?

Tuesday, May 17, 2005

Car break-ins using bluetooth

Found on comp.risks, an interesting new risk of bluetooth:

Subject: Car breakins using bluetooth
From: Andrew Nicholson

I recently lost our rental car in one of the huge parking lots of Disney World.


Here's the interesting part: every break-in in the past month had involved a laptop with internal bluetooth. Apparently if you just suspend the laptop the bluetooth device will still acknowledge certain requests allowing the thief to target only cars containing these laptops.

Sunday, May 15, 2005

New Cryptogram today

Bruce Schneier's Cryptogram is updated on the 15th of the month.

Contents include Bruce's predictable fear-mongering about REAL ID, the "Combating Spam" rant that he published on his blog weeks ago, and a ton of self promotion (or Counterpane promotion, I don't know which is worse).

Overall, I found the comments from readers more interesting than what Mr. Schneier has to say. I suppose that's the risk a pundit takes when operating a (popular) blog.

I have no worries in that department.

Friday, May 13, 2005

Time for me to find a new line of work

Ran across "Post-Exploitation on Windows using ActiveX Controls", linked from Slashdot.

Boiled down to the most basic principles, it reads as "we're all screwed":

When exploiting software vulnerabilities it is sometimes impossible to build direct communication channels between a target machine and an attacker's machine due to restrictive outbound filters that may be in place on the target machine's network. Bypassing these filters involves creating a post-exploitation payload that is capable of masquerading as normal user traffic from within the context of a trusted process. One method of accomplishing this is to create a payload that enables ActiveX controls by modifying Internet Explorer's zone restrictions. With ActiveX controls enabled, the payload can then launch a hidden instance of Internet Explorer that is pointed at a URL with an embedded ActiveX control. The end result is the ability for an attacker to run custom code in the form of a DLL on a target machine by using a trusted process that uses one or more trusted communication protocols, such as HTTP or DNS.

The only viable defense against this attack is to have total control over the desktop, and I have yet to find a large corporation where locking down the desktop to the extent required would be politically viable.

Tuesday, May 10, 2005

SecurID authentication on OpenBSD for SSH and Apache

The fact that OpenBSD is a "secure" platform is no excuse not to harden it further by taking advantage of strong authentication. It is possible to integrate SecurID with OpenBSD even though RSA has not seen fit to release a binary version of their ACE libraries for any OpenBSD hardware platform.

I normally use S/Key with RMD160 as a one-time-password solution for access to OpenBSD. This has the advantage of being integrated into OpenBSD (at least on the i386 platform; there are still bugs with S/Key on Sparc64), but my less paranoid cow-orkers do not want to carry around a Zaurus or "cheat sheet" just so they can log into a web server. But they already have company-issued SecurID tokens...

Lacking pam_securid for my platform, I use OpenBSD's stock login_radius for SSH and console logins, and link mod_auth_radius into the default hardened apache included with OpenBSD. And then because I'm paranoid, run it all under systrace.
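For the curious, the glue looks roughly like the following. This is a sketch only: the Apache directive names are recalled from the mod_auth_radius README, and the paths, hostnames, secrets, and login.conf class are placeholders, so verify everything against the mod_auth_radius documentation and login_radius(8) before trusting it:

```
## httpd.conf fragment: RADIUS (and hence SecurID) auth for a directory
LoadModule radius_auth_module /usr/lib/apache/modules/mod_auth_radius.so
AddRadiusAuth radius.example.com:1812 sharedsecret 5:3
AddRadiusCookieValid 5                 # minutes before re-prompting
<Directory "/var/www/htdocs/protected">
    AuthType Basic
    AuthName "SecurID tokencode"
    AuthRadiusAuthoritative on
    AuthRadiusActive on
    require valid-user
</Directory>

## /etc/login.conf fragment: route system logins through login_radius(8);
## the RADIUS server and shared secret live in /etc/raddb/servers
auth-defaults:auth=radius:
```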

mod_auth_radius works correctly with one time passwords, including SecurID, because this authentication module only actually passes the "password" (tokencode) up to the RADIUS server once, when you first authenticate. After successful RADIUS authentication, mod_auth_radius sends back to the client a hashed time-limited cookie. So long as the client returns the cookie with each request and the cookie is valid (not expired, cryptographically intact, etc), then mod_auth_radius will not need to re-prompt for authentication credentials.
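The cookie mechanism is easy to illustrate. The following is a conceptual sketch of a time-limited, tamper-evident cookie, not mod_auth_radius's actual cookie format; the secret and lifetime are made-up values:

```python
import hashlib
import hmac
import time

SERVER_SECRET = b"change-me"  # hypothetical; known only to the web server

def issue_cookie(user: str, lifetime: int = 300, now: float = None) -> str:
    """After a successful RADIUS authentication, hand the client a
    time-limited cookie so the one-time tokencode is never re-sent."""
    expires = int((now if now is not None else time.time()) + lifetime)
    mac = hmac.new(SERVER_SECRET, f"{user}:{expires}".encode(), hashlib.sha256)
    return f"{user}:{expires}:{mac.hexdigest()}"

def cookie_valid(cookie: str, now: float = None) -> bool:
    """Accept the request without contacting RADIUS only if the cookie is
    cryptographically intact and not yet expired."""
    try:
        user, expires, mac = cookie.rsplit(":", 2)
    except ValueError:
        return False
    good = hmac.new(SERVER_SECRET, f"{user}:{expires}".encode(), hashlib.sha256)
    if not hmac.compare_digest(mac, good.hexdigest()):
        return False
    return int(expires) > (now if now is not None else time.time())
```

Once the cookie expires or fails the integrity check, the server falls back to prompting for a fresh tokencode.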

There are three cases with mod_auth_radius where it might prompt again for authentication:

  1. The cookie has expired, or otherwise doesn't check out as valid.
  2. The client is not accepting and returning the cookie.
  3. mod_auth_radius can have a strange interaction with Apache depending on how you reach the first "protected" web page. This is most often a problem when the first protected URL you access ends in / and must be processed via DirectoryIndex, or when you access an unprotected page containing protected images.

The solution I chose to work around the DirectoryIndex problem was to have the main index page for the site contain a "login" link that points at an explicit protected page rather than a bare / URL.

Thursday, May 05, 2005

Evaluating Websense "censorware" software

Websense is one of the most well-known and widely deployed "corporate" URL filtering products, but it doesn't receive much scrutiny. For example, Peacefire's most recent WebSENSE examination dates back to 2001!

"Websense Enterprise" is normally deployed in a "sniffer" type setup, where the "Network Agent" tries to inspect web requests as they flow by (either directly to the Internet, or as requests towards a proxy or pool of proxies). If it sees something it doesn't like, it spoofs packets to hijack the session and send back a "blocked" page.

There are two positives to this type of deployment:

  1. If the Websense Network Agent fails, all Internet traffic just flows as normal; it "fails open".

  2. If you don't already have proxies deployed, you don't need to deploy a proxy for Websense to work -- you can just set it up as a sniffer without slowing down your Internet throughput.

There is a problem with this "sniffer" design, a problem that leads to a high rate of false positives.

In my experience, Websense Enterprise can, under load, miss "seeing" certain requests, so if you really want to watch the forbidden paris-hilton.mpg, just keep hitting reload and eventually you will get lucky (and your admin will get a ton of log events to review from all of the times it did successfully block the request).

The above "false negative" problem is made worse by a weird, unpublicized bug in Websense.

For each request from a client, Websense will do DNS lookups on the URL hostname and the IP destination of the TCP session. This is necessary when Websense is not deployed in front of a proxy, since it must do a reverse lookup to figure out the real web site being accessed.

The problem is that Websense can miss out on blocking HTTP requests if it gets slow DNS answers, even for requests towards a proxy where the cleartext URL has a "banned" domain name, requests for which you would not expect DNS lookups to be a factor in the allow/deny decision.

I've found it difficult to reliably exploit this, so I don't currently have a working "exploit" to publish. If you want to try it for yourself, find a blocked MPG link on Yahoo Video Search, wait until about 11:30 in the morning, then just keep hitting reload and wait for the movie to appear (or for HR to come calling).

Lastly, Websense is generally the most expensive for per-seat pricing, and they have a funny notion of "seats" -- The Websense software counts all unique client IP addresses seen as "seats", so if you have short DHCP lease times you can get hit up for a lot more seats than you have employees.

Wednesday, May 04, 2005

Proxy.PAC support in RealPlayer V10

I just recently noticed that RealPlayer V10 actually has support for using Proxy Automatic Configuration (PAC) scripts.

Not just the usual "support" by virtue of embedding Internet Explorer into the player for displaying HTML content, but actual options to select a PNA or RTSP proxy server through a PAC script.

If this actually works, it'll be cool.

But that's a very big if.

Sunday, May 01, 2005

Testing "nannyware" tools for filtering URLs

Time to re-evaluate deploying web censorship tools. Specifically: the free (DansGuardian), the expensive (Websense), and the obscure (SmartFilter).

To test for false positives, I have a Perl script that behaves like an HTTP client: it reads a file of URLs, attempts each one with realistic client-like headers, then examines the result to see whether the request succeeded or was blocked.
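My script is in Perl; an equivalent sketch in Python looks like the following. The "blocked" heuristics here are guesses that would need tuning per product, and the User-Agent string is just an example of a realistic header:

```python
import urllib.request

# Headers that make the request look like a real browser of the era.
CLIENT_HEADERS = {
    "User-Agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Accept": "text/html,*/*",
}

def classify_body(body: str) -> bool:
    """Heuristic: vendor block pages tend to announce themselves.
    Returns True if the body looks like a filter's 'blocked' page."""
    markers = ("blocked by", "access denied", "websense")
    return any(m in body.lower() for m in markers)

def check_url(url: str, timeout: int = 15) -> str:
    """Fetch one URL and classify the outcome as 'ok' or 'blocked'.
    A filter block can look like an HTTP error, a reset, a timeout,
    or a 200 response carrying a vendor 'blocked' page."""
    req = urllib.request.Request(url, headers=CLIENT_HEADERS)
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = resp.read(4096).decode("latin-1", "replace")
    except Exception:
        return "blocked"  # error status, connection reset, or timeout
    return "blocked" if classify_body(body) else "ok"

def run(url_file: str) -> None:
    """Read a file of URLs, one per line, and report each verdict."""
    with open(url_file) as f:
        for url in (line.strip() for line in f):
            if url:
                print(check_url(url), url)
```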

If you have a budget, there are a number of good tools for running load tests against web sites. The best are actually meant for QA'ing your web server and site backend, but they work just as well to test web filtering -- a web page being blocked by a filter looks a lot like a web server failure: timing out, returning an HTTP error result code, or returning a bogus "blocked" web page.

For load testing, I have to work on the cheap, so I just use httperf running on a bunch of old retired desktop PCs with OpenBSD.

URL filtering is a lot easier to implement and test when you force all desktop clients to make their HTTP/HTTPS requests via an explicitly configured proxy. Clients can't go out directly to Internet IP addresses on TCP/80 or TCP/443 or any other port -- they MUST make the request via the proxy, and the proxy knows how to check with the URL filtering software.
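On OpenBSD, that "proxy or nothing" policy can be enforced at the packet filter. A hypothetical pf.conf fragment (the interface name, proxy address, and port are made up for illustration):

```
# pf.conf sketch: clients may only reach the web via the proxy
int_if = "fxp0"
proxy  = "10.0.0.5"

# let clients talk to the proxy's listening port
pass in quick on $int_if proto tcp from $int_if:network to $proxy port 8080

# direct outbound web traffic from clients is dropped,
# so nothing can bypass the URL filter
block in quick on $int_if proto tcp from $int_if:network to any port { 80, 443 }
```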


Smartfilter hooks directly into the proxy, so it can only be as fast as the proxy itself. Since it is only doing URL lookups, it is quite fast.


Dansguardian wants to inspect the page itself, and can become very slow under load. It is possible to throw hardware at a software problem: run it on a fast enough machine and you won't notice the lag quite so much.


Websense is normally deployed as a sniffer, where it just inspects traffic passing by. This is useful if you need a "fail open" environment where a crashed filter doesn't just kill all web access. More on Websense later this week.