got privacy?  Musings on the state of Privacy in a connected world.
 
Interesting article this week in the IAPP's Privacy Advisor about the ethics of Googling someone, which got me thinking.

Even a couple of years ago - before social networks really caught on - this question wouldn't really have been asked.  Unless you were a celebrity or information about you was available through other channels such as magazines, Google wouldn't have had a great deal of additional information to add.  That has certainly changed over a relatively short period of time, particularly since social networks like Facebook started exposing more of the data they had collected about people outside of their own networks, so that search engines could see it.  Anyone who has tried to manage their Facebook privacy settings will know that they are far from easy to use, and it is easy to see how people unintentionally expose information to the world that they intended to keep within a network of a few friends.
 
Which brings us back to the ethics of Googling someone.  While this blog thinks that things posted onto the public Internet, such as this blog, are fair game for anyone to stumble upon or find, there are some types of information that people expect to be kept private - an expectation that unfortunately is not always met.  And then there are, to our mind, practices that are completely unreasonable invasions of privacy. 

The worst example of this that I've seen to date (although I'm sure there are others) was brought to our attention via Twitter (thanks @ChristianVW for the heads-up).  The City of Bozeman, Montana has decided that just doing a Google search on a potential employee is not enough.  They have been asking prospective employees for the usernames and passwords to their Facebook and other social networking accounts.

The quote that I thought best summed up this sorry affair came via a local radio station.  "One thing that's important for folks to understand about what we look for is none of the things that the federal constitution lists as protected things, we don't use those," said attorney Greg Sullivan.  Basically: give us access to everything and trust us to use it properly.

Sorry - that doesn't cut it with us, and I suspect a lot of readers of this blog feel the same way.  At a minimum, Bozeman should engage someone who actually understands Internet and privacy law and rethink how they run their background check process.  Beyond that - anyone who has handed over any passwords should change them immediately.

We'd be interested to hear of any other employers who are trying similar tactics.  Please comment and let us know.
 
 
Interesting reports in Wired (http://www.wired.com/threatlevel/2009/10/delta/), The Register (http://www.theregister.co.uk/2009/10/14/delta_flyersrights_hacking_lawsuit/) and probably other tech news outlets about allegations that a large corporation accessed an activist's private email account as part of an effort to derail legislation that could cost it up to $40m per year. 

While this blog still believes in the principle of "innocent until proven guilty" - we will be watching this case with interest.  In many cases, people's private and work lives are closely intertwined, and there aren't many people who can claim that their work email is 100% separate from their personal email (*cough* Sarah Palin *cough*).

If the allegations are proven, this case could result in punitive damages and may well cause other organizations to think twice about using similar tactics.  Until then - we'll be keeping an eye on how this progresses as it goes to court.
 
 
How many of us use social networks such as Facebook or LinkedIn and never think about what information we are sharing or who we might be sharing it with?  Hopefully no-one reading this blog would admit to that, but the majority of users of these applications are only slowly becoming aware of some of the implications of using these technologies.

As I heard someone say recently: "What happens in Vegas, stays on Facebook" - which is a very simple way of explaining the risks to people who may not normally think about how the photos and other information they are sharing may come back to bite them one day.

Which leads me on to an interesting presentation that Julien Freudiger (http://twitter.com/jFreudiger/) posted a link to.  Called "Towards Privacy-aware OpenSocial applications", it discusses the benefits that might be realized if social networking applications were much more aware of how sensitive a given piece of information is, and could advise users when they are about to do something that would decrease their overall privacy.  The math in this presentation isn't for everyone - but the conclusions are interesting - particularly the comparison to FICO credit scores, which have gone from obscure to well known and actively managed by many people. 

Wouldn't it be great if we had the same level of visibility over our privacy preferences?  Hopefully if this kind of framework is supported by the big players in the space, it will just become part of the infrastructure that we don't have to think about.

http://www.slideshare.net/starrysky2/towards-privacyaware-opensocial-applications
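To make the "privacy score" idea a little more concrete, here's a toy sketch - emphatically not the model from the presentation, and the attribute names and sensitivity weights are invented for illustration.  The core intuition is simply that each piece of profile data you expose carries a sensitivity weight, and the more (and more sensitive) data you share, the lower your score:

```python
# Toy illustration only -- not the model from the presentation.
# Attribute names and sensitivity weights below are assumptions.

SENSITIVITY = {          # assumed weights: 0 (harmless) to 1 (very sensitive)
    "name": 0.1,
    "hometown": 0.3,
    "birth_date": 0.6,
    "home_address": 0.9,
}

def privacy_score(shared_attributes):
    """Return a score in [0, 100]; 100 means nothing sensitive is shared."""
    # Unknown attributes get a middling default weight of 0.5.
    exposure = sum(SENSITIVITY.get(attr, 0.5) for attr in shared_attributes)
    max_exposure = sum(SENSITIVITY.values())
    return round(100 * (1 - min(exposure / max_exposure, 1)))

print(privacy_score([]))                        # nothing shared -> 100
print(privacy_score(["name"]))                  # mostly private
print(privacy_score(["name", "home_address"]))  # much more exposed
```

Like a FICO score, a single visible number would give users something they can actually watch and manage, rather than a maze of individual settings.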
 
 
An interesting point was made the other day by Joel Scambray in a presentation that he gave to the WTIA Security SIG.  How many tests does a doctor run when you go in for a checkup?  The number is around eight: weight, blood pressure and so on.  Of course this will not diagnose every ailment, but if you apply the Pareto principle and say that it gives a trained practitioner a good guess at 80% of the common things that could be wrong with someone who walks into their office - that's a pretty good outcome.

If that works for something as complex as the human body, why do most information security frameworks (ISO, COBIT, PCI) have over 100 - in some cases over 200 - individual controls?  Can't we just ask 8-10 questions and get a good feel for 80% of the major things that are wrong with an information security or privacy program?

If we can - what are those key 8-10 questions that would help us to determine if we need to get a specialist involved to do more detailed testing?
 
 
The Federal Trade Commission (FTC) has settled with six organizations that falsely claimed they complied with Safe Harbor (sidenote: I still have to stop myself from spelling it "Harbour", even though I've lived in the US for a few years...). 

For those of you not familiar with Safe Harbor, it is a way for US organizations to receive personal data from Europe even though the two regions have very different data protection legislative environments.  There is a fundamental right to privacy in the draft European Constitution, but not in the US Constitution - http://www.edri.org/edrigram/number12/privacy-eu-constitution

Safe Harbor is a self-certification process.  Organizations can download the principles from the FTC website, review their practices against them and then pay a nominal fee to be included in the list of organizations that are Safe Harbor "compliant".  So far, so subject to abuse?  Frankly, I am amazed that the EU has allowed this self-certification process to continue for so long when it provides so little real comfort that organizations are doing what they need to do to protect EU citizens' personal information.  I guess that it's partly due to the balance of power in the EU/US relationship, where the US government has no doubt been lobbied hard by business not to make the standard any more onerous.

I'm all for self-regulation when it works, but as Ronald Reagan said, "Trust, but verify".  Now that the FTC has stepped up its enforcement, I wonder how many of the organizations that have gone through the self-certification process will revisit their answers just to check whether they would stand up to an outside inspection.

FTC statement regarding the settlement http://www.ftc.gov/opa/2009/10/safeharbor.shtm
Much more detailed analysis of the case and some possible implications at http://www.huntonprivacyblog.com/2009/10/articles/enforcement-1/ftc-takes-additional-safe-harborrelated-enforcement-actions/index.html
 
 
Just saw an interesting statistic posted by Mal Fletcher (http://twitter.com/malfletcher) on Twitter.  Apparently there is one CCTV camera for every 14 people in the UK (the most surveilled population in the world - one of the reasons I no longer live there), yet only one crime is solved per 1,000 cameras in London. 

I'm not questioning the data, but this made me think of a few questions.  Humor me...
How many crimes could we reasonably expect a CCTV camera to help solve over its lifetime?
Is solving crimes the primary purpose of many / most / all CCTV cameras?
Are all CCTV cameras located where crimes are known to occur?
Are there some places that just never experience a crime?  If we don't put CCTV cameras there, will the crime migrate?
Are criminals clever enough to know when they are on CCTV?
Would it make things easier to RFID tag every member of the population so that we didn't have tedious facial matching to do?  [just kidding...]

These should lead us to the question that really matters: "what is the optimum number of CCTV cameras to achieve the right balance between crime reduction/prevention and privacy?"  I don't know that there is an answer - it's very much in the eye of the beholder - but extrapolation in either direction leads to craziness.  The UK is at the leading edge of CCTV deployment, so if there's going to be a backlash anywhere, the UK will probably see it first.
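For what it's worth, the headline statistics imply some striking absolute numbers.  A quick back-of-envelope calculation (assuming a UK population of roughly 61 million in 2009 - my assumption, not part of the original tweet):

```python
# Back-of-envelope arithmetic behind the CCTV statistics quoted above.
# UK_POPULATION is an assumed 2009 estimate; the two ratios are the
# figures quoted in the tweet.

UK_POPULATION = 61_000_000          # assumed, for illustration
PEOPLE_PER_CAMERA = 14              # "1 camera for every 14 people"
CAMERAS_PER_CRIME_SOLVED = 1_000    # "1 crime solved per 1,000 cameras"

cameras = UK_POPULATION // PEOPLE_PER_CAMERA
crimes_solved = cameras // CAMERAS_PER_CRIME_SOLVED

print(f"Implied number of cameras nationwide: ~{cameras:,}")
print(f"Crimes solved if the London rate held everywhere: ~{crimes_solved:,}")
```

Over four million cameras to solve a few thousand crimes - whatever the right balance is, numbers like these are why the question deserves asking.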
 
 
I remember watching Minority Report a few years ago and thinking that although the plot was a little contrived (another not-quite-successful translation of a Philip K. Dick novel to the big screen), some of the gadgets were pretty cool.  It reminded me of Blade Runner (another PKD-based movie), where everyone wanted not only the flying cars but also those umbrellas that light up - which would be pretty handy in Seattle!

Now, it increasingly looks like some of that vision is becoming reality.  This article from the BBC website showcases some of the places where money and privacy collide - particularly around location-aware advertising.  I don't fully buy into the argument that people would rather have ads that are relevant to them.  The statistics about how many adverts we are already exposed to are pretty incredible.  If possible, I'd like to be able to avoid ads that are targeted based on who I am and where I am, or at least be exposed to them on an opt-in basis only.

Here's the article.  Enjoy.

http://news.bbc.co.uk/2/hi/technology/8280564.stm
 
 
Looks like privacy does have a price - at least if you're David Letterman.  It appears that Letterman was the subject of a blackmail plot, and as part of the fallout he has had to relinquish some of his privacy - admitting that he had been sleeping with some of the women who worked on his show (although at this point it is not clear how many women, or when this occurred).  Obviously this information would have come out when the alleged blackmailer ended up in court - and at least some credit goes to Letterman for the chutzpah to break the news on his show this week.  The whole incident did leave me wondering, though - how much would have to be at stake for me to expose previously private information about myself that could threaten my job, my marriage and my reputation?  It's a question that I hope never to have to answer in reality.