got privacy?  Musings on the state of Privacy in a connected world.
 
Interesting reports in Wired (http://www.wired.com/threatlevel/2009/10/delta/), The Register (http://www.theregister.co.uk/2009/10/14/delta_flyersrights_hacking_lawsuit/) and probably other tech news outlets about allegations that a large corporation accessed an activist's private email accounts as part of an effort to derail legislation that could cost it up to $40m per year.

While this blog still believes in the principle of "innocent until proven guilty", we will be watching this case with interest.  In many cases, people's private and work lives are closely intertwined, and there aren't many people who can claim that their work email is 100% separate from their personal email (*cough* Sarah Palin *cough*).

If proven, these allegations could result in punitive damages and would certainly cause other organizations to think twice about using similar tactics.  Until then, we'll be keeping an eye on how this progresses as it goes to court.
 
How many of us use social networks such as Facebook or LinkedIn and never think about what information we are sharing or who we might be sharing it with?  Hopefully no one reading this blog would admit to that, but the majority of users of these applications are only slowly becoming aware of some of the implications of using these technologies.

As I heard someone say recently: "What happens in Vegas stays on Facebook" - which is a very simple way of explaining the risks to people who may not normally think about how the photos and other information that they are sharing may come back to bite them one day.

Which leads me on to an interesting presentation that Julien Freudiger (http://twitter.com/jFreudiger/) posted a link to.  Called "Towards Privacy-aware OpenSocial applications", it discusses the benefits that might be realized if social networking applications were much more aware of how sensitive a piece of information is and could advise users when they are about to do something that would decrease their overall privacy.  The math in this presentation isn't for everyone, but the conclusions are interesting - particularly the comparison to FICO credit scores, which have gone from obscure to well known and actively managed by many people.

Wouldn't it be great if we had the same level of visibility over our privacy preferences?  Hopefully if this kind of framework is supported by the big players in the space, it will just become part of the infrastructure that we don't have to think about.

http://www.slideshare.net/starrysky2/towards-privacyaware-opensocial-applications
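
To make that idea a little more concrete, here's a rough sketch of what a FICO-style privacy score could look like in practice.  To be clear, this is my own illustration rather than code from the presentation - the attribute names, sensitivity weights and warning threshold are all made-up assumptions.

```python
# Hypothetical sketch of a FICO-style "privacy score" - the attribute names,
# weights and threshold are illustrative assumptions, not taken from the
# OpenSocial presentation.

SENSITIVITY = {          # higher = more sensitive to expose
    "name": 5,
    "hometown": 10,
    "birthdate": 20,
    "current_location": 30,
    "phone_number": 35,
}

MAX_SCORE = 100
WARN_THRESHOLD = 50      # warn once the score drops below this


def privacy_score(shared_attributes):
    """Start from a perfect score and subtract the weight of each shared attribute."""
    penalty = sum(SENSITIVITY.get(attr, 0) for attr in shared_attributes)
    return max(MAX_SCORE - penalty, 0)


def check_share(currently_shared, new_attribute):
    """Advise the user before a share that would push their score below the threshold."""
    new_score = privacy_score(currently_shared | {new_attribute})
    if new_score < WARN_THRESHOLD:
        print(f"Warning: sharing '{new_attribute}' drops your privacy score to {new_score}.")
    return new_score


if __name__ == "__main__":
    shared = {"name", "hometown"}
    check_share(shared, "birthdate")                           # still above the threshold
    check_share(shared | {"birthdate"}, "current_location")    # triggers the warning
```

Even a toy model like this makes the point: if the score were visible and consistent across applications, people could manage it the way many now manage their credit score.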
 
An interesting point was made the other day by Joel Scambray in a presentation that he gave to the WTIA Security SIG.  How many tests does a doctor do when you go in for a checkup?  The number is around eight: weight, blood pressure and so on.  Of course, this won't diagnose every ailment, but if you apply the Pareto principle and say it gives a trained practitioner a good guess at 80% of the common things that could be wrong with someone who walks into their office, that's a pretty good outcome.

If that works for something as complex as the human body, then why do most information security frameworks (ISO, COBIT, PCI) have over 100 [in some cases over 200] individual controls?  Can't we just ask 8-10 questions and get a good feeling for 80% of the major things that are wrong with an information security or privacy program?

If we can - what are those key 8-10 questions that would help us to determine if we need to get a specialist involved to do more detailed testing?
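
I don't have the definitive list, but just to sketch how lightweight such a triage could be, here's a toy example - the questions below are placeholders of my own invention, not drawn from ISO, COBIT, PCI or any other framework.

```python
# Hypothetical quick-triage sketch: the questions are illustrative placeholders,
# not an official checklist from any security or privacy framework.

TRIAGE_QUESTIONS = [
    "Do you maintain an inventory of the personal data you hold?",
    "Is access to that data restricted to people who need it?",
    "Is sensitive data encrypted at rest and in transit?",
    "Do you patch systems on a defined schedule?",
    "Do you have (and test) an incident response plan?",
    "Are third parties that handle your data under contract to protect it?",
    "Do staff receive security and privacy training?",
    "Do you log and review access to sensitive systems?",
]


def triage(answers):
    """Return the share of 'yes' answers; a low score suggests calling in a specialist."""
    yes_count = sum(1 for answer in answers if answer)
    return yes_count / len(answers)


if __name__ == "__main__":
    # Example: five of the eight answers are "yes".
    answers = [True, True, False, True, False, True, False, True]
    score = triage(answers)
    print(f"Triage score: {score:.0%}")
    if score < 0.8:
        print("Consider bringing in a specialist for a deeper assessment.")
```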
 
The Federal Trade Commission (FTC) has settled with six organizations that falsely claimed they complied with Safe Harbor (Sidenote: I still have to stop myself from spelling it "Harbour" even though I've lived in the US for a few years...).

For those of you not familiar with Safe Harbor, it is a way for US organizations to share data between the US and Europe even though very different data protection legislative environments are in place.  There is a fundamental right to privacy in the draft European Constitution, but not in the US Constitution - http://www.edri.org/edrigram/number12/privacy-eu-constitution

Safe Harbor is a self-certification process.  Organizations can download the principles from the Department of Commerce website, review their practices against them and then pay a nominal fee to be included in the list of organizations that are Safe Harbor "compliant".  So far, so subject to abuse?  Frankly, I am amazed that the EU has allowed this self-certification process to continue for so long when it provides so little real comfort that organizations are doing what they need to do to protect EU citizens' personal information.  I guess it's partly due to the balance of power in the EU / US relationship, where the US government has no doubt been lobbied hard by business not to make the standard any more onerous.

I'm all for self-regulation when it works, but as Ronald Reagan said, "Trust, but verify".  Now that the FTC has stepped up its enforcement actions, I wonder how many of the organizations that have gone through the self-certification process will revisit their answers just to check whether they would stand up to an outside inspection.

FTC statement regarding the settlement http://www.ftc.gov/opa/2009/10/safeharbor.shtm
Much more detailed analysis of the case and some possible implications at http://www.huntonprivacyblog.com/2009/10/articles/enforcement-1/ftc-takes-additional-safe-harborrelated-enforcement-actions/index.html
 
Just saw an interesting statistic posted by Mal Fletcher (http://twitter.com/malfletcher) on Twitter.  Apparently there is 1 CCTV camera for every 14 people in the UK (the most surveilled population in the world - one of the reasons I no longer live there), but only one crime is solved for every 1,000 cameras in London.

I'm not questioning the data, but this made me think of a few questions.  Humor me...
How many crimes could we reasonably expect a CCTV camera to help solve over its lifetime?
Is solving crimes the primary purpose of many / most / all CCTV cameras?
Are all CCTV cameras located where crimes are known to occur?
Are there some places that just never experience a crime?  If we don't put CCTV cameras there, will the crime migrate?
Are criminals clever enough to know when they are on CCTV?
Would it make things easier to RFID tag every member of the population so that we didn't have tedious facial matching to do?  [just kidding...]

These should lead us to ask the question that really matters: "what is the optimum number of CCTV cameras to achieve the right balance between crime reduction/prevention and privacy?"  I don't know that there is an answer - it is very much in the eye of the beholder - but extrapolation in either direction leads to craziness.  The UK is at the leading edge of CCTV deployment, so if there's going to be a backlash anywhere, the UK will probably get it first.
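
For what it's worth, here's the back-of-envelope arithmetic behind those numbers.  The UK population figure is my assumption (roughly 61 million in 2009), and applying the London solve rate nationally is a stretch, so treat the result as an order-of-magnitude estimate only.

```python
# Back-of-envelope arithmetic on the CCTV statistic; the population figure is
# an assumption (roughly 61 million in 2009), the ratios come from the tweet.

uk_population = 61_000_000        # assumed 2009 figure
people_per_camera = 14            # "1 CCTV camera for every 14 people"
cameras_per_crime_solved = 1_000  # "one crime solved for every 1,000 cameras" (London)

estimated_cameras = uk_population / people_per_camera
implied_crimes_solved = estimated_cameras / cameras_per_crime_solved

print(f"Estimated cameras: {estimated_cameras:,.0f}")          # ~4.4 million
print(f"Implied crimes solved: {implied_crimes_solved:,.0f}")  # ~4,400
```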
 
I remember watching Minority Report a few years ago and thinking that although the plot was a little contrived (another not quite successful translation of a Philip K Dick novel to the big screen), some of the gadgets were pretty cool.  It reminded me of Blade Runner (another PKD-based movie), where everyone wanted not only the flying cars but also those umbrellas that light up - which would be pretty handy in Seattle!

Now, it increasingly looks like some of that vision is becoming reality.  This article from the BBC website showcases some of the places where money and privacy collide - particularly around location-aware advertising.  I don't fully buy the argument that people would rather have ads that are relevant to them.  The statistics about how many adverts we are already exposed to are pretty incredible.  If possible, I'd like to be able to avoid ads that are targeted based on who I am and where I am, or at least be exposed to them on an opt-in basis only.

Here's the article.  Enjoy.

http://news.bbc.co.uk/2/hi/technology/8280564.stm
 
Looks like privacy does have a price - at least if you're David Letterman.  It appears that Letterman was the target of a blackmail plot, and as part of the fallout he has had to relinquish some of his privacy - specifically, the information that he had been sleeping with some of the women who worked on his show (although at this point it is not clear how many women or when this occurred).  Obviously this information would have come out when the alleged blackmailer ended up in court, and at least some credit goes to Letterman for having the chutzpah to break the news on his show this week.  The whole incident did leave me wondering, though: how much would have to be at stake for me to expose previously private information about myself that could threaten my job, my marriage and my reputation?  It's a question that I hope never to have to answer in reality.
 
Just a short post to get things started.  To what extent is publishing a blog about information privacy a contradiction in terms?  I had been wondering that for a while as I thought about dipping my toe into this.  My take is that it isn't - I intend this blog to be about the issues of the day and my view of them (and occasionally, the views of my friends and colleagues in the privacy space).  I'd be interested in your comments on my musings - let me know when you think I've missed the point or you want to take the other side of an issue.  Hopefully this will be a learning experience for me, and another channel for you to find out what's new and exciting in the world of privacy and information security.

To kick off, here's a link to a five-minute video that I saw for the first time a couple of weeks ago showing Microsoft's vision of possible technologies in use in 2019.  It is certainly impressive stuff, but there are at least a couple of technologies in here that could present significant privacy issues if not appropriately managed.  Have a look and see how many you can find.

http://www.istartedsomething.com/20090228/microsoft-office-labs-vision-2019-video/