got privacy?  Musings on the state of Privacy in a connected world.
 
January 28th is Data Privacy Day.  In a single generation, privacy concerns have shifted from worrying about who can see through your windows to who might be able to see your medical records on the Internet.  Data Privacy Day gives us a chance to reflect on these changes, and to think about what steps we can take to better control personal information and manage our privacy.

The fact is that information, from where you live to how you live, is now available to many companies that you do business with, or in some cases to everyone with an Internet connection.  This disclosure can provide many benefits, from customized offers based on purchase history to a free cup of coffee on your birthday.  Disclosure also carries risks.  Many of us have received notices telling us that our personal information has been lost or stolen, and although most of these instances do not lead to direct harm to us individually, they often cause concern.

Interestingly, the number one privacy concern that most people have is not related to the information that they choose to share.  Given the proliferation of social networking and other online activities, people are often comfortable (sometimes too comfortable) sharing information in the public (or semi-private) domain.  The real concern for many is how information that has been shared with trusted people or organizations will be managed and protected once it is out of our direct control.  Individuals can reduce this risk by limiting what they share, but we also need to take responsibility for holding organizations to their privacy policies and agreements; they are stewards of our information.

So, to mark Data Privacy Day, here are four simple things that you can do to improve your own privacy:

1.       Think before sharing your personal information.  For example, when a shop asks for your phone number at the checkout, ask why they need it.  Usually the request is made because they want a number that uniquely identifies you, not because they plan to call you.  So consider declining, or just choose a generic number that you can remember.  Similarly, if someone asks for your birthday, January 1st will often suffice.

2.       Always opt out.  Unlike Europe, where companies need your opt-in consent before sharing your data, in the U.S. we have to opt out whenever we are given the opportunity to restrict companies from sharing our information with other companies or partners.  It only takes a few seconds, and it limits what can be done with your information.  Find those boxes, and tick them.

3.       Treat social networks like coffee shops.  If you wouldn’t talk about it in a coffee shop, don’t talk about it on Facebook or Myspace.  If you wouldn’t shout it on a street corner, don’t share it on Twitter!  Once you have shared something electronically, it is out of your control, even if you think that only your friends will be able to see it.

4.       Maintain healthy skepticism.  Be suspicious of any request for personal information, even if it looks like it comes from a person or organization that you know.  Many people continue to be fooled by such requests.  It only takes a couple of minutes to make a call and confirm that a request is genuine before providing information that could be used to commit identity theft or cause you other problems.
 
If a tree falls in a virtual forest, does it make a virtual sound?  These days, a lot of trees are falling in a lot of virtual forests, and the noise is becoming louder in the real world.  University classes are now taught virtually, simulators replicate situations that would be expensive or dangerous in real life, and surgeons practice techniques virtually before they attempt the real thing.

As Virtual Worlds (VWs) have become more complex and functional, they have become more valuable, both to their users and to attackers.  Nearly half a million users spent money in Second Life, one of the most popular VWs, in August 2009, and more than 1,000 of those transactions exceeded $4,000.  The total GDP of Second Life was estimated at around $500m in 2007, larger than that of some small countries.

This increase in functionality and usage has also led to an increase in the number of people attacking these systems or the people using them.  While some early attacks focused on gaining control of in-world resources or disrupting the experience of other users, more recent attacks try to gain access to real-world resources and bank accounts.

While few corporations currently use VWs, that is likely to change over the next decade as the technology becomes more ubiquitous and gains enterprise-class features that encourage adoption.  This will increase the urgency of developing a system of controls to protect both users and the environments themselves.

So, as information security professionals, how can we help to make Virtual Worlds a better place to live and work?

To help secure VWs from attack, it helps to think of them as a connected system made up of components that can each be modeled.  This lets us understand what the attack surface looks like, identify the key vulnerabilities, and work out how they might be defended.

The major vulnerability points are:

·         Client Software.  Once code is installed on a client machine, it is vulnerable to manipulation, either by changing the code itself or by changing the way it interacts with the VW server.  This technique was used successfully to hack many online games, and it led to the development of programs such as PunkBuster, which control which other programs can run alongside the game client and perform checksums on key files to verify their integrity (a simple sketch of this kind of check follows this list).

·         The Virtual Environment.  Whether it’s performing a certain sequence of events that always produces game currency or manipulating aspects of the VW to operate outside the rules (essentially what the character Neo does in the film The Matrix), users will find ways of interacting with the world that its designers never predicted.  Safeguards therefore have to be designed to work whatever the interaction is.

·         The Users.  One of the most common attack vectors seen to date is exploiting trust between users for the benefit of an attacker.  Most users tend to assume that if they have been interacting with another character in a virtual world for some time, they can trust that person.  In reality, many of the cues that we rely on when interacting in person are masked when we interact with an avatar.  Both the appearance and the actions of an avatar may be designed to elicit certain responses, in the same way that con artists take on a persona to achieve their goals.
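By way of illustration, here is a minimal sketch of the checksum idea from the first point above: comparing key client files against a manifest of known-good SHA-256 digests before letting the client connect.  This is written in Python purely to show the concept; it is not how PunkBuster itself works, and the file names and digest values are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of key client files and their known-good SHA-256 digests.
# A real anti-cheat system would embed or fetch these values securely.
EXPECTED_HASHES = {
    "game_client.exe": "a3f5...",   # placeholder digest, not a real value
    "world_data.pak": "91bc...",    # placeholder digest, not a real value
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_client_files(install_dir: str) -> bool:
    """Check every key file against its expected digest; report any mismatch."""
    ok = True
    for name, expected in EXPECTED_HASHES.items():
        path = Path(install_dir) / name
        if not path.exists() or sha256_of(path) != expected:
            print(f"Integrity check failed for {name}")
            ok = False
    return ok

if __name__ == "__main__":
    if not verify_client_files("."):
        raise SystemExit("Client files appear to have been modified; aborting.")
```

Of course, on a fully compromised client the check itself can be tampered with, which is one reason such client-side checks are usually paired with server-side validation.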

Gaming VWs (e.g. World of Warcraft) are by their nature used by very competitive people who are tempted by anything that might give them an advantage.  Recent attacks have succeeded by promising to show players how to achieve or obtain certain things within the game world and then delivering malware that steals credentials or sets up backdoors on the victim’s machine.

While not a new phenomenon, attacks against VWs have been getting more attention as the technology becomes more mainstream and blended attacks result in real-world losses.  As security practitioners, we need to understand the benefits and risks related to the use of VWs in our environments and set boundaries appropriately.  It is likely that the use of VWs for business purposes will expand in the future, just as social networks have done.  Humans are social animals and these technologies provide new and fun ways to interact with our colleagues and clients.  We just need to be aware that a virtual bear could be hiding behind every virtual tree and act accordingly.
 


One of the major problems that organizations face when they’re reviewing their compliance program is understanding why they are doing what they are doing and how to reach a ‘steady state’ in which compliance becomes part of the scenery rather than an ongoing struggle.  For many organizations, that state seems to be receding ever further into the distance.  Each year brings more controls that need to be implemented and monitored.  Every gap analysis finds more gaps, and every remediation effort appears to bring little lasting relief.

Part of this issue is that, for most organizations, a ‘gap analysis’ is about the worst thing they could be doing.  A gap analysis frames the situation in a way that prejudices the outcome and rarely helps an organization get closer to a steady state of compliance.  Framing is a term from linguistics that describes how the choice of words activates certain emotions and thought patterns.  With a ‘gap analysis’ the framing works like this: gaps are bad; analyzing and fixing gaps is good; having no gaps is best of all.  In practice, however, there are always more gaps to be found: existing gaps recur in other forms, or auditors simply dig deeper to find smaller and smaller gaps.  Yet the framing insists that seeking to identify weak areas is good, that it shows we care about what is wrong, and that therefore we should keep doing gap analyses.

Even in organizations that are highly compliance-focused, this approach doesn’t make a lot of sense.  It produces a never-ending stream of ‘remediation activities’ and ‘refresh testing’ that keeps people employed and consultants in business, but it may or may not make the organization more secure or compliant.  And beyond a certain point, there is little value in being more compliant than is necessary to achieve a particular sign-off or to demonstrate due care should the organization be sued.