As Virtual Worlds (VWs) have become more complex and functional, they have become more valuable, both to their users and to attackers. Nearly half a million users spent money in Second Life, one of the most popular VWs, in August 2009, and over 1000 of those transactions exceeded $4000. The total GDP of Second Life was estimated at around $500m in 2007 – larger than that of some small countries.
This increase in functionality and usage has also led to an increase in the number of people attacking the system or the people using it. While some early attacks focused on gaining control of in-world resources or disrupting the experience of other users, more recent attacks try to gain access to real-world resources such as bank accounts.
While few corporations currently use VWs, it is likely that this will change over the next decade as they become more ubiquitous and gain enterprise-class features to encourage their adoption. This will increase the urgency of developing a system of controls to protect both users and the environments themselves.
So, as information security professionals, how can we help to make Virtual Worlds a better place to live and work?
To help secure VWs from attack, it helps to think about them as a connected system whose components can each be modeled. This lets us understand what the attack surface looks like, identify the key vulnerabilities, and consider how they might be defended.
The major vulnerability points are:
· Client Software. Once code is installed on a client machine, it is vulnerable to manipulation, either by changing the code itself or by changing the way it interacts with the VW server. This technique was used successfully to hack many online games, and led to the development of programs such as PunkBuster, which control which other programs can run alongside the game client and perform checksums on key files to verify their integrity.
· The Virtual Environment. Whether it’s repeating a sequence of events that always produces game currency, or manipulating aspects of the VW to operate outside its rules (basically what the character Neo does in the film The Matrix), users will interact with the world in ways its designers could never have predicted, so the designers have to build safeguards that will work whatever the interaction is.
· The Users. One of the most common attack vectors seen to date is to exploit trust between users to the benefit of an attacker. Most users tend to assume that if they have been interacting with another character in a virtual world for some time, they can trust them. In reality, many of the cues we get when interacting in person are masked when interacting with an avatar. Both the appearance and actions of an avatar may be designed to elicit certain responses, in the same way that con artists may take on a certain persona to achieve their goals.
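The file-integrity checks described in the client-software bullet above can be sketched simply: keep a manifest of known-good digests for key client files and flag any file that is missing or has changed. This is a minimal illustration of the idea, not how PunkBuster itself works; the file names and manifest format are invented for the example.

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest):
    """Compare each file against its known-good digest.

    Returns the list of paths that are missing or have been modified.
    """
    tampered = []
    for path, expected in manifest.items():
        if not os.path.exists(path) or sha256_of(path) != expected:
            tampered.append(path)
    return tampered

# Demonstration: build a tiny "client install", record digests, then tamper.
with tempfile.TemporaryDirectory() as root:
    game_file = os.path.join(root, "game.dat")
    with open(game_file, "wb") as f:
        f.write(b"original client data")

    manifest = {game_file: sha256_of(game_file)}
    print(verify_manifest(manifest))   # [] -- nothing modified yet

    with open(game_file, "wb") as f:   # simulate a client-side patch
        f.write(b"hacked client data")
    print(verify_manifest(manifest))   # the tampered file is flagged
```

In practice such checks must run continuously and be hardened against tampering themselves, since an attacker who controls the client machine can also target the checker.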
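One general form the safeguards in the virtual-environment bullet can take is to validate every state change on the server rather than trusting the client: whatever interaction produced a currency award, the server applies the same sanity limits. The class below is a hypothetical sketch; the limit values and names are illustrative and not drawn from any real VW.

```python
import time

class CurrencyGuard:
    """Server-side sanity check on in-world currency awards.

    Rejects any single award above MAX_AWARD and caps the total a
    character can earn within a sliding time window, regardless of
    which in-world interaction produced the award.
    """
    MAX_AWARD = 500          # illustrative limits, not from a real VW
    WINDOW_SECONDS = 3600
    WINDOW_CAP = 2000

    def __init__(self):
        self.history = {}    # character -> list of (timestamp, amount)

    def allow(self, character, amount, now=None):
        now = time.time() if now is None else now
        if amount <= 0 or amount > self.MAX_AWARD:
            return False
        # Keep only awards still inside the sliding window.
        recent = [(t, a) for t, a in self.history.get(character, [])
                  if now - t < self.WINDOW_SECONDS]
        if sum(a for _, a in recent) + amount > self.WINDOW_CAP:
            return False
        recent.append((now, amount))
        self.history[character] = recent
        return True

guard = CurrencyGuard()
print(guard.allow("neo", 100, now=0))    # True: within limits
print(guard.allow("neo", 9999, now=1))   # False: single award too large
```

The point is that the check depends only on the resulting state change, so it holds even for exploit sequences the designers never anticipated.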
Gaming VWs (e.g. World of Warcraft) are by their nature used by very competitive people who would be tempted by anything that might give them an advantage. Recent attacks have exploited this by promising to reveal how to achieve or obtain certain things within the game world, then delivering malware that steals credentials or sets up backdoors on the user’s machine.
While not a new phenomenon, attacks against VWs have been getting more attention as the technology becomes more mainstream and blended attacks result in real-world losses. As security practitioners, we need to understand the benefits and risks related to the use of VWs in our environments and set boundaries appropriately. It is likely that the use of VWs for business purposes will expand in the future, just as social networks have done. Humans are social animals and these technologies provide new and fun ways to interact with our colleagues and clients. We just need to be aware that a virtual bear could be hiding behind every virtual tree and act accordingly.