Ender’s Game: Prosocial Gaming (Part III)

Go to:   Part I – Video Game Compulsion,   Part II – Social Conditioning

Remember the games you used to play growing up? They seemed so innocent and fun, and then social politics gradually came into play, competition heightened, and soon enough you were left wondering whether you were still playing the same innocent game. That’s because games are often used as cover for the true intentions of their creators. Those intentions can be unsavory or sinister: psychologically observational in the literal sense, disciplinary through reconditioning and programming techniques, or simply a means of training.

Furthermore, if individuals feel a sense of freedom in games, so do groups of people. If one person innovates a practice, others are more likely to assess it fairly and perhaps even adopt it, simply because it does not matter. If a group of different people emerges, there is no particular reason for them to fear persecution as a result of their behavior. It is just a game, after all. Making something “fun” in the form of a game can therefore camouflage its sometimes cruel aspects, keeping participants engaged without moral questioning and, if it is challenging enough, even keeping them coming back for more.

At the same time, there is a strange paradox of isolation in digital game playing that becomes the first step in dehumanization. An individual can become starved of real human contact despite the increased opportunities and ways to connect, from sending and receiving text messages on a hand-held device, to emails, instant messages, and even electronic “pokes” on the Internet. Whether we see the faces behind the words or not, we must still remember that a real human being is directly involved in each of these interactions. When a person is connected to the world only through his or her iPhone, or only through status messages and news feeds on Facebook, or, in this case, through gaming, then he or she is not likely to be deeply engaged with the world.

A dangerous situation for our shared future.

In the Hollywood film Ender’s Game, isolation is a key factor in the development of the main character. Ender, a boy with special abilities, is enrolled in a military school where his training takes the form of a series of games. These games employ predictive and persuasive technologies that react to his psychological state – his thoughts and emotions. Ender quickly adapts and excels ahead of his classmates, while his teachers and trainers keep him isolated from family and friends and train him hard to keep him focused. Ender thinks he is playing a game; little does he know that with each game he plays he is actually engaged in battle with the alien species, the “buggers”. What he believes is a final simulation is in fact an epic battle in which Ender annihilates the species – he is shocked and angered when he finally learns he has committed an act of genocide. Filled with guilt and grief, he visits the desolated colony only to find one last surviving bugger, a queen carrying an egg. Ender promises to take it off the planet and find it a new home where it can repopulate its species.

What I find striking is Ender’s sense of empathy and compassion, which stands in stark contrast to what his training and teachers tell him about the “enemy”. Somehow the game was able to detect these “feminine” receptive qualities and adopt them into its programming, which, in the end, enabled Ender to connect with the alien and moved him to promise that he would find it a new home. This aspect of gaming is called pro-social gaming. You’ll find pro-social gaming in developmental activities, mostly for children and young adults, teaching our next generation about empathy and compassion through game playing rather than violence.

Empathy in Computing

Parents, with good cause, have generally been concerned about the level of violence found in video games. Like television, gaming has become the new babysitter, occupying hours and hours of our little ones’ minds. Given the amount of time kids spend playing games, if only we could make games educational, to help equip them with the soft skills needed to excel. Although the relationship between violent games and behavior has received considerable research and media attention, an increasing number of developers and researchers are exploring the impact of games designed to support positive qualities and pro-social behavior.

Studies in psychology have already combined the use of well-being measures with digital technologies for the delivery of Internet-based “interventions” (interventions are therapeutic or promotional efforts to improve mental or physical health). The Journal of Medical Internet Research and Cyberpsychology, Behavior, and Social Networking are two of the most highly ranked journals publishing in this area. IEEE Transactions on Affective Computing also publishes research on the emotional impact of computers, but from an engineering perspective. Psychology research continues to uncover strategies empirically shown to lead to increases in long-term well-being.

To this end, designers, programmers, engineers, and researchers seek to incorporate principles – still pending evaluation at the time of this writing – that are founded in current practice and related empathy research:

    • Induce empathy from the start. Belman and Flanagan (2010) state that “players are likely to empathize only when they make an intentional effort to do so as the game begins. The game may explicitly ask players to empathize or it may more subtly encourage them to take on a focused empathetic posture. However, without some kind of effective empathy induction at the outset, most people will play ‘unempathetically.’”

    • Recommend actions. Belman and Flanagan suggest designers give players specific suggestions as to actions they can take to address issues represented in the game. They speculate that empowering players to take action may help prevent the negative consequences associated with empathetic distress. This also relates back to the connection between low coping ability and empathetic distress. Based on these findings, Jennifer Goetz, Dacher Keltner, and Emiliana Simon-Thomas (2010) speculate that “variables that enhance a sense of coping should make one more likely to feel compassion than distress.”

    • Design for cognitive and/or affective empathy as appropriate. If desired outcomes don’t require significant changes in player beliefs, then a “short burst of emotional empathy” can work well. However, when deeper changes in thinking are required, “the game should integrate both cognitive and emotional empathy.”

    • Emphasize similarity, but with delicacy. Belman and Flanagan suggest that designers “emphasize points of similarity between the player and the people or groups with whom she is supposed to empathize, but beware of provoking defensive avoidance.”

These games target a variety of platforms, from handheld devices and console games to mobile and computer-based games. Herotopia, for example, is an app that allows kids to learn about diverse cultures and lifestyles by virtually visiting other kids’ homes, and Mission US: Flight to Freedom is a computer-simulated game in which players experience the life of a slave girl in pre-Civil War America. The diversity and creativity in these early examples are inspiring, and games for empathy are by no means restricted to children’s titles.

Games have some magical powers when it comes to supporting empathy. They have the capacity to provide us with “firsthand” (even embodied) experience of scenarios that would otherwise remain totally foreign to us. As vehicles for role-playing, they allow us to “walk a mile in another man’s shoes” as close to literally as possible. Combine these affordances with the growing work in games for social good, and a new era of empathy-promoting games becomes visible on the horizon. For example, browse the selection at GamesforChange.org and you’ll find that making social change requires empathy and that many of these games are designed to promote it.

Belman and Flanagan (2010) speculate on a state of gameplay they call “empathetic play,” in which “players intentionally try to infer the thoughts and feelings of people or groups represented in the games (cognitive empathy), and/or they prepare themselves for an emotional response, for example by looking for similarities between themselves and characters in games (emotional empathy).” They further speculate that games for empathy must deliberately support the player in entering a state of empathetic play, and that it should not be assumed that the content alone is enough to elicit empathy as intended. This recommendation is formulated as one of their four principles, drawn from their experience working with designers of “games for good”.

Machine Learning

Computers themselves demonstrate a total lack of empathy. We get not so much as an apologetic nod when they crash or lose days of hard work. Although designers have become more adept at hiding this lack of empathy through, for example, more creatively written error messages and elegant interface interactions, the reality remains that our personal computers currently have no feminine receptive abilities to understand what we are feeling or to react appropriately. But times are changing. Developers are now using the principles of persuasive and predictive technologies in machine learning.

Machine learning is a branch of artificial intelligence – a masculine construct, in the sense explained below – in which systems are designed to learn from data, in a manner of being trained. These systems, which can learn and improve over time, are used to predict the outcomes of new questions based on previous learning. Machine learning runs on algorithms, and a variety of algorithm types can be used to make predictions; the required output decides which to use. These algorithms characteristically fall into one of two types of learning: supervised and unsupervised.

Supervised learning refers to working with a set of labeled training data: for every example in the training data, you have an input object and an output object. At the opposite end of the spectrum is unsupervised learning, where you let the algorithm find hidden patterns in a mass of data. With unsupervised learning there is no right or wrong answer; it’s just a case of running the machine learning algorithm and seeing what patterns and outcomes occur.
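To make the distinction concrete, here is a minimal, self-contained sketch – all data and names are invented for illustration. A nearest-neighbour predictor learns from labeled examples, while a tiny one-dimensional k-means discovers clusters with no labels at all:

```python
# Supervised: labeled training data pairs each input with a known output.
training = [(1.0, "low"), (1.2, "low"), (4.8, "high"), (5.1, "high")]

def predict(x):
    # Nearest-neighbour lookup: the labels in the training set
    # drive every prediction.
    return min(training, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: no labels; a tiny 1-D k-means just looks for structure.
def kmeans_1d(points, k=2, iters=10):
    centroids = points[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

print(predict(4.5))                     # -> high
print(kmeans_1d([1.0, 1.2, 4.8, 5.1])) # two clusters, near 1.1 and 4.95
```

The supervised predictor can only echo labels it was shown; the clustering pass invents its own groupings, which is exactly why the text says unsupervised learning has “no right or wrong answer.”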

A side note: I am introducing early in this article the notion of masculine and feminine principles in the computing/digitized world, so I implore you to refrain from making gender associations when I use these words. I refer to artificial intelligence as masculine because it is linear, active, disciplined, formulaic, mathematical, robotic, and unsympathetic, with no feelings at all; artificial consciousness, by contrast, relies on the feminine quality of “receptivity” (versus the masculine “active”) expressed as empathy and compassion. What we are striving to find, as we always are when speaking of masculine/feminine energies, is an “active-receptivity” – the ultimate balance of these two energies that enables one to be adaptive (receptive) while moving forward (active) – as, in this case, with unsupervised machine learning.

Unsupervised learning might be more a case of data mining than of actual learning. If you’re looking at clustering data, there’s a good chance you’re going to spend a lot of time with unsupervised learning. It is therefore important to know the question you are trying to answer before starting any data project. The question is key here, and it starts with open discussions and thorough planning.

For example, in his book Machine Learning, author Jason Bell describes the use of machine learning in gaming: “Microsoft has spent years studying the data from Halo 3 to see how players perform on certain levels and also to figure out when players are using cheats. Fixes have been created based on the analysis of data coming back from the consoles… [Consider Microsoft’s] driving game Forza Motorsport. When you first play the game, it knows nothing about your driving style. Over a period of practice laps the system learns your style consistency, exit speeds on corners, and your positioning on the track. The sampling happens over three laps, which is enough time to see how your profile behaves. As time progresses the system continues to learn from your driving patterns. After you’ve let the game learn your driving style, the game opens up new levels and lets you compete with other drivers and even your friends.”
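The lap-sampling idea Bell describes can be sketched in a few lines. This is a hypothetical toy, not Forza’s actual system; the class name, the three-lap threshold, and the telemetry values are all invented for illustration:

```python
from statistics import mean, pstdev

class DriverProfile:
    """Toy driver profile learned from lap telemetry."""
    SAMPLE_LAPS = 3  # laps observed before the profile is considered usable

    def __init__(self):
        self.corner_exit_speeds = []  # km/h, one reading per lap

    def observe_lap(self, exit_speed):
        # The profile keeps learning as every new lap arrives.
        self.corner_exit_speeds.append(exit_speed)

    @property
    def ready(self):
        return len(self.corner_exit_speeds) >= self.SAMPLE_LAPS

    def summary(self):
        speeds = self.corner_exit_speeds
        return {
            "avg_exit_speed": mean(speeds),
            "consistency": pstdev(speeds),  # lower = more consistent driver
        }

profile = DriverProfile()
for speed in (92.0, 95.0, 98.0):  # three practice laps
    profile.observe_lap(speed)

print(profile.ready)      # -> True
print(profile.summary())  # avg 95.0 km/h, spread ~2.45
```

The design choice mirrors the quote: nothing is known at first, a short sampling window establishes a baseline, and the profile keeps updating as play continues.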

“…It’s still the early days of game companies putting machine learning into infrastructure to make the games better. With more and more games appearing on small devices, such as those on the iOS and Android platforms, the real learning is in how to make players come back and play more and more. Analysis can be performed on the “stickiness” of the game – do players return to play again, or do they drop off over a period of time in favor of something else? Ultimately there’s a trade-off between the level of machine learning and gaming performance, especially on smaller devices. Higher levels of machine learning require more memory within the device. Sometimes you have to factor in the limit of what you can learn from within the game.”
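“Stickiness” analysis of this kind is, at its simplest, cohort retention: of the players active on a given day, what fraction came back N days later? A minimal sketch with invented session data:

```python
from datetime import date, timedelta

# Invented session log: player -> dates on which they played.
sessions = {
    "alice": [date(2024, 1, 1), date(2024, 1, 2), date(2024, 1, 8)],
    "bob":   [date(2024, 1, 1)],                    # dropped off after day one
    "carol": [date(2024, 1, 1), date(2024, 1, 8)],
}

def retention(sessions, start, day_n):
    """Fraction of players active on `start` who returned day_n days later."""
    cohort = [p for p, days in sessions.items() if start in days]
    target = start + timedelta(days=day_n)
    returned = [p for p in cohort if target in sessions[p]]
    return len(returned) / len(cohort)

print(round(retention(sessions, date(2024, 1, 1), 1), 2))  # -> 0.33
print(round(retention(sessions, date(2024, 1, 1), 7), 2))  # -> 0.67
```

Day-1 and day-7 retention like this are the standard first look at whether players “return to play again or drop off over a period of time.”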

How do you get data? – Google’s Database of Human Intent

In the book Ch@nge: How the Internet Is Changing Our Lives, contributing author Michael Nielsen quotes Google CEO Eric Schmidt making a remarkable statement at a media event in Abu Dhabi: “…one day we had a conversation where we figured we could just use Google’s data about its users to predict the stock market. And then we decided it was illegal. So we stopped doing that.”

The book also notes journalist John Battelle (2010) describing Google as “the database of [human] intentions”. Battelle noticed that the search queries entered into Google express human needs and desires. By storing all those queries – more than a trillion a year – Google can build up a database of human intent. That knowledge of intent then makes it possible for Google to predict the movement of the stock market (and much else). Imagine how much information about human intent a researcher could collect from the gaming experience. Of course, neither Google nor anyone else has a complete database of human intentions. But part of the power of Battelle’s phrase is that it suggests the ultimate future of search is to connect directly to users’ brains, ultimately to understand how people think.

“Consider the following examples,” Nielsen continues. “Facebook CEO Mark Zuckerberg has used data to predict which Facebook users will start relationships, researchers have used data from Twitter to forecast box office revenue for movies, and Google has used search data to track influenza outbreaks around the world. These few examples are merely the tip of a much larger iceberg; with the right infrastructure, data can be converted into knowledge, often in surprising ways.”
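At its most naive, the flu-tracking example reduces to counting symptom-related queries per day and watching for spikes. A toy sketch (every query and term here is invented, and real systems were far more sophisticated than keyword counting):

```python
# Invented query log: (day, query) pairs.
query_log = [
    ("2024-01-01", "flu symptoms"), ("2024-01-01", "pizza near me"),
    ("2024-01-02", "flu symptoms"), ("2024-01-02", "fever remedy"),
    ("2024-01-02", "flu symptoms"),
]
FLU_TERMS = {"flu symptoms", "fever remedy"}

def daily_signal(log, terms):
    """Count matching queries per day; a rising count is the 'outbreak' signal."""
    counts = {}
    for day, query in log:
        if query in terms:
            counts[day] = counts.get(day, 0) + 1
    return counts

print(daily_signal(query_log, FLU_TERMS))  # -> {'2024-01-01': 1, '2024-01-02': 3}
```

The jump from one to three matching queries is the kind of aggregate signal that makes a query log a “database of intentions.”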

Wikipedia is impressive in size, with more than 4 million articles in the English-language edition. The Wikipedia database contains more than 40 gigabytes of data. But while that sounds enormous, consider that Google routinely works with data at a petabyte scale – a million gigabytes! By comparison, Wikipedia is minuscule, and it’s easy to see why there’s this difference. What the Wikimedia Foundation considers “the sum of all knowledge” is extremely narrow compared to the range of data about the world that Google finds useful – everything from scans of books to the data being generated by Google’s driverless cars (each generates nearly a gigabyte per second about its environment). And so Google is creating a far more comprehensive database of knowledge. [Ch@nge, Big Data, p. 83]


Therefore, the database of human intentions is a small part of a much bigger vision: a database containing all the world’s knowledge. This idea is nothing new; it goes back to the early days of modern computing, with people such as Arthur C. Clarke and H.G. Wells exploring visions of a “world brain”. What’s changed recently is that a small number of technology companies are engaged in the early stages of serious efforts to build databases that really will contain much of human knowledge. “Think, for example, of the way Facebook has mapped out the social connections between more than 1 billion people. Or the way Wolfram Research has integrated massive amounts of knowledge about mathematics and the natural and social sciences into Wolfram Alpha. Or Google’s efforts to make Google Maps the most detailed map of the world ever constructed, and Google Books, which aspires to digitize all the books (in all languages) in the world. Building a database containing all the world’s knowledge has become profitable.”

And we’ve only begun to scratch the surface.

Persuasive and Predictive Technologies

In the book Positive Computing: Technology for Well-being and Human Potential, authors Rafael A. Calvo and Dorian Peters write: “Researchers focusing on using technology for well-being-related behavior change draw on various behavior theories… Some work in this area, especially within the category of persuasive technology, can slide into a rhetoric of designer control and is happily applied by business to increase profitable behavior. Thus, implications for unethical use follow closely behind any discussion of these methods. As such, researchers are working to outline ethical guidelines. For positive computing, part of addressing misuse will emerge from the field’s definitive aim to support psychological well-being and the imperative to provide evidence (via established multidimensional measures) for that claim in practice.” [Positive Computing]

“In addition to ethical concerns of user autonomy, we will need to join those researchers in cognitive behavioral therapy (CBT) who are challenging the quick-fix thinking that neglects complex, difficult, and long-term change. Martin A. Siegel and Jordan Beck (2014) discuss behavior change [re: behavioral addictions] technology for quality-of-life improvement, advocating for greater acknowledgment that much change is slow and occurs within systems that are complex. They provide the groundwork for an ongoing theory and practice of interaction design for slow change, which they define as “attitudinal and behavioral changes that are difficult to initiate and sustain,” bringing to light the ethical dilemmas, impacts of timescale, and value of systems thinking inherent to slow-change problems.” [Positive Computing]

In August 2014, the World Health Organization (WHO) hosted a conference in Tokyo, Japan, on the Public Health Implications of Excessive Use of the Internet, Computers, Smartphones and Similar Electronic Devices. The meeting report noted:

“The use of the Internet, computers, mobile phones, smartphones and other electronic devices has dramatically increased over the last decades in all parts of the world. This may promote public health with respect to the provision of information, facilitation of pro-social activities and other factors. However, this increase is also associated with documented cases of excessive use that warrant consideration.

Given that the patterns and extent of use vary widely (at individual and population levels), there is continuing debate on how best to define such excessive use from a public health perspective. Currently, behavioral addictions are usually characterized by an often irresistible urge, impulse or drive to repeatedly engage in an activity (non-substance use) and an inability to reduce or cease this behavior (loss of control) despite serious negative consequences to the person’s physical, mental, social and/or financial well-being. Within this context are often considered behavioral disorders or excessive behaviors associated with gambling, viewing pornography, video gaming, internet-based single-player and multi-player gaming, excessive use of various social media, smartphone applications (apps) and similar electronic devices.”

What are the various uses of predictive, persuasive, and cognitive-behavioral-therapy (CBT) technologies, and, more importantly, how is the raw data extracted? In terms of how the raw material is harvested, social media sites have become an excellent source of information through a technique called, for obvious reasons, “exploration/exploitation”. It is a data mining extraction process that gives predictive and persuasive tools the raw material needed for profiling.
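In machine learning proper, “exploration/exploitation” usually names the trade-off in sequential decision-making: keep choosing the option that has paid off so far, or try others to learn more. A minimal epsilon-greedy sketch of that trade-off (the payout rates and epsilon value are invented; this illustrates the general technique, not any specific profiling system):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

true_rates = [0.2, 0.5, 0.8]  # hidden payout rate of each option
estimates  = [0.0, 0.0, 0.0]  # learned estimates of those rates
pulls      = [0, 0, 0]
EPSILON    = 0.1              # 10% of choices explore at random

for _ in range(2000):
    if random.random() < EPSILON:
        arm = random.randrange(3)              # explore: try anything
    else:
        arm = estimates.index(max(estimates))  # exploit: best known so far
    reward = 1 if random.random() < true_rates[arm] else 0
    pulls[arm] += 1
    # incremental running average of observed rewards for this arm
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]

print(pulls.index(max(pulls)))  # the highest-paying arm ends up chosen most
```

The occasional random exploration is what lets the system discover the best option; pure exploitation could lock onto an early, mediocre choice forever.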

“Emotions research focuses on understanding the social, psychological, and physiological processes involved in emotional experience; the social construction of emotions (including historically and cross-culturally); the role of emotions in social life; the management of emotions in social interactions; and the role of emotions in the research itself. Researchers in this sub-field collect data on emotions via methods that may closely resemble those used in other branches of sociology and social psychology, and their analyses entertain a variety of qualitative and quantitative techniques.”

A few approaches bear special mention, but for the purpose of this article we will focus on a technique known as the social engineering attack.

Social Engineering Attacks

“Social engineering is the art of gaining information and restricted privileges by directly exploiting the human psychology of those in possession of these resources. In particular, social engineers are frequently employed to gain access to computer systems, networks, and confidential data. A competent social engineer customizes each attack to a specific person, taking advantage of the person’s culture, knowledge, training, state of mind, beliefs, and biases. An exhaustive list of all possible exploits is infinite…”

I call it the three “E”s (or E to the power of 3): explore, exploit, and extract. Locate and “explore” the full range of the subject’s emotions, strengths, and weaknesses; then “exploit” them; and whatever you get in terms of reaction, you “extract” as data. It is an emotionally exhausting technique that relies heavily on, and works best when, the subject is in complete social and economic isolation and exclusion (including sleep and food deprivation). This is almost necessary to effectively employ the various manipulation and brainwashing tactics meant to exhaust the subject’s emotions and perception of reality; the technique would not work as effectively if the subject had the social and economic means to seek support and stay anchored in reality.

Ultimately, the objective is to confuse the subject by using the power of suggestion and brainwashing techniques meant to induce the subject to question his or her reality, beliefs (religious or otherwise), romantic inclinations, sexual identity, self-esteem, and social standing, thus manufacturing a sense of confusion and hopelessness almost to the point of suicide. Depending on its capabilities, and in addition to trial SE attacks such as lying, befriending, blackmailing, hypnotizing, and forming romantic relationships, the AI (machine learning) could utilize a number of more advanced strategies.

This is all part of a process designed to exploit and wear down the subject’s psychological and emotional defenses so that they become receptive and malleable, whether as a form of [re]conditioning and/or [de]radicalization “treatment” or to cause psychological harm in the hope of destabilizing the subject’s mental state – a “dumbing down” process designed to make them passive and controllable, stripped of the personality and identity features that make them the unique individual they are. All things considered, it is easy to see how the mental state of a person could be stressed to the point of instability. For example, a religious guard could be informed of all the (unknown to the guard) contradictions in the main text of the subject’s religion, causing the individual to question personal beliefs and the purpose of life.

These techniques have no doubt been developed from effective harassment and cyberbullying techniques, in which case the perpetrators are motivated by hate expressed through racism, bigotry, misogyny, sexism, ableism, classism, jealousy, revenge, and religious intolerance, to name a few; however, using these techniques in a clinical setting – for whatever reason – is just as effective. Technically speaking, how is this done?

In a previous article discussing the manipulation of the Brexit vote, it was identified that Facebook enabled a data company called Cambridge Analytica to target individuals and collect psychological insights from its vast data set. The company harvested Facebook data (legally) for “research purposes” and published pioneering peer-reviewed work about determining personality traits, political partisanship, sexuality, and much more from people’s Facebook “likes”. The goal was to capture citizens’ browsing history en masse, record phone conversations, and apply natural language processing to the recorded voice data to construct a national police database, complete with scores for each citizen on their propensity to commit a crime.

Tamsin Shaw, an associate professor of philosophy at New York University, has researched the US military’s funding and use of psychological research for use in torture. “The capacity for this science to be used to manipulate emotions is very well established. This is military-funded technology that has been harnessed by a global plutocracy using it to sway elections in ways that people can’t even see, and don’t even realize is happening to them,” she says. “It’s about exploiting existing phenomena like nationalism and then using it to manipulate people at the margins.”

The following is written in terms of social media attacks on commercial enterprises, but the same could be said of similar tactics used against individuals, so I have taken the liberty of altering the text to suit the circumstances of an individual target. Similar techniques could be used for private jails – a dangerous precedent for those who cannot lawfully prosecute but would like to administer discipline.

Adversaries traditionally target an individual’s network using two phases: reconnaissance and exploitation. Reconnaissance involves foot-printing (for example, gathering information about an individual’s IP address and, if they have any, domains), scanning (identifying what system is using the IP), and enumeration (identifying the services and ports available on these target systems). When attackers use social media, their strategy is similar, but the methods of attack are quite different. In social media, targeting an individual involves foot-printing, monitoring, and profiling, impersonating or hijacking, and, finally, attacking.

First, the adversary seeks to establish a fake social media trust network to monitor and profile activities, behaviors, and interests across the social media account(s) mapped to the individual. Keywords, hashtags, and @mentions are also analyzed and used to establish trust when communicating from an impersonator account. Using relevant lingo establishes credibility, makes others less aware of an impersonator, and so makes them more vulnerable to engaging in conversations.

Now that the individual has been foot-printed, monitored, and profiled across the social networks, the adversary can set up one or more impersonating accounts. Impersonation is one of the most common techniques used by attackers on social media, particularly when targeting enterprises. Our sample of approximately 100 customers shows that more than 1,000 impersonation accounts are created weekly by perpetrators. By impersonating a key executive, an attacker can quickly establish trust and befriend other employees. The adversary may use the actual profile image and bio from the legitimate account to build the impersonation account.

Hijacking an account is more difficult than impersonating it but yields quicker results if successful. The most effective social media attacks on an individual occur when an attacker is successful in finding a method to hijack an account and use that to further infiltrate a network. Numerous social network data dumps have made account hijacking much easier.

Whether trust is established through an impersonation account or a hijacked account, the adversary begins the attack by sending a direct message containing malware or a phishing link to harvest credentials or infect a machine inside the network. This can be difficult to detect, as many of the social networks use URL shorteners that obfuscate the actual URL and may include multiple redirects. At this point, the internal beachhead has been established, the network has been compromised, and the adversary can expand their infiltration of the network. (source)
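From the defensive side, the impersonation step above suggests a simple heuristic: flag accounts whose display name closely mimics a known account while the handle differs. A minimal sketch – the accounts, handles, and 0.85 threshold are all invented for illustration:

```python
from difflib import SequenceMatcher

known = {"handle": "jsmith_ceo", "name": "Jane Smith"}  # the real account

candidates = [
    {"handle": "jsmith_ceo",  "name": "Jane Smith"},    # legitimate
    {"handle": "jsmith.ceo1", "name": "Jane Smith"},    # likely impersonator
    {"handle": "gamer4242",   "name": "Casual Gamer"},  # unrelated
]

def looks_like_impersonator(account, known, threshold=0.85):
    if account["handle"] == known["handle"]:
        return False  # the legitimate account itself
    # High display-name similarity with a different handle is the red flag.
    similarity = SequenceMatcher(None, account["name"].lower(),
                                 known["name"].lower()).ratio()
    return similarity >= threshold

flagged = [a["handle"] for a in candidates if looks_like_impersonator(a, known)]
print(flagged)  # -> ['jsmith.ceo1']
```

Real detection systems also weigh profile photos, bios, account age, and follower overlap, but the name-similarity signal is the core of the idea.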

Monitoring

There is a scene in Ender’s Game where the psychological assessor (played by Viola Davis) argues with the presiding officer (Harrison Ford). As Ender plays the games, he reaches new heights and levels; even they can’t understand what’s happening. Since the program reacts and adapts to his emotional and psychological state, she worries about what the game playing is doing to Ender’s overall state of mind and being. A moral conflict develops between the commander and the psych officer, which results in her promptly resigning. In reality, as such a game simulation is performed, a certified psych professional and an engineer should be working together, observing the game design and alerting engineers to possible biases and human rights infringements (vis-à-vis the Nuremberg Code, the Belmont Report, and the Declaration of Helsinki) that might have crept into the design. We’ll approach this subject later in the article.

No doubt these data harvesting techniques aren’t being used only for military or political purposes. As predictive tools, they leave the door wide open to a variety of uses, and to all sorts of potential advances (and abuses), especially if they are not administered and monitored by a certified psychological professional. The professional’s certification and insurance coverage is mainly required for the pre-assessment process, which must answer, depending on whether the purpose is treatment or “research” testing: Is the subject eligible for the treatment? Do they even need “treatment” in the first place? What is the right course of treatment? For testing, what are the ethical implications of the assignment? Will the subsequent tests damage the subject’s state of mind, temporarily or permanently? What is the recovery process? Who is liable?

As with court judges, a certified professional (preferably a psychological assessor with a background in engineering) should have no attachment to the individual, nor any attachment to the outcome of the individual’s assessment, to prevent bias. The professional’s opinion holds weight in a court of law, hence the $1 million insurance coverage demanded of industry professionals. At this point it is not advised to allow AI technology to assume these responsibilities, as the engineers’ biases are commonly found in the AI’s programming. These [un]conscious biases could include, but are not limited to: racism, bigotry, misogyny, sexism, ableism, classism, jealousy, revenge, and religious intolerance. As such, this can be potentially dangerous work, and damaging to the research subject/gamer.

Games as Sites of Innovation

Games are commonly used for a variety of reasons: research, discipline, training, and simply for the pure fun of socializing with your peers. With an Internet game, the scope of such innovation groups is very large. In terms of social interaction, a large, persistent game is a very good site for cultural innovation and, despite what I’ve listed above, is generally a safe space. Such games can attract thousands upon thousands of people from around the globe, and they can persist safely for many years. Generally speaking, all agree that a game is just a game and that nothing in the game really matters, so people feel freer to experiment and express themselves in new ways. In a large game, such a group could expose many millions of others to its behavior. All of this makes adoption by the real world more likely.

People who make Internet games today have the power to change our cultural world. Perhaps they will create a small change that seeps into our daily lives and changes our expectations slowly and subtly until one day, decades later, we suddenly realize that our culture has changed forever. Or perhaps a designer will invent a very new and very wonderful world that solves many of our problems and helps us live as people ought to live. Who will make these wonderful new worlds? Perhaps game designers; perhaps elite creators in other fields. But we can expect ordinary people to come to the fore eventually. An isolated genius, probably already alive today, will design the game that changes our lives forever: “As empowered by the Internet, games today are a demonstrated infrastructure for that new City on a Hill. Many such cities will be built, and some will directly point the way to our future.”

Go to:   Part I – Video Game Compulsion,   Part II – Social Conditioning