Well, it's been a thoroughly engaging few months for me, covering a whole host of topics and areas related not just to gaming, but also to internet technologies and functions, online collaboration, and to literacy and learning as a whole. I can honestly say that I have had no greater desire to learn about and understand a subject - perhaps because it interests me greatly, perhaps because of the enthusiasm of the other members and moderators of the course, or perhaps because of the intentional design of the course to encourage these features - but probably a combination of all three. Here I hope to review my original aims for the course and some persistent themes that have arisen in my weblog - I'll finish briefly with some possible future directions.
In terms of original aims for the course, I was particularly interested in how computer games might be used alongside f2f group teaching/training. I had initial reservations about people, especially children, spending so much time physically isolated from others in front of a screen, and wanted to find ways to combine digital technologies with f2f group activities. I was thoroughly pleased, then, to be introduced to the concept of ARGs (alternate reality games), which integrate online collaboration and often group activities too. Games like The Go Game, Augmented Reality Games, 'Big Games' and even the concept of Smart Mobs all combine technological features with personal collaboration and learning. The step that ARGs take from full fantasy to semi-real, semi-fantasy ('real-play' as opposed to 'role-play', in McGonigal's words) is for me a significant motivation for exploring this field further, along with the scope it has for engaging learners in collaborative ways.
This is not to say that I haven't also learned to appreciate the value of single-player, screen-based games. Through a focussed enquiry into the game *Civilisation IV*, and the writings of Squire and Gee in particular, I have come to realise the significant potential that video games, especially simulation games, have for providing learners with a fully immersive environment in which to learn about a subject. In such games players are free to make choices, make mistakes, and act as if they really *are* the game's protagonist, encouraging meaningful, embodied action of direct relevance to the player. This is, in my view, an enormous step forward from a simple text-based pedagogy. The thoughts and experiences of Sasha Barab and David Shaffer were also very helpful in coming to appreciate just how valuable learning in an immersive game environment might be, as they both express eloquently how a game with well-designed learning intentions might not just teach a player facts about a subject, but require the player to be a *practitioner* of that subject. [This NML webcast was fascinating in this respect.]
In terms of continuing themes, one that has consistently arisen, both in this course and the previous one (IDEL), is the Digital Natives vs Digital Immigrants debate. Through this study of games I have come to realise that this gulf is perhaps even wider than I had first appreciated, a point relentlessly driven home by Gee's analysis of where schools are failing to engage learners in the way that good video games do (see previous post). However, as I argued previously, I believe that using games and digital simulations may be one way of diminishing this divide. Utilising the inherent appeal of good, well-designed games can inspire students to learn using technology they are accustomed to, while also introducing older-generation teachers to digital-age teaching technology.
Another theme that has persisted throughout this course is what constitutes a *good* game. From readings by traditional play theorists to those by modern-day digital game designers, it is clear that the features that make a game enjoyable and fun to play have not suddenly changed in the digital age. Elements of fantasy and challenge, while not always necessary, seem in most cases to be common factors in a game's appeal. Likewise, ensuring that the challenge is neither too easy nor too hard seems to be an enduring problem for game designers, very much as it is for teachers planning their lessons.
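As an aside, it struck me that this balancing act is something a game can do *automatically* in a way a lesson plan cannot. Here's a minimal sketch, in Python, of the kind of success-rate heuristic a designer might use to keep the challenge in that sweet spot - entirely illustrative, with all names and thresholds invented for the example rather than taken from any particular game:

```python
# Hypothetical sketch of dynamic difficulty adjustment: nudge the
# challenge level so the player's recent success rate hovers near a
# target band - neither trivially easy nor punishingly hard.
from collections import deque

class DifficultyTuner:
    def __init__(self, target=0.7, window=10, step=0.1):
        self.target = target                 # desired success rate (~70%)
        self.recent = deque(maxlen=window)   # rolling record of outcomes
        self.level = 1.0                     # abstract difficulty multiplier
        self.step = step

    def record(self, succeeded):
        """Log one attempt and adjust the difficulty for the next one."""
        self.recent.append(1 if succeeded else 0)
        rate = sum(self.recent) / len(self.recent)
        if rate > self.target + 0.1:         # too easy: raise the bar
            self.level += self.step
        elif rate < self.target - 0.1:       # too hard: ease off
            self.level = max(0.1, self.level - self.step)
        return self.level
```

A teacher does much the same thing by instinct when re-pitching a task mid-lesson; the difference is that the game can re-pitch after every single attempt.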
As a meta-reflection on the game-centred design of this particular course, it has been fascinating for me to actually experience the potential of GBL by carrying out game-like tasks using Second Life, Google Earth and WebQuests. It was interesting to experience directly the motivational draw that such tasks inspired in me, and most of the time I forgot that they were actually *learning* tasks. The discussions that ensued from these tasks were on balance highly entertaining, and rather than distracting from the intended learning, actually encouraged (for me anyway) further subject enquiry. As a teaching model this is potentially extremely powerful.
I'd like to reflect finally on the game design task, in which Andrew Sides and I created the ARG *Tomorrow Calling* (which should be ready for 'launching' soon). It was, and still is (as I had feared it might be), a challenging and painfully creative, yet hugely rewarding experience. Here's a list of things I have learned in the process: the importance of narrative structure and character development; website development, design and uploading; code encryption and challenge design; audio recording and cross-media integration; the value of subject-related forums (affinity groups); the features of a good/bad ARG (I hope); and effective and not-so-effective ways to produce work and communicate with a co-worker online. Having a game designed and (almost) produced by the end of the course was not an outcome I had originally expected.
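On the 'code encryption' point, much of the craft lies in pitching a cipher that players can crack with a little persistence. Here's a minimal illustration of the sort of simple shift cipher ARG puzzles often hide in page text - the clue and shift value here are invented for the example, not taken from *Tomorrow Calling*:

```python
# Illustrative only: a Caesar (shift) cipher of the kind often buried
# in ARG web pages, where recovering the shift unlocks the next clue.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)   # leave spaces and punctuation untouched
    return ''.join(out)

clue = caesar("Meet at the old observatory", 7)  # encode for the page
print(clue)              # Tlla ha aol vsk vizlychavyf
print(caesar(clue, -7))  # players reverse the shift to decode
```

The design lesson, at least as I experienced it, was less the maths than the pacing: each solved cipher should reward the player with just enough story to pull them on to the next.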
In terms of future directions, I am now hugely inspired to further my research into, and application of, GBL and, as I mentioned above, I'm particularly drawn to the potential of ARGs and cross-media initiatives for learning and teaching (Project NML, for example, includes some innovative projects). I get the impression (though this is likely to be biased, of course) that the field of GBL will grow over the next few years, and for me this is one of the more exciting and potentially important directions in which the wider field of e-learning is heading.
Friday, 6 April 2007
Level 10 - Why and how the modern school system *could try harder*
A common theme of Gee's (2003) 'What video games have to teach us...' is the notion that modern-day school systems are failing to inspire good learning as well as a well-designed videogame might. Here I want to assess these arguments and look at how schools might adapt their practices to accommodate Gee's theories of what constitutes good learning and teaching.
One of his ongoing proposals is that VGs stimulate *critical* learning, or learning that is highly reflexive. But do schools not teach critical learning already? As far as I can tell, many are at least trying. There is currently some enthusiasm in mainstream schooling for what is generally termed 'Critical Skills', also known as 'Thinking Skills', which encapsulate a whole host of advanced cognitive processes, such as making inferences, bridging with other subject areas and considering one's own thinking processes (or metacognition).

The history of this direction of thinking about learning is very interesting, and is generally attributed to three main sources. Firstly, the work of Reuven Feuerstein, who, while testing the IQs of recently arrived immigrants to Israel post-WW2, found that the tests were woefully inadequate at assessing their true cognitive capacities; he went on to develop other tests that better assessed their ability to think and reason. Secondly, the work of Matthew Lipman, the founder of 'Philosophy for Children', who, as a college professor, found that many of his undergraduate students failed to articulate basic concepts; on further examination of the primary and secondary school system he realised that schools were completely failing to encourage reflective thinking, so he went on to design a whole series of lesson ideas that might encourage more articulate thinking and discussion. And thirdly, the work of Edward de Bono, who has developed a whole host of 'thinking tools', including brain-based learning (which always makes me smile, as I have no idea how else someone learns).

The very fact that schools are considering the integration of these ideas is, I suppose, a step in the right direction. However, the impression I get from teachers is that encouraging these skills in lessons sounds great in theory, but often gets set aside by the need to impart curriculum-based knowledge; the results-driven exam system of assessment makes it very difficult to stray from the current norm. Which is a shame - as in my view learning *how to think* should be one of the main priorities of schooling. I actually asked Ted Wragg at a conference whether he thought there might be a time when critical skills might be assessed, and he said he hoped not, using the amusing analogy that being assessed on a sex education class would take the fun out of it. While I don't dare differ with the highly esteemed Mr. Wragg, I fear that without some form of assessment of these skills, teachers will never be motivated enough to include thinking development in their lessons.
From this current conception of critical thinking, I feel that Gee's concept takes the direction a step further. In his view, critical thinking 'involves learning to think of semiotic domains as design spaces that manipulate us in certain ways and that we can manipulate in certain ways'. There are two key points here. In terms of design spaces, Gee recognises that a semiotic domain is simply a subject that has been created by others, and is still in the process of being designed. With reference to games (though the same could be said of any subject), he writes about the value of 'thinking about the (internal) design of the game, about the game as a complex system of interrelated parts meant to engage and even manipulate the player in certain ways. This is metalevel thinking, thinking about the game as a system and a designed space, and not just playing within the game moment by moment'. The second key point is the recognition that design spaces affect and are affected by their users. In other words, he takes the idea that reflecting on a given subject is good for learning, but adds the consideration that learners are themselves *changed* by this new understanding (they literally *identify* with the subject), and are also empowered to affect the domain with their own contributions. This is a significant development for critical thinking, in my opinion: instead of being encouraged merely to criticise a given subject objectively, the learner is also encouraged to be personally involved with other like-minded people (the affinity group), who serve to assess, contribute to and design the future direction of the domain.
An additional feature of good learning that Gee describes is the value of *active* learning, where the learner directly experiences the application of what they are learning. Experiential learning is something that VGs encourage very well, as it is potentially possible for a player character to experience anything that one can experience in non-virtual life, and more besides. Using Civilisation IV as an example, I can learn how best to manage the resources of a country by actually simulating the management of a country; reading a book on the subject would come nowhere near the real practice of running a country. Where possible, however, schools do already try to give learners practical experience of a subject, particularly in vocational subjects, but will claim (perhaps rightly) that there are practical limitations on how much direct experience can be offered. Gee is, I feel, a little unfair on schools when he writes that 'one thing that designers of video games realize, but that many schools seem not to, is that learning for human beings is, in large part, a practice effect.' I think it's fair to say that teachers do recognise the value of experiential learning, but might attest that it's neither possible nor necessary to offer learning from direct experience all the time. So does it matter that learning is not always experiential?
Gee argues it *does*, explaining that one of the key values of experiential learning is the scope for fully situated meaning. He suggests that a school's failure to provide experiential, embodied learning makes it very hard for students to arrive at a meaningful understanding of a subject. 'Purely verbal meanings, meanings that a person has no ability to customize for specific situations and that offer the person no invitations for embodied actions in different situations, are useless (save for passing tests in school).' He uses this example: 'if you can't use "democracy" in a situation-specific way..., then the word does not make sense to you, no matter how well you can repeat a dictionary definition for the word.' This is fairly damning stuff, and applies to tertiary education as much as to primary and secondary. If what Gee suggests is true, then learning something out of a personally relevant context literally makes no sense. As he mentions, 'one good way to make people look stupid is to ask them to learn and think in terms of words and abstractions that they cannot connect in any useful way to images or situations in their embodied experiences in the world. Unfortunately, we regularly do this in schools... In school, words and meanings usually float free of material conditions and embodied actions. They take on only general, so-called decontextualized meanings. Their meanings just amount to spelling out a word or phrase in terms of still other words and phrases, themselves with only general meanings.' So words used to explain more words, which in turn explain more words, do little to help students achieve a meaningful understanding of a subject. But given the budget and logistical limitations that schools face, how can situated meaning and experiential learning be further fostered in lessons? One clear solution is to employ immersive VG simulations, which will at least offer greater similarity to true experience than traditional 'chalk and talk' teaching.
I feel slightly bad at this stage for giving schools such a hard time, as I'm sure great inroads are already being made in these areas. However, one area I'm particularly concerned about is schools' universal drive towards knowledge acquisition and retention over knowledge application. Plato recognised the futility of this approach and is quoted as saying 'Knowledge which is acquired under compulsion obtains no hold on the mind'. This still holds true today. As Gee once again points out, '[students] are learning to store discrete facts and elements of knowledge, not deeper patterns... if people have a pattern in their mind, however, when they are faced with a new situation, they can reflect on how this pattern can be revised to cover the new situation.' So while knowledge accumulation can be easily evaluated, it is mainly the capacity of a student's memory that is being tested, rather than their ability to piece together items of knowledge in meaningful ways. Gee terms this learning to *use* knowledge effectively *pattern thinking*, where 'you are, in reality learning how to situate the meaning of the word or concept... to fit different situations, including situations you may not have seen before. Lists require no such thinking and learning. Patterns are experiential theories... that we change with more experience, more probing and reprobing of the world.' So instead of focusing so intently on *how much* knowledge a student can accumulate, school evaluation systems would do well to look a little more at how well a student is able to make sense of what knowledge they do have.
But to use Gee's reference to cultural models, the current view on evaluation asserts that "exams prove whether a student is clever and/or works hard". Another might be that "exams show how much the student has learned and are good at categorizing students as better or worse". Yet another might be that "exam results show how successful the teacher has been at imparting the subject". But these are all dangerous assumptions to make. As someone who has 'played' and 'won' many examinations to get to this academic stage, I can be thankful that I've a good enough short-term memory to have got here. But on reflection, such short-term memorisation to pass exams barely constitutes understanding of a subject (just try asking me to write an essay on philosophy - my undergraduate subject - now. I fear it would be woefully inadequate). So I feel that part of the change needed to accommodate Gee's theories about good learning must involve a change in the evaluative process at the end. This is, I suppose, already occurring, with a move towards more and more coursework, but things still have some way to go. [Some progressive and forward-thinking courses now require no examinations at all - their designers must truly be innovative and enlightened educational pioneers ;-)]
The final consideration of Gee's that I want to address here is his proposal that 'learning... is very much a matter of being situated in a material, social, and cultural world'. In other words, learning cannot and should not be isolated from other people. For as Gee points out, 'the patterns... in our heads... become meaningful ("right" or "wrong") only from the perspective of the workings of social groups that "enforce" certain patterns as ideal norms toward which everyone in that group should orient.' Thus we rely on other people's verification of our thoughts to indicate whether they are meaningful or not. For example, scientific pioneers such as Einstein and Newton only really became 'right' when enough experienced people validated their work. Without the assistance of a group, no thinker has any way to test whether their theories are veridical. But as Gee suggests, schools regularly segregate students from social corroboration. He writes that 'schools still isolate children from such powerful networks - for example, a network built around some branch of science - and test and assess them as isolated individuals, apart from other people and apart from tools and technologies that they could leverage to powerful ends.' So although steps toward co-operative and group learning *are* being made in schools [http://edtech.kennesaw.edu/intech/cooperativelearning.htm], this enforced individualism does little to make use of the potential for networked learning.
By networked learning here, I mean making use of the potential that a myriad of connected networks has for helping to store and disseminate knowledge and ideas. The internet has made this opportunity for mass communication of ideas possible, and we as teachers and learners should make full use of it. Already it is being used for research purposes on a large scale, much as people might use a library, but currently its capacity to interconnect *people* is being underused in traditional learning environments (see Web 2.0 usage study). And as Gee suggests, the ability to make use of such networks will be an important skill for future generations. 'The power of distribution - of storing knowledge in other people, texts, tools, and technologies - is really in the way in which all these things are networked together. The really important knowledge is in the network... not in any one "node", but in the network as a whole. Does the network store lots of powerful knowledge? Does it ensure that this knowledge moves quickly and well to the parts of the system that need it now? Does it adapt to changed conditions by learning new things quickly and well? These are the most crucial knowledge questions we can ask in the modern world. They are hardly reflected at all in how we organize schooling and assessment in schooling'. So (without blowing our e-horn here), when you compare the highly adaptive network of knowledge, people and skills created for and by students of this course with a traditional single-desk classroom structure, one can clearly see the greater scope for knowledge distribution, peer learning, and learning that adapts quickly to the requirements of the teacher, the students and the subject area as a whole. Within such a networked learning framework, a competitive ethos where students compete against each other by hoarding knowledge makes the network as a whole suffer. However, when students compete to become a more *active* 'node' in the system (such as during the Second Life tasks in this module), the individual and collective learning that takes place increases.
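To make Gee's 'the knowledge is in the network' point a little more concrete, here is a toy model of distributed knowledge - entirely my own illustration rather than anything Gee provides, with all the node names invented. No single node can answer a question alone, but a short traversal of connected peers pools the fragments each one holds:

```python
# Illustrative toy model (not Gee's): knowledge distributed across a
# network of people and tools. The full answer exists only in the
# network as a whole, never in any single node.
from collections import deque

network = {                    # adjacency list: who is connected to whom
    "student":   ["classmate", "wiki"],
    "classmate": ["student", "teacher"],
    "wiki":      ["student"],
    "teacher":   ["classmate"],
}
fragments = {                  # the partial knowledge each node holds
    "student":   {"question"},
    "classmate": {"method"},
    "wiki":      {"background"},
    "teacher":   {"feedback"},
}

def pooled_knowledge(start):
    """Breadth-first traversal gathering every fragment reachable from start."""
    seen, queue, pool = {start}, deque([start]), set()
    while queue:
        node = queue.popleft()
        pool |= fragments[node]
        for peer in network[node]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return pool

print(pooled_knowledge("student"))
# -> the complete set of fragments, assembled from the network
```

The hoarding point falls out of the model too: delete an edge (a student refusing to share) and the pool reachable from some nodes shrinks - which is exactly the sense in which the network as a whole suffers.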
This move towards networked learning is not just important from a learning perspective; it will also be important for preparing students for a networked society and the so-called knowledge economy. It is estimated that by the year 2020 the greatest employer will be the *self*, that for every 100 full-time jobs there will be 2,000 part-time jobs, and that people's working lives will involve on average seven complete career changes (Negroponte, 2000). So students must learn now how to manage multiple tasks and to constantly learn new skills in order to adapt to rapidly changing work environments. But by teaching only knowledge items, and not how to make best use of networks for storing and disseminating knowledge, schools are ill-preparing students for the skills they'll need in the workplace. As Gee writes, 'if we want to know how good students are in science - or how good employees are in a modern knowledge-centered workplace - we should ask all of the following (and not just the first): What is in their heads? How well can they leverage knowledge in other people and in various tools and technologies (including their environment)? How are they positioned within a network that connects them in rich ways to other people and various tools and technologies?'. He then goes on to highlight that 'schools tend to care only about what is inside students' heads..., isolated from others, from tools and technologies, and from rich environments that help make them powerful nodes in networks. Good workplaces in our science- and technology-driven "new capitalism" don't play this game. Schools that do are, in my view, DOA in our current world - and kids who play videogames know it'. Thus schools could do better at encouraging the use of networking tools for learning - tools which, at present, young people are already better at using than their teachers.
So part of helping schools to apply Gee's theories on literacy and learning will come from a greater integration of digital thinking and collaboration tools in the classroom. I don't mean to say that increased use of technology is the *only* way forward, simply that it provides some of the answers. I was recently speaking to a young boy who had just been grounded by his parents and had had his computer and his mobile phone taken away from him. He didn't like it - yet this is what schools do to kids all the time. As Gee mentions, 'in school we test people without their thinking tools... we want to know what they can do all by themselves. But in the modern world - and this is certainly true of many modern high-tech workplaces - it is equally or more important to know what people can think and do with others and with various tools and technologies.' While there is some argument for encouraging abilities such as handwriting and mental arithmetic without assistance, there are also plenty of reasons for testing just how well students get on *with* computerised tools. A carpenter may come up with something of value using a pocket knife - but imagine what she could do with full access to her workshop. As David Shaffer, author of 'How Computer Games Help Children Learn', explains at a conference entitled 'Do video games help kids learn?', allowing computers to take over some of our basic cognition helps us to do more. Just as the invention of paper and writing enabled humans to share and store knowledge, computers take care of basic (and complex) thinking processes so we can do other things. Networked learning and a networked society do not necessarily entail that individuals become insignificant nodes of a collective beast. Rather, both the individual and the network become enriched by the value that each brings to the other.
So, in conclusion, there is much in what Gee has to say that highlights current inadequacies in mainstream education. But this is not to say that current school systems are shockingly bad, or that teachers are failing pupils on a massive scale. It is just that the potential of computer and network technology is making traditional whiteboard-and-pen teaching seem insufficient. And the good news is that younger generations (or digital natives) are already becoming highly proficient at making use of these new tools, partly because they recognise the value in their potential. As Gee writes about video games, which are really just one product of this digital revolution: 'They situate meaning in a multimodal space through embodied experiences to solve problems and reflect on the intricacies of the design of imagined worlds and the design of both real and imagined social relationships and identities in the modern world. ... And people get wildly entertained to boot.' If only regular lessons could get even close to inspiring similar engagement. The earlier that teachers and schools start recognising and utilising the value of these digital thinking tools, the better.
Monday, 12 March 2007
Level 9 - Diminishing the Digital Divide revisited - Games to bridge the Generation Gap
Articles by Prensky, Oblinger and Squire, a recently published Demos report entitled 'Their Space: Education for a Digital Generation' (Green and Hannon, 2007), and several other commentators all highlight a significant difference between current generations in terms of their familiarity (or lack of it) with computers, the internet and technology. This raises important issues for educators at ALL levels - in particular, how these technologies can be put to best use in order to engage students without creating a 'digital dependence' in which human relationships and overall learning might suffer. One way of engaging both students *and* teachers with digital technology is through games - and it is the potential that games have to bridge this divide that I want to explore further here.
To recap briefly on this digital disparity, Oblinger draws three general but useful distinctions between generations in terms of their understanding of, and comfort with, ICTs (including internet use): Boomers, the 40+ age group, might be classed as 'digital migrants', or those who did not grow up surrounded by computers and associated technology, but had to learn about them at a later stage; Gen-Xers (into which I fall), the first generation to grow up with (albeit rudimentary) computers and the first 'digital natives'; and Millennials, or 'digital natives 2.0', who were born after 1982 and are completely au fait with computers, the internet and associated gadgets such as mobile phones, iPods etc. Millennials have also been classed as Generation C - where C stands for Content - meaning that for this generation the creation of 'consumer generated content plays a significant role in their social life, generating streams of new text, images, audio and video on an ongoing basis'. These characterisations are extremely useful for educators of the Boomer generation who wish to better understand and inform the younger generations, and to an extent vice versa.
If the responsibility to adapt rests with one generation more than another, it must surely be more with teachers than with students (though teachers will probably disagree), as digital proficiency is becoming the dominant paradigm. When pupils enter the job market, a large number of roles (in developed countries at least) will involve some basic technological know-how. So for teachers to do their job well nowadays, it's becoming more necessary to adopt methods that inform and appeal to what might be termed the 'information-age mind-set'. As Oblinger writes, 'the attitudes - and aptitudes - of students who have grown up with technology appear to differ from those of students who rarely use technology'. She cites Jason Frand, who lists ten key attributes of this emerging mind-set: "[1] Computers aren't technology; [2] Internet is better than TV; [3] Reality is no longer real (re: potential inaccuracies of digital content); [4] Doing is more important than knowing; [5] Learning more closely resembles Nintendo than logic; [6] Multi-tasking is a way of life; [7] Typing is preferred to handwriting; [8] Staying connected is essential; [9] Zero tolerance of delays; and [10] Consumer and creator are blurring." These all combine to highlight fairly significant changes in how young people process information. Carles Monereo ('The virtual construction of the mind: the role of educational psychology', 2004) explains this cognitive change well: 'Just as steps from oral culture to written culture and then from writing into printing had clear repercussions for forms of learning and thinking, the transition from the printed culture to this new digital culture will have diverse consequences for our cognition'. As Prensky has also recognised, digital natives' cognitive processes have been moulded by their immersion in a digital environment.
In particular, point [4] above marks a giant step: knowledge accumulation need no longer be a principal goal of education, as information can be easily accessed and needn't necessarily be stored in an individual's brain (see Gee 2003 on *distributed knowledge*). Monereo again describes this distinction, writing that 'technological migrants [Boomers] regard knowledge as something that they possess, something that they carry around with them; technological natives [Gen-X onwards] see it as something that they obtain through a set of applications and instruments. This distinction modifies substantially notions such as intelligence, wisdom and ability.' This change also has significant epistemological ramifications. Once again Monereo raises the issue: 'For the older generation there have always been universal truths, both scientific and moral... But for the younger generation, "everything depends"; all truths are relative and depend on who, when, how, and why they are stated; they are never independent of their utterer or their context.' This is a considerable, and I think exciting, philosophical change that teachers must accommodate - teaching that knowledge is absolute will not convince our young learners; they will want to explore the bigger picture and then make their own minds up. [See the excellent Civil War History Game, cited by Oblinger, which provides students with a broad range of documents relating to the civil war, encouraging them to view sources as a multitude of opinions rather than facts, and to synthesise a broad range of data to form opinions of their own.]
I also feel that Frand's point [5] above, 'Learning more closely resembles Nintendo than logic', points to a growing problem that teachers face: how to engage students in the ways that complex problem-solving computer games might, with high levels of interactivity, a compelling narrative and a multimedia experience. Recognising and accommodating these changes will, I think, prove to be an important challenge for teachers over the coming years, moving away from linear, textbook-based learning to classes that, in Oblinger's words, 'give way to simulations, games and collaboration'. To address this problem, teachers must recognise that Millennials exhibit distinct learning styles, as Claire Raines suggests in 'Managing Millennials'. Millennials' learning preferences embrace "teamwork, technology, structure, entertainment & excitement [and] experiential activities", which differ significantly from those of the Boomer generation when they were at school, which reflected an older paradigm of individualism, knowledge acquisition and rote learning. And here lies a challenge faced by Boomers in management positions in schools, colleges and universities: how can they tailor the way that teaching and learning is presented so as to engage the students of the tech generations, while also inspiring teachers of an older paradigm? If they do not, there is a danger of disenfranchising large numbers of young people, and teachers too, perhaps even turning them off education completely.
If we look closely again at the learning preferences of both generations, we can see them mirrored in the qualities of a well-designed game, particularly ARGs. *Good* games will often involve teamwork, technology, entertainment, excitement and experience, but *will also embrace* individualism and knowledge acquisition. For examples of such games see The Go Game, Futurelab's Savannah and MIT's Augmented Reality Games, and for a clear example of individualism and teamwork combined, see this BBC article about the winner of Perplex City. Games therefore have the potential to appeal to learning preferences on both sides of the divide, so long as they are well designed. And while gaming is never likely to be the primary pedagogic methodology, games have the (in my limited view) unique potential to unite both natives and immigrants in a digital and non-digital cause.
But as Squire writes ('Changing the Game', 2005), 'as challenging as it is to design a good educational game, it may be more challenging to design a good educational system for educational games to flourish in'. Changes are likely to be slow, but as Green and Hannon point out (2007), change is becoming more necessary: 'In an economy driven by knowledge rather than manufacturing, employers are already valuing very different skills, such as creativity, communication, presentation skills and team-building. Schools are at the front line of this change and need to think about how they can prepare young people for the future workplace.' Squire (2005) also comments on this need for change (citing Gee, Hull, and Lankshear 1996; and Reich 1990), writing that 'learning to identify problems and then devise solutions across several domains is uncommonly found in school, but precisely the kind of skill valued among knowledge workers in the new economy'. With business influencing much of school policy decision-making nowadays (which is another discussion altogether), it is likely that schools will be pushed into better preparing students for this 'new capitalism' or knowledge economy. And the great news for students, which teachers must eventually recognise, is that digital games appear to instil precisely the qualities that knowledge workers require.
[The UK government recognises the skills required for enterprise, which is a step toward the skills mentioned above. See the Determined to Succeed initiative for Scotland's answer to encouraging enterprise, team-working and ambition.]
To recap briefly on this digital disparity, Oblinger draws up 3 general but useful distinctions between generations and their understanding and comfort with ICTs (this includes internet use): Boomers, the 40+ age group, might be classed as 'digital migrants', or those who did not grow up surrounded by computers and associated technology, but had to learn about them at a later stage; Gen-Xers (which I fall into) who were the first generation to grow up with (albeit rudimentary) computers and who are the first 'digital natives'; and Millenials, or 'digital natives 2.0' who were born after 1982, and who are completely au fait with computers, internet and associated linking gadgets such as mobile phones, iPods etc. Millenials have also been classed as Generation-C - where C stands for Content - meaning that for this generation, creation of 'consumer generated content plays a significant role in their social life, generating streams of new text, images, audio and video on an ongoing basis'. These characterisations are extremely useful for educators of the Boomer generation who wish to better understand and inform the younger generations, and to an extent vice versa.
If responsibility to adapt rests with one generation more than another, it must surely be more with teachers than with students (though teachers will probably disagree), as digital proficiency is becoming the dominant paradigm. When pupils enter the job market, a large number of roles (in developed countries at least), will involve some basic technological know-how. So for teachers to do their job well nowadays, it's becoming more necessary to adopt methods that inform and appeal to what might be termed the "information-age mind-set." As Oblinger writes, 'the attitudes - and aptitudes - of students who have grown up with technology appear to differ from those of students who rarely use technology." She cites Jason Frand, who lists ten key attributes of this emerging mind-set: "[1] Computers aren't technology; [2] Internet is better than TV; [3] Reality is no longer real (re: potential inaccuracies of digital content); [4] Doing is more important than knowing; [5] Learning more closely resembles Nintendo than logic; [6] Multi-tasking is a way of life; [7] Typing is preferred to handwriting; [8] Staying connected is essential; [9] Zero tolerance of delays; and [10] Consumer and creator are blurring." These all combine to highlight fairly significant changes in how young people process information. Charles Monereo (The virtual construction of the mind: the role of educational psychology, 2004) explains this cognitive change well. "Just as steps from oral culture to written culture and then from writing into printing had clear repercussions for forms of learning and thinking, the transition from the printed culture to this new digital culture will have diverse consequences for our cognition". As Prensky has also recognised, DN's cognitive processes have been moulded by their immersion in a digital environment.
In particular, point [4] above marks a giant step, where knowledge accumulation need no longer be a principal goal of education, as information can be easily accessed and needn't necessarily be stored in an individual's brain (see Gee 2003 on *distributed knowledge*). Monereo again describes this distinction, writing that 'technological migrants [Boomers] regard knowledge as something that they possess, something that they carry around with them; technological natives [Gen-X onwards] see it as something that they obtain through a set of applications and instruments. This distinction modifies substantially notions such as intelligence, wisdom and ability.' This change also has significant epistemological ramifications. Once again Monereo raises this issue: 'For the older generation there have always been universal truths, both scientific and moral... But for the younger generation, "everything depends"; all truths are relative and depend on who, when, how, and why they are stated; they are never independent of their utterer or their context.' This is a considerable, and I think exciting, philosophical change that teachers must accommodate - teaching that knowledge is absolute will not convince our young learners; they will want to explore the bigger picture, and then make their own minds up. [See the excellent Civil War History Game, cited by Oblinger, which provides students with a broad range of documents relating to the civil war, encouraging them to view sources as a multitude of opinions, not facts, and to synthesise a broad range of data to form opinions of their own.]
I also feel that Frand's point [5] above, 'Learning more closely resembles Nintendo than logic', points to a growing problem that teachers face - how to engage students in the ways that complex problem-solving computer games do, with high levels of interactivity, a compelling narrative and a multimedia experience. Recognising and accommodating these changes will, I think, prove to be an important challenge for teachers over the coming years, moving away from linear, textbook-based learning to classes that, in Oblinger's words, 'give way to simulations, games and collaboration'. To address this problem, teachers must recognise that Millennials exhibit distinct learning styles, as Claire Raines suggests in 'Managing Millennials'. Millennials' learning preferences embrace 'teamwork, technology, structure, entertainment & excitement [and] experiential activities', which differ significantly from those of the Boomer generation when they were at school, which reflected an older paradigm of individualism, knowledge acquisition and rote learning. And here lies a challenge faced by Boomers in management positions in schools, colleges and universities: how can they tailor the way that teaching and learning is presented to engage the students of the tech-generations, while also inspiring teachers of an older paradigm? If they do not, there is a danger of disenfranchising large numbers of young people, and teachers too, perhaps even turning them off education completely.
If we look closely again at the learning preferences of both generations, we can see them mirrored in the qualities of a well-designed game, particularly ARGs. *Good* games will often involve teamwork, technology, entertainment, excitement and experience, but *will also embrace* individualism and knowledge acquisition. For examples of such games see The Go Game, Futurelab's Savannah and MIT's Augmented Reality Games, and for a clear example of individualism and teamwork combined, see this BBC article about the winner of Perplex City. Games therefore have the potential to appeal to learning preferences on both sides of the divide, so long as they are well designed. And while gaming is never likely to be the primary pedagogic methodology, games have the (in my limited view) unique potential for uniting both natives and immigrants in a digital and non-digital cause.
But as Squire writes (Changing the game, 2005), 'as challenging as it is to design a good educational game, it may be more challenging to design a good educational system for educational games to flourish in'. Changes are likely to be slow, but as Green and Hannon point out (2007), change is becoming more necessary: 'In an economy driven by knowledge rather than manufacturing, employers are already valuing very different skills, such as creativity, communication, presentation skills and team-building. Schools are at the front line of this change and need to think about how they can prepare young people for the future workplace.' Squire (2005) also comments on this need for change (citing Gee, Hull, and Lankshear 1996; and Reich 1990), writing that 'learning to identify problems and then devise solutions across several domains is uncommonly found in school, but precisely the kind of skill valued among knowledge workers in the new economy'. With business influencing much of school policy decision-making nowadays (which is another discussion altogether), it is likely that schools will be pushed into better preparing students for this 'new capitalism' or knowledge economy. And the great news for students, which teachers must eventually recognise, is that digital games appear to instil precisely the qualities that knowledge workers require.
[The UK government recognises the skills required for enterprise - a step toward those mentioned above. See the Determined to Succeed initiative for Scotland's answer to encouraging enterprise, team-working and ambition.]
Labels: Augmented Reality Games, digital divide, Games, GBL, Generation-C, MSc, Oblinger, Prensky, Squire
Monday, 5 March 2007
Bonus Level - Games for appreciating other cultures
Gee's (2003) fascinating analysis of the *Under Ash* computer game, in which a Palestinian boy fights against Israeli soldiers and occupiers, got me thinking about the potential of VGs for giving players immersive experiences of other cultures. While the notion of fighting against Israeli soldiers may be abhorrent to some (mainly Israelis), a relativist view would see it as no different to creating a game where an American soldier kills Arabs (of which Gee gives an extreme example in the game *Ethnic Cleansing*). What is interesting is that the very outcry for protecting kids from games that go against their culture's standards implies that playing such a game has the potential to significantly affect a child's (or even an adult's) belief system. Which means that this tool is potentially powerful, but for both good and ill.
There is a fascinating documentary film, *Judah and Mohammed*, which tracks a year in the life of two fifteen-year-old boys, one Israeli and one Palestinian. The film clearly highlights the entrenched cultural models that each child is growing up in, from the media they absorb and the conversations with their friends and family to, most alarmingly, the versions of history they are taught at school. In comparing their two lives, the viewer can't help but appreciate the deterministic fate that both boys are tied into, with each of their cultures so heavily imposed on them, and with the boys so determined to uphold their own cultural values. In spite of sharing similar interests and hobbies, the situation is such that the two boys could never be friends. A compromise between their two cultures seems even further away.
How could two such diametrically opposed cultures ever reach a resolution? One way is to fight the other culture until one or the other gives in. A more diplomatic solution would be to try to understand where the other is coming from, recognise similar experiences and pastimes, and work to resolve differences by peaceful means. Media and literature have always been effective tools for supporting both war and peace. For war, propaganda through news reports, films etc. is notoriously effective at imposing cultural values. Likewise for peace, objective and investigative news and films, such as the one mentioned above, can highlight the misery of war and the suffering it causes. With the rise of the internet and the accessibility of digital media (see the Iraq war YouTube videos), information has become more readily available, and this has in many respects been a force for peace. Governments are less able to hide behind disinformation (though some still do a good job of it), and activists are more able to communicate abuses of power. With this 'power of transparency' in mind, and following the success of his Shoah interviews, Steven Spielberg's recent programme involves giving video cameras to Israelis and Palestinians, to be swapped later, as a way of helping the two cultures to appreciate their similarities instead of focusing on their differences. IMHO this is a very progressive and potentially beneficial approach to take (though of course this is just my cultural model speaking).
But as Gee mentions, 'interactive media like video games are a more powerful device than such passive media'. And this is where VGs could potentially come into use - as a means of appreciating the experiences of others in an experiential, immersive environment, thus beginning to move people away from entrenched cultural models that demand hatred of another group of people, towards a model that is more appreciative and less aggressive. As Gee writes, 'far more interactively than you could in any novel or movie, you would have experienced the 'other' from the inside'. So a game like Food Force, which I've mentioned in a previous posting, far from being an insensitive mockery of a serious issue like famine, is actually a very powerful way of encouraging 'players' to appreciate the experience of hungry people.
So in an ironic twist, the playing of violent video games might, if employed with ethical intentions, actually be a valuable force for peace. Gee suggests that 'if we are willing to take none but our own side, even in play, then violence would seem inevitable'. In homage to Kane's vision of a 'play ethic', VGs as diplomatic anti-propaganda will hopefully be something we see more of in the future.
Thursday, 1 March 2007
Level 7 - Good games 2.0
In my previous posting on what makes a good game, I identified *playability* - roughly, an opportunity for experimentation and exploration; *competition/challenge* - the game is neither too easy nor too hard, and offers some contest, either against another person or the VDU; and *primacy* (Rouse, 2001) - an elusive quality in which the player becomes immersed in the game. Having since read Tom Malone's articles (seminal, judging by the number of times they are referenced by others), it is clear that these initial suggestions of key qualities, while useful, can be refined. In his article, What makes things fun to learn?, Malone tries to get right to the heart of which qualities encourage *intrinsic motivation* (Lepper and Greene, 1979) - 'of what makes an activity fun or rewarding for its own sake rather than for the sake of some external reward'. He identifies three key areas: *challenge* (which I referred to in the previous posting), *fantasy* and *curiosity* (which I did not).
The challenge element of a game is an obvious essential. If it's too easy, the outcome is likely to be certain, making the game futile (having said this, there seems to be a new breed of games offering a kind of pointless but aesthetically meditative gameplay, with little challenge, that are somehow addictive - see Flow in Games and the excellent FlyGuy); too hard, and players are quickly demotivated. Providing an appropriate goal offers a motivation for the challenge, and by appropriate I mean something that is relevant to the world view of the player. Malone cites an interesting study by Morozova (1955) into the motivational capacity of goals: children reading a passage on latitude and longitude were more engaged by a version involving a child hero trying to find her location. Malone reflects on several intriguing qualities of such a goal which I think are fundamental to good *learning* game design: [1] 'Using the skill being taught was a means to achieving the goal, but was not the goal in itself' - hence the learning was slipped in through the back door, so to speak; children were learning about mapping without even realising it. [2] 'The goal was part of an intrinsic fantasy' - where the skill depended on the fantasy, adding to the potential for immersion. And [3] 'the goal was one with which the child readers could identify' - this element, of a goal that is relevant to the player, is key to their engagement (an Olympic athlete is unlikely to be inspired by the goal of a deep-fried Mars bar, for instance).
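To make the 'not too easy, not too hard' principle concrete, here is a minimal sketch of a difficulty adjuster that nudges the challenge level so a player's recent success rate stays in a comfortable band. The target rate, window size and step sizes are all my own assumptions for illustration, not figures from Malone.

```python
# A minimal sketch of keeping challenge in Malone's band: nudge
# difficulty so the player's recent success rate stays near a target.
# The 0.7 target, 10-game window and 0.1 steps are illustrative
# assumptions, not values taken from Malone.

from collections import deque

class ChallengeBalancer:
    def __init__(self, target_success: float = 0.7, window: int = 10):
        self.target = target_success
        self.results = deque(maxlen=window)  # recent win/loss history
        self.difficulty = 1.0                # arbitrary starting level

    def record(self, player_succeeded: bool) -> None:
        """Log one attempt and adjust difficulty if play is off-balance."""
        self.results.append(player_succeeded)
        rate = sum(self.results) / len(self.results)
        if rate > self.target + 0.1:      # too easy: outcome near certain
            self.difficulty += 0.1
        elif rate < self.target - 0.1:    # too hard: quickly demotivating
            self.difficulty = max(0.1, self.difficulty - 0.1)

balancer = ChallengeBalancer()
for outcome in [True, True, True, True, False, True, True, True]:
    balancer.record(outcome)
print(f"difficulty after a winning streak: {balancer.difficulty:.1f}")
```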
I hadn't picked up on the fantasy element of a game being such a strong motivator, but of course almost every game requires the player to take on a fantasy identity, a process that is apparently extremely fulfilling. In Monopoly, for instance, players take on the role of capitalist property developer (often to alarming degrees!); in Civilisation games, the player is a great leader; in 1st-person-shooters, the player is often a goodie trying to eliminate the baddies; even playing chess, the player acts as a kind of omnipotent being directing their army. As Malone points out, fantasies in games 'derive some of their appeal from the emotional needs they help to satisfy in the people who play them'. Thus we are intrigued by games that precipitate emotional responses, which explains why so many games 'embody emotionally-involving fantasies like war, destruction and competition'. This attraction to an emotional narrative does not, of course, apply only to gaming domains, but can be recognised throughout human history in mythology and folkloric stories, and in modern times in novels, films and music. [This is IMHO a powerful counter to the claim that VGs are too violent - violence and war have permeated human narratives for thousands of years. Try telling the Greeks that their gory battle myths are unsuitable for children under 15, or the Tibetans that their visions of hell - too graphic even for this weblog - are unnecessarily profane.] The key point to remember in designing games for instructional use, though, is that 'different people will find different fantasies appealing'. As one of Malone's studies found (in 'Heuristics for designing enjoyable user interfaces'), girls playing a maths darts game found the darts fantasy unappealing, whereas it really appealed to the boys. Some games get round this problem by offering the opportunity to play one of several characters, thus widening the appeal to a greater number of users, but this level of adaptability may not be relevant in all cases. Needless to say, designers should be aware of likes and dislikes relating to social groups such as gender, race, age, etc. (as sketched below).
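To illustrate the 'same skill, different fantasy' point, here is a hypothetical sketch in which a fixed arithmetic drill is wrapped in selectable fantasy framings, in the spirit of letting players choose a character. The framings and names are invented for illustration; they are not from Malone's study.

```python
# A hypothetical sketch of Malone's 'different people find different
# fantasies appealing': the skill being practised (multiplication)
# stays fixed while the fantasy wrapper is selectable. All framings
# here are invented examples, not Malone's.

import random

FANTASIES = {
    "darts":  "Your dart flies towards number {answer}!",
    "space":  "You plot a hyperspace jump to sector {answer}.",
    "bakery": "The order is ready: you box up {answer} cakes.",
}

def maths_round(fantasy: str) -> None:
    """Pose one multiplication question inside the chosen fantasy."""
    a, b = random.randint(2, 9), random.randint(2, 9)
    print(f"What is {a} x {b}?")
    print(FANTASIES[fantasy].format(answer=a * b))

maths_round("space")   # the same drill could equally run as 'darts'
```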
The final characteristic Malone recognises in good games is curiosity, which he defines as the attraction to environments that have an 'optimal level of informational complexity (Berlyne, 1965; Piaget, 1952)'. He later reflects that a game should be 'novel and surprising, but not completely incomprehensible'. These qualities, in my view, link very closely to the potential for immersion in a game. The greater the curiosity invoked by a task, the greater the likelihood of undistracted effort, leading ultimately to a 'flow state' (see Flow: The Psychology of Optimal Experience, Mihaly Csikszentmihalyi, 1991) in which the participant is completely absorbed in their actions.
Malone identifies two main features of curiosity: *sensory* and *cognitive* curiosity. Sensory curiosity - the attraction to lights, music, movement, images etc. - is forever being satisfied by computer games in more sophisticated ways. Current arcade machines will move the player's entire body, adding bumps and loud noises at every opportunity. Likewise, current technology (as can be seen with the Nintendo Wii) is giving players a much more immersive sensory role in the game, massively adding to the appeal.
I am, however, more interested in cognitive curiosity, or 'the desire to bring better "form" to one's knowledge structure'. This curiosity can be credited with inspiring people to complete challenges, from puzzles and crosswords to 'feature-length' VGs, but also to resolve any problem that is not fully understood. This is surely at the very heart of a person's motivation to learn anything: the desire to make what was previously unknown, understood. Malone suggests that 'people are motivated to bring to all their cognitive structures three of the characteristics of well-formed scientific theories: completeness, consistency, and parsimony'. Thus, in Gestalt psychotherapy terminology (Fritz Perls, The Gestalt Approach, 1973), people seek cognitive *closure* to unresolved problems, and this experience of closure is in itself rewarding.
Of course not every game does, or should, include all these criteria for 'goodness', but these qualities of playability, challenge, primacy, fantasy and curiosity certainly make a game appealing. As I have previously mentioned, designers of 'learning' games could learn a lot from COTS games with mass appeal, and incorporating into GBL the features that make popular VGs so attractive is a good start. As a further note on teaching in general, these features of a good game can apply not only to games but to course development as a whole. Understanding what engenders primacy and cognitive curiosity in learners is extremely valuable knowledge for teachers of any subject.
Tuesday, 20 February 2007
LEVEL 6 - Towards meaningful play and gaming
If gaming is to be accepted and encouraged in mainstream education as a valid and effective teaching practice, then it must overcome one of its biggest hurdles: the notion that playing games is a meaningless distraction of no real value. As I mentioned in my previous entry, modern culture's general image of play tends to disregard the more 'mature' benefits of rule-informed playing, such as the mental and verbal contests that Huizinga recognises in sophisticated disciplines such as politics, philosophy and law. The potential value of play for adults tends to be subsumed by the perception of play as something infantile and foolish, and for children only. The 'work ethic' mentality that has arisen since the industrial revolution, compelling the workforce to toil relentlessly and efficiently, has contributed to the opinion that play and games have no place in an adult (or even young adult) work or learning environment - that work is qualitatively distinct from play. In spite of play now being generally acknowledged as a necessary feature of our free time, the general view is that *work and play are irreconcilable* - at least in any self-respecting organisation. However, it is clear from research and anecdotal evidence that work and play *are* compatible, and that when play and games are embraced thoughtfully in a work and/or learning environment, the results can be *better* than they would be without their influence. Games and play are not just mindless distractions, but can in fact facilitate meaningful developments psychologically, socially and functionally.
Chapter 2 of Kane's 'The Play Ethic: A manifesto for a different way of living' is a compelling call-to-arms for play and gaming enthusiasts (in spite of anything with *manifesto* in the title tending to be somewhat one-sided!). Drawing on rich sources from literature, culture, history and science, he presents a persuasive case for the importance of play in personal and social development, and in scientific endeavour. He goes as far as to present the case for a *play ethic*, a way of living and working that is in his view more personally fulfilling and better suited to present-day life than a 'work ethic'. In his words, the 'play ethic is about counterposing a purely neo-liberal, capitalist network with a whole thicket of other networks - emotional and sexual, geographical and traditional, artistic and civic... [It's] a 21st century identity'. In spite of being perhaps a little overdone, he presents some convincing arguments and evidence for the benefits of play.
One of his propositions is that play contributes significantly to human progress and development. I have touched on this issue of play in development in previous postings, highlighting how play is a form of experimentation for infants. Kane echoes this point, writing that 'the consensus from biologists and psychologists, derived from over a century of observing animal and human play, is that play is a necessity, not a luxury, for advanced mammals.' In terms of education, a number of progressive educationalists have recognised play's potential for inspiring children, most notably Froebel, who conceived of the now well-established Kindergarten - 'where children could blossom like flowers'. Froebel gave children 'play gifts', such as play-dough and crayons, so they could 'externalise concepts in their minds' rather than have 'the facts' imprinted on their brains. There are also a number of other unorthodox teaching methods, championed by Maria Montessori, Rudolf Steiner and Kurt Hahn among others, involving creative and undogmatic principles, which are only now beginning to creep into mainstream thinking, though mainly at a primary level. The ultimate aim of these methods is not for the children to enjoy themselves, though this is often a welcome advantage, but to *facilitate effective learning*. What makes using play more effective? The answer lies in the brain. Kane references a New Scientist article which states that 'early play in childhood is less about practising to fight and mate..., and much more about improving brain power at a crucially formative moment... The very act of playing seems to strengthen and extend the number of neural connections in the brain... Neuropsychologist Stephen Siviy, when observing brain chemistry under experimental conditions [says], "play just lights everything up".'
Another of Kane's more striking propositions is that much of modern scientific thinking and language is inspired by playfulness, which he proposes is a fitting attitude for the complexities of the present day. He uses the examples of Schrodinger's half-serious thought experiment involving a cat at the mercy of the unpredictability of quantum theory, and Einstein's famous (misquoted) aphorism, 'God doesn't play dice with the universe'. References to play and gaming can be found throughout scientific research, and some of the more recent sciences especially - chaos, complexity, systems and network theory - actually require a playful vision to appreciate. Kane quotes Manfred Eigen and Ruthild Winkler as saying: 'Everything that happens in our world resembles a vast game in which nothing is determined but the rules, and only the rules are open to objective understanding'. Kane suggests that modern visions of the world require us to accept some of the apparently random and arbitrary nature of the universe, understanding the world better by 'entering into its games, by respecting its creativity, by joining in the play of living forms'. As biologist Brian Goodwin says, this realisation that the overall behaviour of complex systems (like the weather, or the brain, or human society) cannot ultimately be predicted has 'enormous consequences': 'Real systems, and particularly living ones such as organisms, ecological systems and societies, are radically unpredictable in their behaviour'. To progress in these uncertain fields, scientists must accept 'just how unpredictably creative the evolutionary process is', adopting a 'playful' attitude of both controlling/understanding and being controlled/challenged by our environment. Kane also cites Geoffrey Miller, whose book The Mating Mind suggests that, from an evolutionary perspective, 'nature commands that humans should play in order to survive and thrive.' With respect to Maslow's pyramid of needs, there is no priority in life that has greater *meaning* to humans than survival itself.
Accepting play as meaningful action can also mean acknowledging play's significant contribution to our development as people - by recognising play's contribution to selfhood, imagination and identity. Play enables us to fashion not only our early selves, but also to keep refashioning ourselves throughout our adult development. As Kane writes, 'play is the primal force which built our early selves, and can revivify and infuse our adult selves with a craving for action and innovation.' As neuroscientific studies have shown, our brains 'light up' during play, even through adulthood. As Howard Gardner, the Harvard educationalist and psychologist, says: 'We play to master our self, our anxiety and the world.' So play helps us to develop as people, but also as groups. Playful group rituals such as festivals and parties help us to bond as communities. On this subject Kane quotes Alessandro Falassi, whose book Time out of Time describes collective play as a 'periodic renewal of the stream of the community by creating new energy ... which gives sanction to its institutions'. Thus collective play is a *meaningful* act of collaboration, encouraging reciprocal altruism and enhanced civility (though not *always*, of course).
So, all in all, playful activities (including games) are valuable and meaningful, not just during leisure time but also potentially during working hours and in learning environments. Not only can playful activities lighten serious and monotonous work, they can also make it more efficient. And I agree with Kane's manifesto proposition that play and games are especially necessary in our changing working and learning environments. Reigeluth's and others' recognition that we have entered an *information age* [see appendix below] requires that we change our attitudes. As Kurt Squire writes, 'there is a growing recognition that traditional models of instruction, organized by modernist, scientific, rationalist social theory and assembly line metaphors for social organization are failing to work for us in the new economy. Like Reigeluth, Gee, and others, I argue that new organizing metaphors for learning and new models of learning environments are needed to respond to the social and economic realities of the 21st century'. Hence play and games offer one possible way of engaging young 'digital native' learners, and the subsequent workers who grow up in the digital age.
Accepting the meaningfulness of play and games in education, however, is likely to require some institutional reform, particularly in evaluation methodology. Play will often involve imaginative and creative processes, but as Kane writes, 'how much "imagination" can educators allow into the teaching process - when the curriculum is geared towards a competitive jobs market and is based on test results rather than an open-ended journey towards understanding?'. This concern is likely to polarise educators until, at some point, the digital natives at home in the information age and the 'experience economy' have their way. Finally, as Kane suggests, 'by recognising our essential ludicism, by dignifying our play with an ethical force, we can begin to create and act, rather than simply consume and spectate.' Let's hope this recognition comes sooner rather than later.
Appendix - Table 1: Changes in Global Economies (Reigeluth, 1999).
INDUSTRIAL AGE .......... INFORMATION AGE
Standardization .......... Customization
Centralized control .......... Autonomy with accountability
Adversarial relationships .......... Cooperative relationships
Autocratic decision making .......... Shared decision making
Compliance .......... Initiative
Conformity .......... Diversity
One-way communications .......... Networking
Compartmentalization .......... Holism
Parts-oriented .......... Process-oriented
Teacher as "King" .......... Learner (customer) as "King"
Wednesday, 7 February 2007
Level 4 - What makes a good game?
Whether it's taking part or just watching, there can be something captivating and engaging about a good game. But what makes a good game so appealing? While much of course depends on a person's subjective taste, it is worth considering what makes a game enjoyable, absorbing and rewarding.
First, what exactly is a game? The concept is notoriously difficult to define. In Philosophical Investigations, Ludwig Wittgenstein demonstrated that 'the elements of games, such as play, rules, and competition, all fail to adequately define what games are' [quoted from Wikipedia]. Games and play vary so much that it is difficult to use a catch-all definition. Newman suggests that a common characteristic of games is a rule-system that players must abide by. Without a rule system of sorts, a game ceases to be a game and constitutes mere playing. Newman cites Caillois (2001) and his distinction between paidia and ludus, referring to activities with simple and complex rules respectively. Simple play, say with a Frisbee, lacks the more complex rule systems of games such as rugby or Monopoly. However, there is a grey area between the two distinctions, as there is no definite line where play becomes a game and ceases to be just play. In addition, there are times when play is an inherent part of games, and some might argue that while play doesn't always involve games, play is an integral part of all games.
In his book Homo Ludens, Johan Huizinga discusses the importance of play in culture, society and learning. He proposes that play helps us to form a relationship with the object or person we are playing with. For example, infants are said to be 'playing' with something when they are experimenting with the object's nature, getting to grips with what it can and cannot do. As Paul Feyerabend writes in his book Conquest of Abundance (1999) (quoted by Kane), '[Science] is a bricolage of experimentation ... initial playful activity is an essential prerequisite for the final act of understanding'. In addition, Kane writes (in Chapter 2, The Play Ethic): 'The moment of play is identified as a generator of originality, energy and new development'. Hence play is an important, indeed necessary, experience for infants' and children's learning. Why not for adults too? The inclusion of play and playability engages the player, contributing significant motivational allure and adding to much of the *appeal* of a game.
Aside from play, what else makes a good game? Competition is often a feature of games, and can be said to add a dimension of thrill. Caillois categorises the competitive element of a game as *agon*, recognising it as a feature of many rule-based games. The agon element certainly motivates players. Competitiveness has been recognised as a biological trait, linked to the urge for survival, and is likewise linked to winning, a primary motivation for many game players. Thus a game may be more appealing to a player if there is a good chance s/he might win; however, if it is too easy, the game may seem pointless and become de-motivating. [In contrast, some players may find that *collaboration* constitutes a more important aspect of appeal. The communication and relationship-building that come from joint effort may for some players be a real attraction.] Caillois writes in his book 'Man, Play and Games' that 'the practice of agon presupposes sustained attention, appropriate training, assiduous application, and the desire to win. It implies discipline and perseverance.' These requirements for agon-based games are not only likely to increase motivation; they are all skills likely to be transferable to other fields, such as work, recreation, and even studying.
However, neither competition nor collaboration is entirely necessary for a game's appeal. Take Solitaire, for example: there is neither competition nor collaboration, but there *is* a challenge. And this aspect of challenge goes right to the heart of gaming of any sort. A game is not a game unless the player(s) must resolve some problem posed within a certain framework of rules. Whether it's how best to stick a ball in a net when 11 people are trying to stop you, or how best to invest your money in houses and hotels so others lose theirs, games will always involve an element of problem-solving within a framework of rules.
And it's this concept of *challenge* that best answers the question of what makes a good game. Newman describes Jessen's (1995) experiment observing children playing Transport Tycoon, which found that 'working out the rules of a videogame constitutes a large part of the fascination and challenge and is a principal motivation for play'. He also cites Rouse (2001), whose identification of why players play games includes challenge and immersion. Although this study focused on videogames, the same could be said of other, non-digital games too. People are intrigued by 'novel and exciting situations to experience' and are stimulated by the 'refinement of performance through replay and practice'. Newman refers to Danesi (2002) with respect to puzzle games, stating that 'part of the appeal of puzzle games arises from the disruption of order' and the 'reinstating of the equilibrium state', and this resolution of a problem is what Rouse (2001) and Crawford (1984) 'identify as central motivations for play'. When we noticeably improve at something, or figure out how best to do something, it feels good. The satisfaction of working something out can be a strong motivator. If the challenge is too easy, however, this feeling of accomplishment diminishes; likewise, too hard and a player may quickly become de-motivated. As Bernie de Koven writes, 'when the challenge is greater than our abilities, we become anxious and potentially dead. When the challenge is significantly less than that of which we are worthy, we become bored, and potentially dead'. So part of what makes a good game is a challenge that is tough enough to require improvement of skills/knowledge, but not *too* tough (a rough sketch of this balance follows below).
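De Koven's observation maps neatly onto Csikszentmihalyi's flow channel, and can be sketched as a toy classifier: challenge well above ability predicts anxiety, well below predicts boredom, and a rough match predicts flow. The 20% tolerance band is my own assumption for illustration.

```python
# A toy rendering of de Koven's rule of thumb: flow lives where
# challenge roughly matches ability. The 20% tolerance band is an
# illustrative assumption, not a figure from de Koven or
# Csikszentmihalyi.

def play_state(challenge: float, ability: float, tolerance: float = 0.2) -> str:
    """Classify the player's likely state from challenge vs ability."""
    if challenge > ability * (1 + tolerance):
        return "anxiety"   # challenge outstrips ability
    if challenge < ability * (1 - tolerance):
        return "boredom"   # ability outstrips challenge
    return "flow"          # roughly matched: absorbed, motivated play

for c in (0.5, 1.0, 2.0):
    print(f"challenge {c} vs ability 1.0 -> {play_state(c, 1.0)}")
```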
Rouse (2001) also mentions 'primacy' as a motivation for game-players, where the player becomes completely involved in their actions. This aspect of immersion is key to a good game. And primacy can also extend to avid observers of games, not just the players themselves, as diehard fans and followers of sports teams will confirm. (The dying seconds of a close match can feel like hours, while at other times the observer may be completely 'lost' in the game.) This primacy state to some extent mirrors Csikszentmihalyi's 'flow state'. David Farmer quotes how it feels to be in the 'flow' state:
[1] Completely involved, focused, concentrating - with this either due to innate curiosity or as the result of training
[2] Sense of ecstasy - of being outside everyday reality
[3] Great inner clarity - knowing what needs to be done and how well it is going
[4] Knowing the activity is doable - that the skills are adequate, and neither anxious nor bored
[5] Sense of serenity - no worries about self, feeling of growing beyond the boundaries of ego - afterwards feeling of transcending ego in ways not thought possible
[6] Timelessness - thoroughly focused on the present, don't notice time passing
[7] Intrinsic motivation - whatever produces "flow" becomes its own reward
As Farley (2000) writes (quoted by Newman), 'good gameplay ... makes you forget yourself and the passage of time, not operating consciously but going with the flow'. Any game designed well enough to encourage this state in its players is likely to hold great appeal and, as point [7] mentions, this becomes an important feature of its attraction.
So there may be many factors that combine to make a good game; some will depend on the players' interaction with each other, some on the game's design and how the particular game develops. For educators, it is worth considering these factors when designing games for teaching purposes. Combining the play aspect with a challenge that is absorbing and engaging is a challenge in itself, but one which may ultimately prove very rewarding for players and designers alike.
First, what exactly is a game?. The concept of a game is notoriously difficult to define. In Philosophical Investigations, Ludvig Wittgenstein demonstrated that 'the elements of games, such as play, rules, and competition, all fail to adequately define what games are' [quoted from wikipedia]. Games and play vary so much that it is difficult to use a catch-all definition. Newman suggest that a common characteristic of games is that there is a rule-system that players must abide by. Without a rule system of sorts, a game ceases to be a game, and constitutes merely playing. Newman cites Callois (2001) and his distinction between paidea and ludus, referring to activities with simple and complex rules respectively. Simple play, say with a Frisbee, lacks the more complex rule systems of games such as rugby, or monopoly. However, there is a grey area between the two distinctions, as there in no real definite line when play becomes a game, and ceases to be just play. In addition, there are times when play is an inherent part of games, and some might argue that while games aren't always involved in play, play is an integral part of all games.
In his book Home Ludens, Johan Huizinga discusses the importance of play in culture society and learning. He proposes that play helps us to form a relationship with the object or person we are playing with. For example, infants are said to be 'playing' with something, when they are experimenting with the object's nature, getting to grips with what it can and cannot do. As Paul Feyerabend writes in his book Conquest of Abundance (1999) (Quoted by Kane) '[Science] is a bricolage of experimentation ... initial playful activity is an essential prerequisite for the final act of understanding'. In addition, Kane writes (in Chapter 2, The Play Ethic): 'The moment of play is identified as a generator of originality, energy and new development'. Hence play is an important and indeed necessary experience for infant and children's learning. Why not for adults too? The inclusion of play and playability engages the player, contibuting significant motivational allure, adding to much of the *appeal* of a game.
Aside from play, what else makes a good game? Competition is often a feature of games, and can add a dimension of thrill. Caillois categorises the competitive element of a game as *agon*, recognising it as a feature of many rule-based games. The agon element certainly motivates players: competitiveness has been recognised as a biological trait linked to the urge for survival, and winning is a primary motivation for many game players. Thus a game may be more appealing to a player if there is a good chance s/he might win - though if winning is too easy, the game may seem pointless and become de-motivating. [In contrast, some players may find that *collaboration* constitutes a more important aspect of appeal. The communication and relationship-building that come from joint effort may for some players be a real attraction.] Caillois writes in his book *Man, Play and Games* that 'the practice of agon presupposes sustained attention, appropriate training, assiduous application, and the desire to win. It implies discipline and perseverance.' These requirements of agon-based games are not only likely to increase motivation; they are also qualities likely to transfer to other fields, such as work, recreation, and even studying.
However, neither competition nor collaboration is strictly necessary for a game's appeal. Take Solitaire, for example: there is neither competition nor collaboration, but there *is* a challenge. And this aspect of challenge goes right to the heart of gaming of any sort. A game is not a game unless the player(s) must resolve some problem posed within a certain framework of rules. Whether it's how best to put a ball in a net when eleven people are trying to stop you, or how best to invest your money in houses and hotels so that others lose theirs, games will always involve an element of problem-solving within a framework of rules.
And it's this concept of *challenge* that best answers the question of what makes a good game. Newman describes Jessen's (1995) study of children playing *Transport Tycoon*, which found that 'working out the rules of a videogame constitutes a large part of the fascination and challenge and is a principal motivation for play'. He also cites Rouse (2001), whose list of reasons why players play games includes challenge and immersion. Although this work focused on videogames, the same could be said of non-digital games too. People are intrigued by 'novel and exciting situations to experience' and are stimulated by the 'refinement of performance through replay and practice'. Newman refers to Danesi (2002) with respect to puzzle games, stating that 'part of the appeal of puzzle games arises from the disruption of order' and the 'reinstating of the equilibrium state', and this resolution of a problem is what Rouse (2001) and Crawford (1984) 'identify as central motivations for play'. When we noticeably improve at something, or figure out how best to do it, it feels good; the satisfaction of working something out can be a strong motivator. If the challenge is too easy, however, this feeling of accomplishment diminishes - likewise, if it is too hard, a player may quickly become de-motivated. As Bernie DeKoven writes, 'when the challenge is greater than our abilities, we become anxious and potentially dead. When the challenge is significantly less than that of which we are worthy, we become bored, and potentially dead'. So part of what makes a good game is a challenge tough enough to require improvement of skills and knowledge, but not *too* tough.
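For readers who like a concrete illustration of this 'not too easy, not too hard' balance: game designers sometimes pursue it through dynamic difficulty adjustment, nudging the challenge to sit just above the player's current skill. The minimal Python sketch below is my own illustration of the idea - the class, update rule, and constants are hypothetical and not drawn from any of the authors cited here.

```python
# A minimal, hypothetical sketch of dynamic difficulty adjustment (DDA):
# keep the challenge slightly above the player's estimated skill, so the
# player is neither bored (challenge too low) nor anxious (too high).
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PlayerModel:
    skill: float = 0.5       # estimated skill, 0.0 (novice) to 1.0 (expert)
    challenge: float = 0.55  # current difficulty on the same scale

    def record_attempt(self, succeeded: bool) -> None:
        """Update the skill estimate from one attempt at the current challenge."""
        # Exponential moving average: success at this difficulty nudges the
        # skill estimate toward it; failure nudges it somewhat below.
        target = self.challenge if succeeded else self.challenge - 0.2
        self.skill += 0.3 * (target - self.skill)
        self.skill = min(max(self.skill, 0.0), 1.0)

    def next_challenge(self) -> float:
        """Pick the next difficulty: a small step above skill (the 'flow channel')."""
        self.challenge = min(self.skill + 0.1, 1.0)
        return self.challenge

if __name__ == "__main__":
    player = PlayerModel()
    for succeeded in [True, True, False, True, True]:
        player.record_attempt(succeeded)
        print(f"skill={player.skill:.2f}  next challenge={player.next_challenge():.2f}")
```

The design choice in the sketch mirrors the point above: after each success the difficulty creeps up, and after failures it eases off, so the player keeps earning the satisfaction of working things out without tipping into frustration.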
Rouse (2001) also mentions 'primacy' as a motivation for game-players, where the player becomes completely absorbed in their actions. This aspect of immersion is key to a good game. And primacy can extend to avid observers of games, not just the players themselves, as diehard fans and followers of sports teams will confirm. (The dying seconds of a close match can feel like hours, while at other times the observer may be completely 'lost' in the game.) This state of primacy to some extent mirrors Csikszentmihalyi's 'flow' state. David Farmer lists how it feels to be in 'flow':
[1] Completely involved, focused, concentrating - with this either due to innate curiosity or as the result of training
[2] Sense of ecstasy - of being outside everyday reality
[3] Great inner clarity - knowing what needs to be done and how well it is going
[4] Knowing the activity is doable - that the skills are adequate, and feeling neither anxious nor bored
[5] Sense of serenity - no worries about self, feeling of growing beyond the boundaries of ego - afterwards feeling of transcending ego in ways not thought possible
[6] Timelessness - thoroughly focused on the present, not noticing time passing
[7] Intrinsic motivation - whatever produces "flow" becomes its own reward
As Farley (2000) writes (quoted by Newman), 'good gameplay ... makes you forget yourself and the passage of time, not operating consciously but going with the flow'. Any game designed well enough to encourage this state in its players is likely to hold great appeal - and, as point [7] suggests, the flow state itself becomes an important feature of the game's attraction.
So there may be many factors that combine to make a good game: some depend on the players' interaction with each other, others on the game's design and how a particular game develops. For educators, these factors are worth considering when designing games for teaching purposes. Combining the play aspect with a challenge that is absorbing and engaging is a challenge in itself, but one which may ultimately prove very rewarding for players and designers alike.