
Gradual VR Exposure Therapy in Busy Environments

Info: 18752 words (75 pages) Dissertation
Published: 9th Dec 2019


Table of Contents

Acknowledgements

List of Tables

List of Figures

List of Accompanying Material

Declaration of Originality

Definitions and Abbreviations

1 Introduction

1.1 Anxiety Disorders

1.2 Anxiety Disorder contextual background

1.2.1 Social Phobia (SP)

1.2.2 Generalised Anxiety Disorder (GAD)

1.2.3 Panic Disorder (PD)

1.3 Treatments for Anxiety Disorders

1.4 Virtual reality

1.4.1 Immersion and Presence

1.5 Project Rationale

1.6 Project aims and Objectives

1.6.1 Project Aim

1.6.2 Project objectives

1.7 Thesis Outline

1.7.1 Chapter 1: Introduction

1.7.2 Chapter 2: Literature Review

1.7.3 Chapter 3: Materials and Methods

1.7.4 Chapter 4: Results (Development Outcomes)

1.7.5 Chapter 5: Evaluation

1.7.6 Chapter 6: Discussion and Conclusion

2 Literature Review

2.1 Introduction

2.2 Exposure Therapy

2.3 VR Gradual Exposure Therapy efficacy for Anxiety disorders

2.3.1 VRET in the specific context

2.4 Conclusion

3 Materials and Method

3.1 Materials

3.1.1 Software

3.1.2 Hardware

3.2 Method

3.2.1 Requirements Analysis

3.2.2 Storyboard

3.2.3 Floorplan

3.3 Main environment creation

3.3.1 Model and Graphics Acquisition

3.3.2 Main Infrastructure

3.3.3 Register/Login system

3.3.4 Workstations

3.3.5 Narrative/Task

3.4 Non-Playing Characters (NPC)

3.4.1 Character Animation

3.4.2 Character Shaders

3.4.3 Character Navigation

3.4.4 Navigation Implementation

3.5 Sound

3.5.1 Critical Sound Cues acquisition and implementation

3.5.2 Multiple Random Audio Source

3.6 Lighting

3.7 Leap Motion integration

3.8 Oculus Rift integration

3.9 Application Structure

3.9.1 Scenes

4 Results

4.1 Scenes

4.1.1 Login / Register Scene

4.1.2 Main Menu Scene

4.1.3 MedVis Virtual Environment Scene

4.1.4 Ready for Next Level Confirmation Scene

4.1.5 Save/Quit confirmation Scene

5 Evaluation

6 Conclusion

Appendix A: Supplementary Notes

References

List of Tables

List of Figures

Figure 1: Requirement diagram showing hierarchical relationship of the requirements

Figure 2: Storyboard illustrating user flow UI to login or register

Figure 3: Storyboard showing increased character count mirroring exposure levels 1 to 7 in the scene

Figure 4: Floorplan of the MedVis lab at GSA SimVis

Figure 5: Photos of the MedVis lab at GSA SimVis

Figure 6: Diagram detailing Agile-based development process

Figure 7: Initial creation stages of main classroom structure

Figure 8: Shows registration system and validation messages

Figure 9: Different props assembled on each desk to avoid a uniform look

Figure 10: Simple, in-game block building task

Figure 11: Mixamo 3D Model animation UI with model imported from CGTrader

Figure 12: Animator window and Animator controller in the Inspector

Figure 13: Illustration of the repair process to fix faulty shaders on character models

Figure 14: Images illustrating baked NavMesh and waypoints test

Figure 15: Waypoints are shown as red spheres (Left) and baked NavMesh to the right

Figure 16: Waypoints array in the inspector

Figure 17: Script showing part of the patrol random waypoint functionality

Figure 18: Random Sound Script and how it appears in Unity Inspector (inset red box)

Figure 19: Image shows stages of light intensity from weakest (top) to strongest and complete lighting rig (bottom)

Figure 20: Final Login and Register UI design

Figure 21: Custom colour scheme designed for UI buttons

Figure 22: Final UI design of Main Menu

Figure 23: Initial view on commencement of exposure in main VE

Figure 24: Shows user interface displayed before proceeding to next exposure level

Figure 25: Shows UI when subject saves exposure level ‘in-game’.

Definitions and Abbreviations

VRET     Virtual Reality Exposure Therapy

VR     Virtual Reality

3D     3 Dimensional

GSA     Glasgow School of Art

OR     Oculus Rift

LM     Leap Motion

AD     Anxiety Disorders

SIF     Social Innovation Fund

FA     Friendly Access

SimVis     School of Simulation and Visualisation

MedVis    Medical Visualisation Lab

SP     Social Phobia

GAD     Generalised Anxiety Disorder

PD     Panic Disorder

HMD     Head Mounted Display

CBT     Cognitive Behaviour Therapy

GET     Gradual Exposure Therapy

HR     Heart Rate

SCL     Skin Conductance Level

UI     User Interface

VE     Virtual Environment

NPC     Non-Playing Character

NavMesh    Navigation Mesh

1         Introduction

This paper documents the life cycle of the design, development and evaluation of a Virtual Reality (VR) systematic desensitisation, also known as Gradual Exposure Therapy (GET), tool that utilises the Oculus Rift (OR) headset and a Leap Motion (LM) hand-tracking device. The application will be employed in the treatment of individuals with hidden disabilities, such as anxiety disorders (AD). This project is part of a broader research project with stakeholders including The European Social Fund Social Innovation Fund (SIF), Friendly Access (FA), the School of Simulation and Visualisation at the Glasgow School of Art (SimVis) and Crag3D Ltd (C3D). The research, development and evaluation of this project will guide the central SIF project by generating knowledge to inform the ongoing SIF development. To begin, the following section presents contextual background on Anxiety Disorders (AD) and existing treatments for them. Additionally, it provides an overview of VR and its related components of immersion and presence. Finally, the chapter defines the project rationale, aims and objectives and maps out the thesis structure.

1.1        Anxiety Disorders

Anxiety is an emotional response to stress which is fundamental to our biological survival mechanism against threats and is experienced by everyone at some point in their lives (Marks and Nesse, 1994). However, an excess of anxiety can become crippling and can restrict quality of life. For example, anxiety can force individuals to avoid situations, places and people, increasing isolation while forming barriers to accessibility and well-being (Sherbourne et al., 2010). The symptoms of anxiety are revealed in a range of bodily, cognitive and behavioural responses. Common somatic symptoms are accelerated heart rate or palpitations, shortness of breath, or insomnia. Cognitive components are ordinarily worry and uncontrollable overthinking, where an individual ruminates on a particular event or situation. The most common behavioural aspect is avoidance, where a person will break regular routines to avoid specific social situations, objects or individuals. When all three of these systems converge, the individual's quality of life is notably diminished.

Although ADs are among the most prevalent mental health conditions, they can be challenging to distinguish from other mental health issues. For instance, some symptoms share characteristics with medical illnesses associated with anxiety; to put it another way, are the medical issues causing the anxiety, or is the anxiety the cause of the symptoms?

A previous study by Baxter et al. (2013) established the global pervasiveness of AD; significantly, the investigation suggests that one in 14 people has an AD, and the paper ascertained that one in nine people will experience an AD in a given year. However, although the research was wide-ranging and the statistics were striking, it also highlighted substantive cultural, economic and social factors that could adversely affect the meaningfulness of the results. For example, reported figures for AD are higher in Western European, North American and Australasian populations than in other communities.

1.2        Anxiety Disorder contextual background

The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR; APA, 2000), lists six primary anxiety disorders. However, for the scope of this research, the following are pertinent:

1.2.1       Social Phobia (SP)

Research by Hirsch and Clark (2004) found that SP manifests in a vicious cycle of anticipatory anxiety about a perceived threatening situation. In other words, the individual fears being negatively judged, embarrassed or humiliated due to poor performance in social situations. A consequence of the perceived threat is that the person routinely performs poorly in that situation, which leads to further anxiety about future situations, and so on. Sufferers become quiet and withdrawn, developing fewer personal relationships, in turn leading to isolation (Stein and Stein, 2008).

1.2.2       Generalised Anxiety Disorder (GAD)

There are comparisons between GAD and SP in that sufferers of both ruminate on social situations. However, individuals with GAD tend to worry about a much broader range of events, activities, day-to-day routines and localities (Brown, O'Leary and Barlow, 2001). For instance, people with GAD may experience unrealistic and difficult-to-control thoughts about money, family or failing an exam, or exhibit hypochondriac tendencies without any logical reason to worry.

1.2.3       Panic Disorder (PD)

Although comorbid panic is a somatic feature of both SP and GAD, Panic Disorder is characterised by spontaneous panic attacks and a preoccupation with their recurrence (Keeley and Storch, 2009). To put it another way, the individual is tormented by constant worry and is continuously fearful of when the next attack will occur. Sufferers therefore become disabled by their avoidance of events and activities that they perceive to be triggers for such attacks.

1.3        Treatments for Anxiety Disorders

Treatments for AD utilise diverse approaches and strategies; the most widespread are Behavioural Therapy, Cognitive-Behavioural Therapy and Pharmacotherapy, each of which can be useful used in isolation. Behavioural Therapy options involve exposure, desensitisation and modelling. Cognitive-Behavioural Therapy techniques teach the subject procedures for identifying anxiety triggers and recognising anxious feelings. In addition, the subject acquires coping strategies and practises self-reinforcement (Kendall et al., 2005). However, a controlled study by Bandelow et al. (2012) illustrated that applying a combination of behavioural/cognitive-behavioural approaches supplemented with pharmacological methods yielded more favourable outcomes. It would seem multiple methods of treatment reflect the concurrence of anxiety disorder symptoms and the subsequent complexity of treating particular cases.

1.4        Virtual reality

Although there is no universal definition of Virtual Reality, Boas (2013) reports that Virtual Reality has the objective of creating an immersive virtual world affording opportunities for interaction. Such interactions can provide stimulating feedback to give the experience heightened realism, meaning users can repeatedly engage with the virtual environment in contexts that ordinarily may not be accessible to them (Gorini and Riva, 2008). Virtual Reality technology is no longer the subject of futuristic films and television; what was once relatively expensive and inaccessible hardware, such as the VR headset, is now mass produced and financially within reach of many people. Immersive VR approaches have been embraced in medical therapy and research domains to treat and investigate a variety of conditions. For a comprehensive VR experience, some ingredients are critical: first, a plausible virtual world, model or simulation; next, a head-mounted display (HMD) with two screens and stereo sound, linked to the computer, that fully immerses users as they traverse the virtual environment. VR systems have evolved to a point where audio, haptic, olfactory and motion inputs facilitate more complex, multimodal interactions within the virtual environment to deliver and mimic more natural human behaviours (Turk, 2014). In recent times, VR technologies have diversified from the domain of video games and entertainment to be widely embraced by the education, health and training sectors. Training simulators are one of the most successful examples of VR applications. For example, surgeons can train in VR environments with elevated levels of success (Nagendran et al., 2013). Virtual training simulation permits users to develop existing cognitive and motor skills through interactions mediated by sensory devices, whereby the experience learned in the virtual environment is directly transferable to the real world, proffering both temporal and financial benefits.
Further, implementing fully immersive VR as a pedagogical tool has shown that younger learners attain elevated levels of engagement and satisfaction (Wrzesien and Alcañiz Raya, 2010).

1.4.1       Immersion and Presence

Immersion can be classified into levels comprising non-immersion, semi-immersion and full immersion. The most common example of a non-immersive system is a video game or film, where the simulation confronts the user on a single flat screen. Semi-immersive systems such as CAVE (Cave Automatic Virtual Environment) comprise a series of projectors that broadcast onto enveloping screens, creating the illusion of immersion with a broad field of view (Muhanna, 2015). Later iterations of this type of semi-immersive experience have demonstrated that it can be more effective in some contexts (North and North, 2016). Fully immersive VR environments allow users to interface with the simulation via an HMD (Head Mounted Display) and integration with multimodal inputs. Levels of immersion are therefore intimately connected to the objective characteristics of the system and the degree of sensorimotor stimulation within the virtual environment (Bohil, Alicea and Biocca, 2011; Slater and Wilbur, 1997), that is to say, the extent and fidelity of multimodal stimulus. Consequently, virtual environments are an excellent platform for research that requires the study of human responses to sight, sound and touch as a unified experience, by placing the user inside the experience.

When discussing VR, the notion of presence, or the sense of being there, is pervasive in the literature. Previous research (Price et al., 2011) proposes that effectuating an elevated feeling of presence should be a consideration when developing a VR system. According to Cummings and Bailenson (2016), higher levels of presence are believed to result in stronger emotional responses to stimuli and interactions that mirror real-world scenarios. However, presence is a highly subjective phenomenon that is experienced to varying degrees from person to person and is contingent on narrative, contextual understanding and prior experiences. According to Slater and Wilbur (1997), a sense of presence is a person's subjective and objective state when evaluating an environment. The subjective aspect correlates to their appraisal of "being there", the extent to which that person deems the virtual environment (VE) as "place-like". The objective aspect is a discernible behavioural phenomenon: the degree to which individuals function in a VE the way they would behave in comparable conditions in everyday contexts.

1.5        Project Rationale

Individuals with hidden disabilities such as anxiety disorders or mental health problems are far more likely to experience high levels of discrimination, isolation, fear, anxiety and poverty. Environmental stressors such as sounds, light and crowds can become barriers to independent living. This study proposes the design and development of a virtual reality tool to facilitate gradual sensory exposure to virtual experiences in a safe and controlled educational context. In doing so, we can ascertain the effectiveness of Virtual Reality Exposure Therapy (VRET) in educational contexts relative to existing desensitisation strategies.

1.6        Project aims and Objectives

1.6.1       Project Aim

Determine the effectiveness of VRET in familiarising individuals with hidden disabilities with specific environmental stressors within a higher education classroom context.

1.6.2       Project objectives

  • Review the evidence of existing implementations of VRET used to treat AD and other mental health issues.
  • Design, develop, test and evaluate a VRET familiarisation tool for the treatment of AD.
  • Formulate improvements to future VRET interventions via the evaluation of the tool created for this project.

1.7        Thesis Outline

1.7.1       Chapter 1: Introduction

This chapter provides the reader with contextual background information on anxiety disorders. It identifies some of the most common strands of the disability and gives a general overview of historical treatments for AD. Further, it offers an overview of virtual reality and the related components of immersion and presence. Finally, it identifies the project rationale, aims and objectives.

1.7.2       Chapter 2: Literature Review

This chapter examines antecedent methods of exposure therapy and proceeds to critically evaluate current implementations of VRET in the treatment of hidden disabilities, specifically anxiety disorders. The research then analyses instances of VRET used to treat anxiety disorders in the university context.

1.7.3       Chapter 3: Materials and Methods

This chapter identifies the software, hardware and workflow employed to design and develop the project. It identifies and presents functional and non-functional requirements by way of a requirements analysis table and a Unified Modelling Language (UML) diagram. Further, it recounts a conceptual model of the software design in a UML class diagram. The project workflow is laid out, along with a detailed description of the development process, and the chapter concludes with an inventory of all third-party assets.

1.7.4       Chapter 4: Results (Development Outcomes)

This chapter describes the outcome of the case study to develop a VRET tool for treatment of AD in the university context.

1.7.5       Chapter 5: Evaluation

This chapter details the methodology and describes the procedure for testing and evaluating the efficacy of VRET in classroom contexts. It analyses the data and presents the findings and their implications.

1.7.6       Chapter 6: Discussion and Conclusion

Chapter 6 revisits the overall aim and specific objectives of this research project, with the findings summarised and related to the research objectives. Conclusions from this research are derived and linked to the research objectives, and based on these conclusions, recommendations are made.

2         Literature Review

2.1        Introduction

Immersive VR approaches have been embraced in medical therapy and research domains to treat and investigate a variety of conditions. This chapter will explore traditional Exposure Therapy techniques and the proliferation of VR Exposure Therapy and its efficaciousness in treating anxiety disorders.

2.2        Exposure Therapy

Gradual Exposure Therapy (GET) is a strand of Cognitive Behaviour Therapy (CBT). CBT enables individuals to target negative thoughts or behaviours and to supersede them with positive ones. CBT is the most widespread and commonly used therapy for the treatment of depression and phobia. A principal aim of CBT is to reduce anxiety by eliminating beliefs or behaviours that help to maintain irrational emotions (Ougrin, 2011). In GET, the primary strategy is repeated and graduated exposure to a stimulus that can evoke fear and anxiety symptoms. Stimuli may comprise people or animals, as well as objects, events (real or imagined), or cognitive components such as memories and inappropriate thoughts. Prior non-computer-mediated variations of GET include in vivo exposure (i.e., exposure conducted "in person"), where the subject directly confronts the source of anxiety through embodied experience; for example, someone who has a fear of snakes may handle one. Another variant is imaginal exposure, where one is encouraged to vividly imagine an event, object or person that elicits anxiety. For example, a person who has a fear of public speaking may be asked to reflect on and discuss the last time they had to undertake that activity. On occasion, in vivo and imaginal methods are integrated into the same treatment plan to improve outcomes, although one study yielded less positive results for treatments that did not also incorporate a pharmacological component (Öst, Thulin and Ramnerö, 2004). The dissemination of in vivo GET as a treatment for some AD has accelerated due to the ease of teaching it to therapists. Rauch, Eftekhari and Ruzek (2012) argue that this straightforward transfer of knowledge means therapists with or without a background in CBT can easily absorb the structures and methods and subsequently administer the treatment.
However, the fact that those without expert knowledge of the theory underpinning GET can deliver the treatment could introduce errors in some aspects of it, for instance the timing, order or intensity of certain stimuli. Furthermore, there appears to be no universal framework for a variety of factors in exposure therapy, for example creating a hierarchy of stressors or setting out guidelines for other family members to follow during treatment, as Kendall notes in previous research. Although the literature shows that overall outcomes from exposure therapy are positive, there is still some argument that in vivo exposure to stressors can further exacerbate anxiety symptoms. This was evidenced in a study by Andersson et al. (2006), where almost half of the participants failed to attend the second in vivo session, which suggests avoidance behaviour. Contrastingly, a study by Hembree et al. (2003) demonstrated that treatment programmes that included exposure therapy did not increase dropout rates compared to treatments that did not include exposure, reinforcing the effectiveness of exposure as a tool to combat AD. However, logistically organising and administering realistic interventions in some situations or contexts can be a challenge, considering variables of location, cost and safety (Diemer, Pauli and Mühlberger, 2015).

2.3        VR Gradual Exposure Therapy efficacy for Anxiety disorders

VR Exposure Therapy (VRET) builds on the traditional strategies of exposure therapy by presenting a virtual 3D simulation in which the patient can be fully immersed and exposed to anxiety-inducing stimuli. Earlier in this paper, it was noted that in some cases subjects were reluctant to undergo in vivo exposure treatment due to their avoidance of real stimuli. By employing a VR approach, the real-life stimuli are replaced with virtual, computer-generated simulations where the level of control and intensity can be finely monitored, enabling those who are too afraid of in vivo exposure to obtain treatment. A study by Grynszpan et al. (2014) evidenced overall effectiveness for technology-based interventions in the treatment of hidden disabilities; they concluded that the results of their meta-analysis should motivate further research, design and development of technology-based interventions. In a meta-analysis by Powers and Emmelkamp (2008), it was acknowledged that until that point VRET had been successful in treating some AD, but there had been little statistical research comparing traditional in vivo treatment with the delivery of VRET. Although the study revealed that VR is extremely effective in the treatment of AD, and statistically more effective than in vivo exposure, the case studies Powers and Emmelkamp analysed tended to focus on specific types of AD, for example fear of flying or acrophobia. These types of phobias can be well replicated in the virtual domain and lend themselves to VRET.

However, for other types of AD there is a paucity of available research. For example, Social Anxiety Disorder (SAD), the fear of negative evaluation, is not as easily replicated and evoked in a VR simulation. In a randomised controlled trial, Anderson et al. (2013) compared VRGET to 'gold standard' exposure group therapy (EGT) for the treatment of SAD, specifically the fear of public speaking. The VR aspect of the experiment had three settings for increasing levels of exposure:

1.    Virtual conference room with up to 5 members

2.    Classroom with 35 members

3.    Auditorium with 100+ members

Besides controlling the number of members in each simulation, other components such as audience reaction exhibiting signs of boredom, interest or hostility could be intensified. The virtual audience could also interrupt the subject with questions which could be tailored to the individual.

Moreover, distractions to the speaker, including mobile phone ringtones, were employed as stressors. Participants were gradually exposed to the stressors mentioned above based on their fear hierarchy. In contrast, the EGT participants were videotaped making a speech in front of smaller groups of up to five people. The experiment revealed similar positive metrics for both approaches, including peak anxiety and behavioural avoidance. Then again, the research did illustrate the challenges faced in creating a digital representation of something as intangible as 'being negatively evaluated'. All things considered, it seems reasonable to assume that future simulations of this type may yield better results if a more concrete notion of how to simulate 'negative evaluation' can be reached.
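A graded system of this kind can be thought of as a configuration problem: each rung of the fear hierarchy bundles a set of stressor parameters that the therapist dials up between sessions. The sketch below illustrates the idea only; the class, field names and values are hypothetical and are not taken from Anderson et al.'s system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExposureLevel:
    """One rung of a fear hierarchy: stressor parameters the therapist controls."""
    audience_size: int   # number of virtual audience members present
    hostility: float     # audience demeanour, 0.0 (attentive) to 1.0 (hostile)
    interruptions: bool  # whether the audience may interrupt with questions
    distractions: bool   # ambient stressors, e.g. phone ringtones

# A three-rung hierarchy mirroring the conference room / classroom / auditorium settings
HIERARCHY = [
    ExposureLevel(audience_size=5,   hostility=0.2, interruptions=False, distractions=False),
    ExposureLevel(audience_size=35,  hostility=0.5, interruptions=True,  distractions=False),
    ExposureLevel(audience_size=100, hostility=0.8, interruptions=True,  distractions=True),
]

def next_level(current: int) -> int:
    """Advance one rung, clamping at the top of the hierarchy."""
    return min(current + 1, len(HIERARCHY) - 1)
```

Encoding each level as an immutable record makes the therapist's controls explicit and auditable, and progression becomes a simple index advance rather than ad hoc parameter tweaking mid-session.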

Furthermore, in the public speaking context, VRGET seems a less flexible approach than face-to-face EGT dialogue with the therapist and other group members, which could negatively impact results. For example, simulated conversational or dialogue components of VRGET cannot be as nuanced or flexible as in vivo dialogue with a human actor in EGT, which may limit efficacy (Kampmann et al., 2016). That said, VRET does offer more control over other aspects of exposure.

VRET presents further advantages of finely tuned control for the therapist. Aspects of fidelity, frequency and intensity are easily controlled, and if the process overwhelms the patient, the therapist can terminate treatment quickly. Moreover, the repeatability of the process enables subjects to experience focused fear scenarios over and over, which can lead to increased confidence in a shorter time. Repetition in this way might not be possible in some situations, for instance fear of flying; utilising VRET enables the user to repeatedly take off, land or experience the specific part of a flight that elicits anxiety, resulting in fewer sessions, which translates to less monetary outlay for treatment. Further, VRGET can target other sensory modalities, which can heighten immersiveness. The combination of visual, auditory and haptic cues affords an elevated sense of presence, or 'being there', which can elicit feelings of anxiety, resulting in a more compelling VR exposure.

Lombard and Ditton (1997) define the sense of presence as the perceptual illusion of non-mediation. An illusion of non-mediation occurs when users fail to identify the presence of a medium, for example virtual reality, in their immediate environment and react as they would if the medium were not present. The distinction between presence (human responses to a system) and immersion (physical reality) in VEs was argued by Slater and Wilbur (1997). Presence is a subjective phenomenon, such as the feeling of being in a VE, whereas immersion is an objective description of components of the system, such as audio or graphics resolution.

In their work, Diemer et al. (2015) explored perceptual fear cues, emotion and presence concerning VR therapy. Presence is acknowledged to comprise three components: spatial awareness, engagement and realism. In the virtual environment, all three elements combine and can elicit the same emotions and reactions that occur in the real world. With that in mind, it seems fair to say that the amount of fear or emotion experienced during immersion in VR is related to presence, with which Alsina-Jurnet, Gutiérrez-Maldonado and Rangel-Gómez (2011) concur. A similar position is also taken by Diemer et al., who argue that as soon as emotions are engaged, presence is elevated. Despite all of this, the literature also contains conflicting points of view on the correlation between presence and anxiety, whereby many studies that have examined the relationship have focused on a narrow set of specific phobias. Furthermore, many of the studies had methodological limitations, for example small sample sizes or poor qualitative data.

Another study by Diemer et al. (2014) attempted to elucidate whether VRET could evoke common psychophysiological signs of anxiety, for example heart rate (HR) or skin conductance level (SCL). They examined 38 earlier articles, many focusing on habituation effects; habituation is a decrease in reaction to a stimulus that is repeatedly presented to the subject. Moreover, the researchers examined previous work that explored a broad range of stressors and phobias including claustrophobia, post-traumatic stress disorder, social anxiety, performance anxiety and social phobia. Diemer et al. demonstrated that during increasing stressor intensity, HR and SCL were above normal levels, although SCL increases were more significant than HR, and this occurred in both healthy subjects and those with anxiety disorders. However, the effects of VRET on psychophysiological habituation were inconclusive; despite this, the results suggest that VRET is an effective tool across the spectrum of AD. Although the authors endeavoured to include many phobias and disorders, the study was limited by the heterogeneity of variables and methodologies in the preceding work they reviewed, for example stressor intensity and number of sessions. The study would appear to confirm a consensus in much of the literature that there is a lack of a common framework for research on the efficacy of VRET for AD.

Research by Slater et al. (2006) examined the efficacy of VRET on individuals with a fear of public speaking. Forty participants took part, including twenty confident speakers and twenty with the phobia. All participants completed a PRCS (Personal Report of Confidence as a Speaker) survey to determine each person's level of anxiety. Subjects were placed in a moderately low-definition simulation of a seminar room containing only furniture, or furniture plus five virtual audience characters that could show subtle facial expressions, for example boredom, interest or frowning. Each participant's heart rate and somatic responses were recorded, and the results determined that the confident speakers showed no significant difference in somatic response with or without virtual audience members.

On the other hand, the phobic group all displayed increased heart rate, and subjective somatic responses were also elevated during exposure to the virtual audience. Furthermore, the phobic group registered significantly lower heart rates when presented with the VE containing only furniture. The research suggests that, despite low representational fidelity, a simulation can yield strong emotional responses and a sense of presence, which is contrary to the account of Lombard and Ditton (1997), who argued that realism is a core element of presence.

Implementing VRET over traditional methods of treatment delivery enables relatively natural interactions with the system while simultaneously recording brain and physiological activity in a controlled way. VR environments permit researchers to manage multimodal stimulus inputs, so the user's sensorimotor illusion of 'being there' within the simulation is intensified (Bohil, Alicea and Biocca, 2011). By providing realistic stimulation to multiple sensory channels at once, VR engages the sensorimotor system more robustly than the stimuli employed in many other treatments, increasing the elicitation of genuine psychological and behavioural reactions.

2.3.1       VRET in the specific context

Context-specific phobia and anxiety can have negative consequences for young people. For example, anxiety in the classroom can seriously impact educational outcomes and subsequent employment opportunities. Early intervention with VRET in the classroom setting can improve coping skills, which can lead to better academic results and rates of employment. A study by Maskey et al. (2014) explored the efficacy of VRET in their Blue Room experiment, where subjects are completely immersed in a context-specific room with realistic images seamlessly projected onto the walls and ceiling. Participants have the autonomy to move around the VE as no headset is required. The results demonstrated that VRET produced positive and lasting effects for 8 out of 9 subjects in specific contexts. Despite the positive outcomes, it is noted that the evaluation method in the experiment focussed on a general level of anxiety. A more granular study in terms of anxiety sub-types would benefit future work, as would addressing the small sample size.

2.4        Conclusion

The review of the literature has evidenced that VRET is a successful method of delivering treatment to individuals with hidden disabilities such as anxiety disorders. The proliferation of VRET as a treatment tool has increased due to the advantages it holds over traditional types of intervention in cost, control, intensity, context and repeatability. However, the literature does reveal a relatively narrow set of common disorders and phobias being examined; moreover, there is consistent reference to the limitations of experiment sample sizes and the lack of a commonly agreed research framework. There has also been little research into the efficacy of VRET in the higher education classroom context. Therefore, the design, development and evaluation of a VR intervention for anxiety disorders in the classroom context would be a worthwhile endeavour.

3         Materials and Method

This chapter presents the software and hardware used to create the application. The development workflow is outlined, the initial storyboard for the application is detailed, and the application creation process is described.

3.1        Materials

3.1.1       Software

Table 1: Software used for the development process and the area of use.

Software | Description | Area of Use

Unity® software, Free Personal Version 2017.3.0f3 (64-bit) (www.store.unity.com) | A game development engine created by Unity Technologies for the creation of 3D and 2D cross-platform simulations and games, deployable across a range of platforms including desktop, mobile, VR/AR, game consoles and the web. | Environment creation and application development.

Microsoft Visual Studio® 2017, Version 16.6.1 (www.visualstudio.microsoft.com/vs/) | An integrated development environment (IDE) used to develop computer programs, as well as web sites, web apps, web services and mobile applications. | C# scripting for Unity.

Microsoft Visio 2016 (www.products.office.com/en-gb/visio/flowchart-software) | A diagramming application that enables simple visualisation of complex information. | Software design, e.g. use case diagrams, class diagrams, domain modelling.

Oculus for Unity Software Developer Kit (SDK) (www.developer.oculus.com/downloads/unity/) | Software developer kit to integrate the Oculus Rift VR headset with Unity 3D. | Virtual reality integration with Unity 3D.

Leap Motion SDK for Unity (www.developer.leapmotion.com/unity/) | Software developer kit to integrate the Leap Motion sensor with Unity 3D. | Hand tracking integration with Unity 3D.

Adobe Photoshop CC (www.adobe.com) | Photo retouching and manipulation software. | Creation of textures for use in the Unity 3D environment.

Adobe Illustrator CC (www.adobe.com) | Vector graphic creation and manipulation software. | Creation of Unity 3D user interface (UI) elements.

Adobe Audition CC (www.adobe.com) | Sound editing software. | Adjusting the volume and length of sound clips.

Adobe Mixamo (www.mixamo.com) | 3D model rigging software. | Generating animations for rigged models.

3.1.2       Hardware

Table 2: Details regarding hardware use, and a brief description of the hardware as well as the area of use.

Hardware | Description | Area of Use

Dell Inspiron 15 7559 Gaming Laptop (www.dell.com) | VR-ready laptop with Nvidia GPU and 16GB RAM. | All software applications utilised in the project life cycle.

Oculus Rift VR headset (www.oculus.com) | Virtual reality (VR) headset that enables full immersion in 3D graphical environments. | Visualisation of the 3D application in virtual reality.

Leap Motion gesture tracking sensor | Sensor that tracks hand and finger movements, which can be replicated in the 3D environment. | Hand and finger interactions within the 3D VR environment to handle UI events.

3.1.3       Requirements Analysis

The project is part of a SIF project investigating a VRET application for familiarisation with, and desensitisation to, a higher education context for people with hidden disabilities, for example, anxiety disorders. The development process is driven by a requirements analysis, which yields functional and non-functional requirements outlining what the system must do to fulfil the needs of the user. Requirements verification will be via usability testing to ascertain whether the system upholds the mandatory requirements. Although most requirements are compulsory, some were marked as 'good to have' as they are non-essential to the research objectives. During production, some requirements may be dropped due to the iterative nature of the development process and subsequent revisions of requirements.

Figure 1: Requirement diagram showing hierarchical relationship of the requirements

3.1.4       Storyboard

Storyboarding presents an effective way to quickly visualise and map out a very low-fidelity prototype, making the development process clearer going forward. Some UI elements and the user journey into the application are illustrated below in Figure 2, followed by a wireframe of some sample exposure level layouts (Figure 3).

Figure 2: Storyboard illustrating user flow UI to login or register


Figure 3: Storyboard showing increased character count mirroring elevated exposure levels 1 to 7 in the scene

3.1.5       Floorplan

A simple floor plan (Figure 4) of the proposed classroom scenario was designed, which could be referenced during the rendering of the main classroom environment in Unity3D. Photos of the MedVis lab at GSA SimVis (Figure 5) were also taken for reference during the development of the virtual environment.

Figure 4: Floorplan of the MedVis lab at GSA SimVis


Figure 5: Photos of the MedVis lab at GSA SimVis

3.2        Method

The agile-style development process of the application begins by creating a register and login user interface (UI), followed by building the main 3D classroom simulation. A second UI menu is then developed, which can save the user's exposure level within the simulation. Scripts for the character animations and patrol AI are written next. Following this, VR support and finally motion tracking are integrated, concluding with user testing. The workflow is illustrated in Figure 6 below.

Figure 6: Diagram detailing Agile-based development process

 

Table 3: Table of requirements for VRET application

ID Name Description Rationale Related to: Order
PC The system must be functional on PC This requirement will permit users to experience higher graphical representation of the virtual environment through an Oculus. Children: 2, 7, 8, 14 Compulsory
Oculus Rift headset The system must be able to render the simulation with VR headset The users will require a VR Headset to view the digital environments in fully immersive 3D Parent: 1

Children: 13

Related: 6, 7, 8, 14, 41

Compulsory
Datastore The system must provide a datastore in which User ID and level of exposure can be saved The system must provide the researcher with a facility where all users can save login/register details in order that that system can ascertain user’s level of exposure. Children: 10, 11, 27, 28, 29

Related: 30, 31, 35

Compulsory
Interface The system must have an easy-to-use interface. In order to gather good-quality data for the research, the UI must be kept as simple as possible for the user, reducing cognitive load. This way the user is not stressed before the exposure begins. Children: 30, 31, 35, 38

Related: 1, 6, 39, 40, 41

Compulsory
Environment The system must simulate a digital environment that represents a real environment The system must realistically simulate the MedVis lab at GSA SimVis to heighten immersion. Children: 45, 9, 12, 10

Related: 2

Compulsory
Interaction The system must afford the user a degree of interaction within the virtual environment The user can complete a simple task to further replicate the experience of being in a classroom Children:39, 41

Related:2, 4,

Compulsory
Immersion The system must provide fully immersive simulation When using the system with a VR Headset, the user must experience full immersion. This includes visual and audio immersion. Parent: 1

Related:2, 6, 8, 14, 41

Compulsory
Presence The system must ensure the user experiences a high degree of presence Narrative and interaction components must facilitate the psychological involvement of the user within the virtual environment Related: 2, 6, 7, 14, 41, 45 Compulsory
Models The system must display a collection of low poly graphical assets that populate the virtual environment The virtual environment will provide context appropriate, graphical 3D models. These assets should remain as close as possible to the kinds of items and furniture typically found in classroom environments. The assets can be added/removed relative to the exposure level. Parent: 5

Children:15, 16, 17

Compulsory
Register The system must allow the user to register before use First-time participants will register to the system – through the menu interface, they will be required to register with their username, email and password. Registration will be used to create a User record in the datastore Parent: 3

Related: 11, 27, 28, 29

Good to Have
Login The system must allow the user login before use Revisiting users can resume their last level of exposure on subsequent visits to the system Parent: 3

Related: 10, 27, 28, 29

Good to Have
Animation The system shall be able to trigger animations associated with certain character models within the virtual environments Animations assigned to 3D avatars will increase the level of realism of the virtual environments. Realism is expected to increase the degree of presence of the user within the simulation. Parent: 5

Children: 18

Related: 9

Compulsory
First Person View The system must render the virtual environment to the user as a first-person view The system will afford the user an embodied point of view, as if through their own eyes, enabling conditions for presence. Parent:2 Compulsory
Gradual Exposure The system must provide gradual exposure to sensory stimulus during simulation The system will emulate a gradual exposure to critical sensory cues to decrease reactions to feared objects or situations (habituation) and/or weaken previously learned associations between feared objects, activities or situations and bad outcomes (extinction). Parent: 1

Related: 2, 7, 8, 12, 46, 47

Compulsory
Appliances The system must display models of classroom equipment within the virtual environment To present an accurate environment, the system must present a variety of digital representations of appliances that can be found in the real-world environments. Parent: 9

Related: 16, 17

Compulsory
Students The system must display animated students within the virtual environment The system must populate the environments with the appropriate animated characters for realism purposes Parent: 9

Related: 15, 17

Compulsory
Lecturer The system must display animated teachers within the virtual environment The system must populate the environments with the appropriate animated characters for realism purposes Parent: 9

Related: 15, 16

Compulsory
Walking The system shall be able to trigger walking animations associated with character models within the virtual environment Assigning animations to 3D models will heighten the level of realism in the virtual environment. Moreover, it should increase the degree of presence. Parent: 12 Good to have
Visual cues The system must display visual cues that are commonly unsettling for users The system will be able to provide visual cues gradually throughout the levels of complexity as part of the gradual exposure process. Parent: 45

Related: 20

Compulsory
Sound cues The system must allow the reproduction of sound cues that are stressors to users The system will be able to provide critical sound cues gradually throughout the levels of complexity as part of the gradual exposure process. Parent: 45

Related:19

Compulsory
People speaking The system should contain sounds of humans talking within the virtual environment Typical classroom sounds will facilitate a more realistic environment to heighten presence in the virtual environment Parent: 20

Related: 22, 23, 24, 25, 26, 42, 43, 44

Compulsory
People laughing The system should contain sounds of humans laughing within the virtual environment Typical classroom sounds will facilitate a more realistic environment to heighten presence in the virtual environment Parent: 20

Children:

Related:

Compulsory
Ambient sound The system must be able to reproduce ambient sounds of the classroom within the virtual environment Typical classroom sounds will facilitate a more realistic environment to heighten presence in the virtual environment Parent: 20

Related: 21, 22, 24, 25, 26, 42, 43, 44

Compulsory
Footstep sounds The system should contain sounds of people’s footsteps within the virtual environment Within a school/classroom environment, footsteps are a common sound. This adds to the realism of the virtual environment. Parent: 20

Related: 21, 22, 23, 25, 26, 42, 43, 44

Good to have
Footstep sounds The system should contain sounds of people’s footsteps within the virtual environment Within a school/classroom environment, footsteps are a common sound. This adds to the realism of the virtual environment. Parent: 20

Related: 21, 22, 23, 24, 26, 42, 43, 44

Good to have
Door closing The system should contain sounds of doors closing within the virtual environment Within a school/classroom environment, doors opening and closing are commonly heard. This adds to the realism of the virtual environment. Parent: 20

Related: 21, 22, 23, 24, 25, 26, 42, 43, 44

Good to have
Questionnaire VR The system must allow the user to answer a series of questions in VR when completing a level When the user completes a level, the system will require the user to answer a series of Likert-style questions. Results for that completed level will be stored in the datastore. Parent: 3

Related:10, 11, 28, 29

Good to have
Ratings The system must record the scores from the VR questionnaire When the user completes a stage of complexity, the system will require the user to complete a questionnaire based on a Likert scale. This will allow the researcher to measure anxiety and assess the mood of the user throughout the experimental procedure. Parent: 3

Related:10, 11, 27, 29

Good to have
Rehearsal Time The system must show the exposure time for each level The system must cumulatively record the time each user dedicated to rehearsing in each of the environments proposed by the system. Time should be recorded in the datastore when exiting/saving the level. Parent: 3

Related: 10, 11, 27, 28

Good to have
Login/Register UI The system must display a UI for the user to register and sign in A form must be presented to enable data capture for the database Parent: 4

Related: 3, 31, 35

Good to have
Main Menu UI The system must display a Menu UI to display all application options to the user. The system shows the user all UI choices available before exposure begins Children: 32, 33, 34

Related: 3, 4, 30, 35,

Good to have
Start Button The system must enable the user to start a new exposure session Users can begin their treatment Parent: 31

Related:33, 34

Compulsory
Resume Button The system must enable the user to resume a previous exposure session Revisiting users can resume their last level of exposure on subsequent visits to the system Parent: 31

Related:32, 34

Good to have
Quit App Button The system must enable the user to quit the application Users can quit the application without starting or resuming a previous session Parent: 31

Related:32, 33

Compulsory
In-level UI The system must afford the user a means to save or stop the exposure during treatment The user must be able to save progress or stop the exposure quickly at any point during treatment Parent: 4

Children: 36, 37

Related:3, 30, 31

Compulsory
In-exposure Save Button The system must enable the user to save the current exposure session Users must be able to save the exposure session quickly if they have to Parent:

Children:

Related:

Compulsory
In-exposure quit Button The system must enable the user to quit the current exposure session Users must be able to exit the exposure session quickly if they have to Parent: 35

Related: 37

Compulsory
Exposure Level Indication The system must inform the user of the current level when commencing simulation When user starts or resumes a simulation, the system will inform the user graphically of their current level of exposure. Parent: 4 Good to have
Leap Motion The system must integrate with Leap Motion hand motion tracker The user should be able to have some simple multimodal interactions, mediated by motion tracking within the app to elevate presence Parent: 6

Children: 40

Related:4, 31, 41

Good to have
Narrative The system must provide the user with a simple task to complete. A simple task within the virtual environment will help strengthen the narrative of the simulation and increase presence. Parent: 39

Related:4, 6

Good to have
Req. 41 Head tracking The system must be able to utilise the headset's gyroscope information to reproduce head movement in 360 degrees This allows for a more immersive experience and increased realism within the virtual environment, as the user has 360-degree head movement. Parent: 6

Children:

Related: 2, 4, 7, 8, 14

Compulsory
Req. 42 Mobile Phone sound The system should contain mobile phone sounds within the virtual environment Typical classroom sounds will facilitate a more realistic environment to heighten presence in the virtual environment Parent: 20

Related: 21, 22, 23, 24, 25, 26, 43, 44

Compulsory
Req. 43 Thunder The system should contain sounds of thunder within the virtual environment Typical Background sounds will facilitate a more realistic environment to heighten presence in the virtual environment Parent: 20

Related: 21, 22, 23, 24, 25, 26, 42, 44

Good to have
Req. 44 Alarm The system should contain sounds of fire alarm within the virtual environment Typical school sounds will facilitate a more realistic environment to heighten presence in the virtual environment Parent: 20

Related: 21, 22, 23, 24, 25, 26, 42, 43

Compulsory
Req. 45 Accurate The system should be realistic Realism adds to immersion which heightens presence Parent: 5

Related: 8, 9, 12, 47,

Compulsory
Req. 46 Volume The system should have degrees of volume Distance-based volume levels add to the realism of critical sound cues Related: 12, 14, 20, 47 Compulsory
Req. 47 Critical Cues The system must enable the reproduction of critical sensory cues within the environment The primary objective of the tool is to desensitise the user to critical cues. Therefore, the system must be able to display or replicate these critical cues, whether visual or audio. Parent: 5

Children: 19, 20

Related:9, 12, 14, 45, 46

Compulsory

3.3        Main environment creation

3.3.1       Model and Graphics Acquisition

A variety of 3D models and design assets were crucial for creating a realistic representation of the MedVis lab at GSA SimVis. Four main sources were utilised: the Unity Asset Store, Mixamo.com, CGTrader.com and Turbosquid.com. Details of all assets are listed in Table 4 below.

Table 4: Downloaded graphical asset images and source references.

Source

3D Everything (2017). Computer Classroom [model]. Available at: https://assetstore.unity.com/packages/3d/props/interior/computer-classroom-51124 [Accessed 29 May 2018].

Jake Sullivan (2016). Old Office Props Free [model]. Available at: https://assetstore.unity.com/packages/3d/props/interior/old-office-props-free-53735 [Accessed 30 May 2018].

DEV.KROS (2018). Floor Materials Pack [material]. Available at: https://assetstore.unity.com/packages/2d/textures-materials/floor-materials-pack-117563 [Accessed 29 May 2018].

Nobiax/Yughues (2015). Yughues Free Fabric Materials [material]. Available at: https://assetstore.unity.com/packages/2d/textures-materials/fabric/yughues-free-fabric-materials-13002 [Accessed 1 July 2018].

ColorSkies (2018). ColorSkies [skybox]. Available at: https://assetstore.unity.com/packages/2d/textures-materials/sky/colorskies-91541 [Accessed 30 June 2018].

Mixamo (n.d.). Liam [model]. Available at: https://www.mixamo.com [Accessed 2 July 2018].

Mixamo (n.d.). Regina [model]. Available at: https://www.mixamo.com [Accessed 2 July 2018].

Mixamo (n.d.). Malcolm [model]. Available at: https://www.mixamo.com [Accessed 3 July 2018].

Mixamo (n.d.). Stefani [model]. Available at: https://www.mixamo.com [Accessed 3 July 2018].

Luis77 (2017). Young Hipster 3D model [model]. Available at: https://www.cgtrader.com/3d-models/character/man/young-hipster [Accessed 7 July 2018].

Sapsan008 (2018). Teenager or dude low-poly 3D [model]. Available at: https://www.cgtrader.com/3d-models/character/man/teenager-or-dude [Accessed 30 June 2018].

3dCap (2017). Indian Business Man Sitting low-poly 3D [model]. Available at: https://www.cgtrader.com/3d-models/character/man/business-man-804 [Accessed 29 June 2018].

Mixamo (n.d.). Andromeda [model]. Available at: https://www.mixamo.com [Accessed 12 July 2018].

Mixamo (n.d.). Shae [model]. Available at: https://www.mixamo.com [Accessed 12 July 2018].

realbeam (2017). WOMAN low-poly 3D model [model]. Available at: https://www.cgtrader.com/3d-models/character/woman/woman-3367c5a0-904b-448d-b9ed-a019494e3943 [Accessed 30 June 2018].

Mixamo (n.d.). Remy [model]. Available at: https://www.mixamo.com [Accessed 3 July 2018].

Vertex Studio (2017). Free Smartphone [model]. Available at: https://assetstore.unity.com/packages/3d/props/electronics/free-smartphone-90324 [Accessed 3 July 2018].

MICHSKY (2018). Modern UI Pack [sprites]. Available at: https://assetstore.unity.com/packages/tools/gui/modern-ui-pack-114792 [Accessed 6 May 2018].

Unity Technologies (2018). TextMesh Pro [sprites]. Available at: https://assetstore.unity.com/packages/essentials/beta-projects/textmesh-pro-84126 [Accessed 5 May 2018].

Unity Technologies (2017). Post Processing Stack [lighting]. Available at: https://assetstore.unity.com/packages/essentials/post-processing-stack-83912 [Accessed 1 July 2018].

3.3.2       Main Infrastructure

Based on the drawings of the floorplan (Figure 4) and storyboard (Figures 2 and 3), a rudimentary version of the environment was blocked out using primitive shapes in Unity3D. The window frames, desks and whiteboard were sourced from the Computer Classroom package on the Unity Asset Store. Some primary materials for colouring, including wall and floor components, were also added, as illustrated in Figure 7.

Figure 7: Initial creation stages of main classroom structure

3.3.3       Register/Login system

A registration system was designed to capture the user's username, email and password, so that the user's exposure level within the application can be saved and resumed at a later point when the same user logs back in with their saved details. Separate scripts were written for Register and Login for better organisation of code, both utilising the System.IO namespace, which permits writing to, and reading from, files that the application creates locally. For security purposes, the password is encrypted at registration and decrypted at login. The encryption loops through the characters of the password, casting each to type char and multiplying its character code by its one-based index (i + 1, where i = 1 is the first letter of the password), meaning the password written to the datastore is obscured. The login script decrypts the password by applying the inverse of this logic, dividing each character code by the same index (SEE APPENDIX XX). Both forms also include validation, which is illustrated in Figure 8 below.


Figure 8: Shows registration system and validation messages
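The index-based scheme described above can be sketched in plain C# as follows. This is an illustrative reconstruction rather than the project's actual Register/Login code, and the class and method names (PasswordCipher, Encrypt, Decrypt) are assumptions; note that scaling character codes by their index is light obfuscation, not cryptographically secure encryption.

```csharp
using System.Text;

// Illustrative sketch of the multiply-by-index password scheme (assumed names).
public static class PasswordCipher
{
    public static string Encrypt(string password)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < password.Length; i++)
        {
            // Each character code is multiplied by its one-based index,
            // so the stored string no longer reads as the plain password.
            sb.Append((char)(password[i] * (i + 1)));
        }
        return sb.ToString();
    }

    public static string Decrypt(string stored)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < stored.Length; i++)
        {
            // Dividing by the same index exactly recovers the original
            // code, because the stored code is an exact multiple of it.
            sb.Append((char)(stored[i] / (i + 1)));
        }
        return sb.ToString();
    }
}
```

In the application, the encrypted string would be the value written to, and read back from, the local file via System.IO.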

3.3.4       Workstations

Workstations incorporating a computer, monitor, keyboard and additional props, including pens and books, were constructed at each desk. The models for each workstation came from the Computer Classroom kit (Table 4) from the Unity Asset Store. Each workstation presents a distinct configuration of props to prevent the look of the VE becoming excessively uniform, as illustrated in Figure 9.

Figure 9: Different props assembled on each desk to avoid a uniform look

3.3.5       Narrative/Task

A simple in-exposure task was implemented so that users do not simply sit in the environment doing nothing, which is not lifelike classroom behaviour. In vivo, individuals are ordinarily involved in a task or activity of some kind, whether working, talking, concentrating or reading. Placing the subject inside the VE without even the simplest of tasks could compromise the quality of the data captured, and subsequent evaluation of the tool could yield inaccurate results. Therefore, a simple task to separate coloured blocks into towers of a single colour was implemented (Figure 10).


Figure 10: Simple, in-game block building task
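A completion check for such a block-sorting task might look like the following sketch. The data shapes and names (BlockTask, IsComplete) are illustrative assumptions rather than the project's implementation; the task counts as complete when every tower holds blocks of a single colour.

```csharp
using System.Linq;

// Illustrative sketch (assumed names): checks whether every tower of
// coloured blocks contains one colour only.
public static class BlockTask
{
    // Each tower is an array of colour names for the blocks stacked in it.
    public static bool IsComplete(string[][] towers)
    {
        // An empty tower is trivially sorted; otherwise every block must
        // match the colour of the first block in its tower.
        return towers.All(t => t.Length == 0 || t.All(c => c == t[0]));
    }
}
```

In the application, a check like this could run whenever a block is placed, to detect that the user has finished the task.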

3.4        Non-playing characters (NPC) Characters

3.4.1       Character Animation

The rigged NPCs sourced from Mixamo could access a significant number of compatible animations. Mixamo provides an easy-to-use interface (Figure 11) where a rigged 3D model can be uploaded and the desired animation type applied. It is possible to finely tune the speed and motion of the animation to suit the needs of an application. Once the desired animation has been created, it can be previewed before downloading.


Figure 11: Mixamo 3D Model animation UI with model imported from CGTrader

Many character models from CGTrader.com were compatible with Mixamo.com animations; others were not and had to use the embedded animations included with the asset. 3D character assets were imported into Unity 3D via the Unity Assets menu (Assets > Import New Asset > [Model Name]). Mixamo models come in the .fbx format, which is compatible with Unity3D. After importing, an Animator Controller is created for the character and dragged to the relevant slot of the character's Animator component in the Inspector, as illustrated in Figure 12 below. Double-clicking the Animator Controller opens the Animator window, and from here the animation is dragged from the model prefab into the Animator window to complete the process.

Figure 12: Animator window and Animator controller in the Inspector

3.4.2       Character Shaders

Some characters exhibited shader issues which required adjustment for proper rendering; the affected characters appeared to be inside out. After some research, a solution was sourced and implemented. In the Materials tab of the original character prefab, the option to extract textures is chosen, and the extracted textures are placed in a new folder along with the model for organisational purposes. After this, the material and textures can be applied to the model in the usual manner and finally, in the Inspector, the shader type is set to Legacy Shaders > Diffuse. See Figure 13 below.


Figure 13: Illustration of the repair process to fix faulty shaders on character models

3.4.3       Character Navigation

So that characters may traverse the environment, a navigation system comprising several waypoints and a NavMesh was constructed, and a NavMeshAgent component was added to the target character. A simple pilot scene was created to test the functionality before adding it to the primary development, as shown in Figure 14.


Figure 14: Images illustrating baked NavMesh and waypoints test

3.4.4       Navigation Implementation

A Patrol script was attached to the target character, providing an array of waypoints accessible in the Inspector. The UnityEngine.AI namespace was added so that the script could target the NavMeshAgent component. The Patrol script enables the NavMeshAgent to move between a set of pre-defined waypoints that signify transform positions on the NavMesh. Waypoints are empty GameObjects, shown as red spheres in Figure 15, which are dragged into the Patrol script's waypoints array in the Inspector (Figure 16). The NavMeshAgent then references these as checkpoints, and the script also has functionality that enables the NPC to traverse the waypoints randomly or wait between them, with an element of probability that can be set in the Inspector (Figure 17).

Figure 15: Waypoints are shown as red spheres (Left) and baked NavMesh to the right


Figure 16: Waypoints array in the inspector


Figure 17: Script showing part of the patrol random waypoint functionality
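The patrol behaviour shown in Figure 17 can be sketched as follows. The class name, the waypoints array and the use of the NavMeshAgent follow the description above; the remaining field and method names are assumptions for illustration:

```csharp
using UnityEngine;
using UnityEngine.AI;   // required so the script can target the NavMeshAgent

// Sketch of the Patrol behaviour described in Section 3.4.4.
public class Patrol : MonoBehaviour
{
    public Transform[] waypoints;                     // filled by dragging GameObjects in the Inspector
    [Range(0f, 1f)] public float waitProbability = 0.3f;  // chance of pausing at a waypoint
    public float waitSeconds = 2f;

    private NavMeshAgent agent;
    private float waitTimer;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        GoToRandomWaypoint();
    }

    void Update()
    {
        // On arrival at a waypoint, either wait or pick the next one at random
        if (!agent.pathPending && agent.remainingDistance < 0.5f)
        {
            if (waitTimer > 0f)
            {
                waitTimer -= Time.deltaTime;
            }
            else if (Random.value < waitProbability)
            {
                waitTimer = waitSeconds;
            }
            else
            {
                GoToRandomWaypoint();
            }
        }
    }

    void GoToRandomWaypoint()
    {
        if (waypoints.Length == 0) return;
        agent.destination = waypoints[Random.Range(0, waypoints.Length)].position;
    }
}
```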

3.5        Sound

3.5.1       Critical Sound Cues acquisition and implementation

In attempting to create an accurate representation of the sounds in the virtual classroom, audio volume was arranged to correlate with the position of non-playing characters (NPCs) relative to the user's point of view; that is, the further from the user, the lower the volume of the audio source. A significant bank of sounds had to be sourced for the project, as one of the fundamental objectives is to expose users to increasing levels of noise, including speech, phones, laughing and ambient sounds. Therefore, to create a plausible audio simulation, specific characters had to have multiple audio sources to avoid repetition. A full list of audio sources, volumes and origins, together with the character or object each is attached to, is catalogued in Table 5 below.

Table 5: Audio assets with their associated 3D models, exposure levels and volumes.

Character or Object: [model image: male teacher at a board]. Exposure levels: 1, 2, 3, 4, 5, 6, 7. Volume: 0.25. Sound clips:
- SFX Bible (2008). Human voice teacher class please be seated.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].
- SFX Bible (2008). Human hum ponder.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].
- SFX Bible (2008). Human voice teacher good morning class.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].

Character or Object: [model image: person wearing a costume]. Exposure levels: 1, 2, 3, 4, 5, 6, 7. Volume: 0.3. Sound clips:
- Blastwave FX (n.d.). Girl says awesome.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].
- Blastwave FX (n.d.). Girl says cool.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].
- Blastwave FX (n.d.). Girl says hi long.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].

Character or Object: [model image: person wearing a suit and tie]. Exposure levels: 1, 2, 3, 4, 5, 6, 7. Volume: 0.5. Sound clips:
- Airbourne Sound (n.d.). A sick human male coughing and wheezing.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 07 July 2018].

Character or Object: [model image not recovered]. Exposure levels: 4, 5, 6, 7. Volume: 0.6. Sound clips:
- Blastwave FX (n.d.). Cell phone ringtone loop.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 07 July 2018].

Character or Object: [model image not recovered]. Exposure levels: 6, 7. Volume: 0.35. Sound clips:
- SFX Source (n.d.). soft-laughing.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].
- SFX Source (n.d.). S_Snap_FemaleGiggle3.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].
- SFX Source (n.d.). teenage_girl_giggling.mp3. [MP3 Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].

Character or Object: [model image not recovered]. Exposure levels: 5, 6, 7. Volume: 0.5. Sound clips:
- Blastwave FX (n.d.). Girl says take a hike.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 13 July 2018].
- Blastwave FX (n.d.). Girl says what-s the matter with you.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 13 July 2018].
- Blastwave FX (n.d.). Girl says in a cartoon voice thank you very much.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 13 July 2018].
- Blastwave FX (n.d.). Young girl says bummer.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 13 July 2018].

Character or Object: [model image: close-up of a person]. Exposure levels: 3, 4, 5, 6, 7. Volume: 0.5. Sound clips:
- Big Room Sound (n.d.). Typing on a computer keyboard long.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].

Character or Object: [model image: person standing on the floor]. Exposure level: 7. Volume: 0.6. Sound clips:
- DelaCruz, P. (2016). Acts Macho.mp3, Disgusted.mp3, Excitement.mp3, Frustration.mp3, Question Laughter.mp3. Available at: https://assetstore.unity.com/packages/audio/sound-fx/voices/voices-of-a-teenage-boy-59464 [Accessed 03 July 2018].

Character or Object: [model image: person in a black shirt]. Exposure levels: 4, 5, 6, 7. Volume: 0.6. Sound clips:
- Blastwave FX (n.d.). Old man says give me a break.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 13 July 2018].
- Blastwave FX (n.d.). Twenty year old man says bummer.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 13 July 2018].
- Blastwave FX (n.d.). Fifty year old man says blah blah blah.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 13 July 2018].

Character or Object: [model image: person standing in a room]. Exposure level: 7. Volume: 1. Sound clips:
- Cole, B. (n.d.). Thunder strike in city.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 04 July 2018].

Character or Object: [model image: woman]. Exposure levels: 1, 2, 3, 4, 5, 6, 7. Volume: 0.7. Sound clips:
- Girl says hi long.wav (n.d.). Available at: https://www.soundsnap.com [Accessed 04 July 2018].
- Girl says nice.wav (n.d.). Available at: https://www.soundsnap.com [Accessed 04 July 2018].
- Sneeze element female young gir.wav (n.d.). Available at: https://www.soundsnap.com [Accessed 07 July 2018].

Character or Object: [model image: close-up of a phone]. Exposure levels: 1, 2, 3, 4, 5, 6, 7. Volume: 0.7. Sound clips:
- Kai Paquin (n.d.). Cell Phone Vibrate Smart Phone Receiving Call_000570.wav. [WAV Online] Available at: https://www.soundsnap.com [Accessed 03 July 2018].
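As noted above, each source's volume falls off with distance from the user. In Unity this is typically configured through the AudioSource's 3D sound settings; the following sketch uses illustrative distance values, not the project's actual configuration:

```csharp
using UnityEngine;

// Sketch: configures an NPC's AudioSource so its audible volume decreases
// with distance from the AudioListener (the user's camera).
[RequireComponent(typeof(AudioSource))]
public class NpcAudioSetup : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;                      // fully 3D: volume depends on listener distance
        source.rolloffMode = AudioRolloffMode.Linear;  // simple linear falloff
        source.minDistance = 1f;                       // full volume within 1 m of the user
        source.maxDistance = 15f;                      // inaudible beyond roughly classroom scale
    }
}
```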

3.5.2       Multiple Random Audio Source

A Unity object can have multiple audio sources; however, it will play them in order, and there is a lack of fine control when the desired effect is numerous random audio sources. To that end, a script was written to enable such functionality. The script creates an array to which any number of clips can be added using the Size field in the Inspector. Within the script's CallAudio() method, the second parameter of the Invoke() method controls the delay in seconds between each clip. The RandomSoundness() method picks a random clip based on the array length.

Figure 18: Random Sound Script and how it appears in Unity Inspector (inset red box)
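A reconstruction of the behaviour shown in Figure 18 follows. The method names CallAudio() and RandomSoundness() are taken from the description above; the exact method bodies are assumptions:

```csharp
using UnityEngine;

// Plays a random clip from the array at a fixed interval, looping indefinitely.
[RequireComponent(typeof(AudioSource))]
public class RandomSound : MonoBehaviour
{
    public AudioClip[] audioClips;   // Size and clips set via the Inspector
    public float delaySeconds = 5f;  // the second Invoke() parameter: gap between clips

    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        CallAudio();
    }

    void CallAudio()
    {
        // Schedule the next random clip after the configured delay
        Invoke("RandomSoundness", delaySeconds);
    }

    void RandomSoundness()
    {
        // Pick a random clip based on the array length, play it, and reschedule
        source.clip = audioClips[Random.Range(0, audioClips.Length)];
        source.Play();
        CallAudio();
    }
}
```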

3.6        Lighting

There are three intensities of light, displayed according to the exposure level the player is currently at. The first light representation is seen in the environment as emissive light panels on the ceiling, coupled with spotlights and a point light set to an intensity of XX. As the user raises the exposure level, a second point light with intensity XX is added to the current light rig. As the exposure level reaches maximum, a third and final point light with intensity XX is added to the scene. For exterior lighting, a skybox from the ColorSkies asset pack was employed (Table XX). All lighting levels are illustrated in Figure 19 below.


Figure 19: Image shows stages of light intensity from weakest (top) to strongest and complete lighting rig (bottom)
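The staged lighting illustrated in Figure 19 can be sketched as follows. The class, the level thresholds and the light references are assumptions, since the dissertation leaves the exact intensities and switching points unspecified:

```csharp
using UnityEngine;

// Sketch: enables additional point lights as the exposure level rises.
// The base rig (emissive panels, spotlights, first point light) stays on.
public class ExposureLighting : MonoBehaviour
{
    public Light secondPointLight;  // assigned in the Inspector
    public Light thirdPointLight;
    public int maxLevel = 7;

    public void ApplyLevel(int exposureLevel)
    {
        // Mid-range levels add the second light; the maximum level adds the third
        secondPointLight.enabled = exposureLevel > maxLevel / 2;
        thirdPointLight.enabled = exposureLevel >= maxLevel;
    }
}
```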

3.7        Leap Motion integration

3.8        Oculus Rift integration

Integrating the latest Oculus Rift headset with Unity requires the Oculus Integration package from the Unity Asset Store. The hardware also requires the Oculus desktop app to be installed on the computer.

3.9        Application Structure

3.9.1       Scenes

There are five scenes utilised in the application. The primary scene is the MedVis environment, whereas the other scenes are primarily for navigating around the application and for reading data from and writing data to the datastore. Three main classes handle the Register, Login and Main Menu event functionality.

3.9.1.1      Register/Login Scene

The Register/Login scene was created using Unity's UI functionality. This scene is the starting point of the application, and its purpose is to capture users' registration details to the datastore file before they can proceed to the central exposure environment of the application. This allows the user, once in the MedVis scene, to save or quit their current level of exposure by writing it to the datastore via the in-level UI. The current level, an integer, is saved to the file and overwrites the number currently held on the record. The UI canvas comprises two sectioned forms for Login and Register. To register, the user must provide Username, Email and Password credentials; once the form is complete, they repeat the Username and Password details in the Login form fields. They then proceed to the Main Menu scene.
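The save behaviour described above can be sketched as a small helper that overwrites the level held on the user's record. The file layout and names here are illustrative assumptions, not the project's actual datastore format:

```csharp
using System.IO;
using UnityEngine;

// Sketch: persists the current exposure level to a per-user text file.
public static class ExposureStore
{
    static string PathFor(string username)
    {
        // One plain-text record per user, kept in Unity's persistent data folder
        return Path.Combine(Application.persistentDataPath, username + "_level.txt");
    }

    public static void SaveLevel(string username, int level)
    {
        // Overwrites the number currently held on the user's record
        File.WriteAllText(PathFor(username), level.ToString());
    }

    public static int LoadLevel(string username)
    {
        string path = PathFor(username);
        // Default to the first exposure level for a new user
        return File.Exists(path) ? int.Parse(File.ReadAllText(path)) : 1;
    }
}
```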

3.9.1.2      Main Menu Scene

The Main Menu scene comprises some simple instructions for using the application, as well as navigation to start a new exposure, resume a previous exposure or quit the current session altogether.

3.9.1.3      MedVis Virtual Environment Scene

The MedVis Environment scene is the main scene of the application, where exposure to critical visual and audio cues takes place. The scene begins with the subject having a first-person view at the rear of the classroom, in front of their workstation with a computer and task instructions on the screen before them. The user can carry out the simple task of stacking some blocks on the desk in front of them via Leap Motion integration. On starting the first exposure, the application presents its minimum level of critical visual and audio cues so as not to overwhelm the user. The in-level UI shows a timer counting down from two minutes to zero, and the Save and Quit buttons are always visible. When the timer reaches zero and the exposure level is complete, the application changes to the next-level confirmation scene.
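The in-level countdown and the hand-over to the confirmation scene might look like the following sketch; the scene name and UI field are assumptions for illustration:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

// Sketch: counts down from two minutes and loads the confirmation scene
// when the exposure level is complete.
public class ExposureTimer : MonoBehaviour
{
    public Text timerLabel;               // in-level UI clock
    public float secondsRemaining = 120f; // two-minute exposure

    void Update()
    {
        secondsRemaining -= Time.deltaTime;
        if (secondsRemaining <= 0f)
        {
            // Exposure level complete: ask the user if they are ready to proceed
            SceneManager.LoadScene("NextLevelConfirmation");
            return;
        }

        int minutes = Mathf.FloorToInt(secondsRemaining / 60f);
        int seconds = Mathf.FloorToInt(secondsRemaining % 60f);
        timerLabel.text = string.Format("{0:0}:{1:00}", minutes, seconds);
    }
}
```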

3.9.1.4      Ready for Next Level Confirmation Scene

This scene is a simple way to let the user remain in control of the application by asking whether they are prepared to proceed with the next level of exposure. The user can take a minute to compose themselves if required before continuing with treatment. Once they are ready, they can click the Proceed button to carry on to the next level, which contains a higher number of critical cues than the previous one. This scene is presented at the end of every exposure level up until the final level with the maximum critical cues, following the good user-experience practice of always letting the user know where they are in the system.

3.9.1.5      Save/Quit confirmation Scene

This simple scene is mainly utilised for good user-experience practice, so that the user can see that they have either saved or quit their session and have left the central VRET environment. They are presented with a short goodbye message and a UI button from which they can navigate back to the Login/Register menu to begin a new session if desired.

4         Results

This chapter presents the outcomes of the application development of the Virtual Reality Exposure Tool 'VR Park', set in the classroom context. Further, it incorporates usability and presence testing with the Oculus Rift headset. The application comprises five key scenes: four primarily for navigation, and the fifth being the central VR exposure environment.

4.1        Scenes

4.1.1       Login / Register Scene

The final UI for the Login/Register scene employed a calming colour scheme of blue tones, illustrated in Figure 20. Further, a complementary custom colour palette (Figure 21) for UI calls-to-action was also designed. The rationale was to provide the user with a calming, clear and simple UI with which to navigate the application. For example, primary and additive action buttons are coloured green, whereas destructive actions implement a red button.


Figure 20: Final Login and Register UI design


Figure 21: Custom colour scheme designed for UI buttons

4.1.2       Main Menu Scene

Figure 22: Final UI design of Main Menu

The Main Menu scene provides the user easy-to-recognise actions to navigate around the application. The menu maintains the same style guide in accord with good UX principles. Selecting the start button enters the user into the Main VE scene.

4.1.3       MedVis Virtual Environment Scene


Figure 23: Initial view on commencement of exposure in main VE

The main VE scene begins with the user facing their virtual screen, with instructions to complete the simple task in front of them. The user can also look around the environment, and the head-tracking capabilities of the Oculus Rift headset enable them to be fully immersed in the simulation, experiencing all critical cues at the current level of exposure. Leap Motion integration mediates the simple task of separating the blocks into two colours on the corresponding markers seen on the virtual desk (Figure 23).

4.1.4       Ready for Next Level Confirmation Scene


Figure 24: Shows user interface displayed before proceeding to next exposure level

On the Next Level Confirmation scene UI, a different background hue is implemented to expand the visual language of the application. The more reddish hue signifies a warning of sorts, prompting the user to confirm they are ready to move to the next exposure level; clicking the Start button facilitates this.

4.1.5       Save/Quit confirmation Scene

Once again, the goodbye scene expands the visual language of the application to demonstrate to the user that they have left the main VE (Figure 25). A more relaxing blue hue is employed to gently exit the user from their exposure session. A button is provided to return the user to the login area of the application, so that a user who has accidentally come out of the session can easily return to where they left off.

Figure 25: Shows UI when subject saves exposure level ‘in-game’.

5         Evaluation


6         Conclusion

Appendix A: Supplementary Notes


