SceneMaker: Intelligent Multimodal Visualisation of Natural Language Scripts

Eva Hanser

School of Computing & Intelligent Systems

Faculty of Computing & Engineering
University of Ulster, Magee

Derry/Londonderry BT48 7JL

Northern Ireland

E-mail: hanser-e@email.ulster.ac.uk

100 Day Review Report, February 2009

Supervisors: Prof. Paul Mc Kevitt, Dr. Tom Lunney, Dr. Joan Condell
Abstract
Performing plays or creating films/animations is a complex and thus expensive process involving various professionals and media. This research project proposes SceneMaker, a system for intelligent multimodal visualisation of natural language scripts, to augment this process by automatically interpreting film/play scripts and generating animated scenes from them. To this end, a web-based software prototype, SceneMaker, will be implemented. During the generation of the story content, particular attention will be given to emotional aspects and their reflection in the execution of all types of modalities (fluency and manner of action/behaviour, speech, gaze duration and direction, scene composition, timing, lighting, music, camera, set/stage, costumes). Literature on related research areas of Natural Language Processing (NLP) with regard to personality and emotion detection, embodied agents, modeling affective behaviour, visualisation of 3D scenes and digital cinematography is reviewed. Technologies and software relevant to the development of SceneMaker are analysed. The project's aims, objectives and development plan are presented. How scene and actor behaviour changes when emotional states are taken into account (e.g. a happy versus a sad state) will be investigated. Potential unique contributions of this research are the generation of complete scenes from play scripts, the development of a methodology which combines all relevant modalities, the influence of expressivity on all modalities, and deployment on mobile devices.
In conclusion, SceneMaker will reduce production time, save costs and enhance the communication of ideas by providing quick pre-visualisations of scenes.

Keywords: Natural Language Processing, Intelligent Multimodal Interfaces, Film Making/Theatre Production, Affective Agents, Emotional Body Posture Modeling, 3D Visualisation, SceneMaker
1 Introduction
The production of plays or movies is an expensive process involving planning and rehearsal time, actors, and technical equipment for lighting, sound and special effects. It is also a creative act which might not always be straightforward, but requires experimentation, visualisation of ideas and their communication between everyone involved (e.g. playwrights, directors, actors, camera operators, orchestra, managers, costume and set designers). This research proposes a web-based software prototype, SceneMaker, which will assist in this production process. SceneMaker will provide a facility for everyone involved in the creation of dynamic/animated scenes to test and pre-visualise scenes before putting them into action. Users input a natural language scene script and automatically receive multimodal 3D visualisations taking into account considerations such as aesthetics and emotions. The user can refine the output through an interface which facilitates control of character personality, emotional states, modalities of output, actions and cinematographic settings (e.g. lighting and camera). Such technology could be applied in the training of film/drama directors without having to continuously employ expensive actors and actresses. Alternatively, it could be used in advertising agencies that regularly need to visualise numerous ideas and concepts.
At the Ohio State University, a virtual theatre interface for teaching drama students about lighting, positioning on stage and different viewpoints (Virtual Theatre, 2004) was considered very beneficial and had a significant impact on training methods.


SceneMaker will be accessible over the internet and thus will be an easily available tool for script writers, animators, directors, actors or drama students to creatively and inexpensively express their ideas and test their effectiveness in achieving the desired effect whilst writing or advising directors on set. Successful example scenes can be saved and shared with other scene producers in an online gallery classified by film/drama genre and scene topic. A Graphical User Interface (GUI) suitable for mobile devices is intended to facilitate the use of SceneMaker on stage or on set. The SceneMaker prototype will be developed using appropriate multimodal technology and will extend an existing software prototype, CONFUCIUS (Ma, 2006), which performs automated conversion of natural language to 3D animation.


SceneMaker focuses on the precise representation of emotional expression in all modalities available for scene production, and especially on human-like modeling of body language, as it is the most expressive modality in human communication, delivering 60-80 percent of our messages. Actual words convey only 7-10 percent of a message in conversation (Su et al., 2007). Further modalities include voice tone, volume, facial expression, gaze, gestures, body posture, spatial behaviour and aspects of appearance. These facts show the importance of the visualisation of body language in film/play production, but also point out the challenges in deriving information for animation from scripts consisting mostly of dialogue. Much research is dedicated to detailed modeling of emotion and facial expressions, gaze and hand gestures (Kopp et al., 2008; Sowa, 2008), but body posture has yet to be addressed extensively (Gunes and Piccardi, 2006).
1.1 Research Aims and Objectives
This research aims to answer three research questions: How can emotional information be computationally interpreted from screenplays and structured for visualisation purposes? How can emotional states be synchronised in presenting all relevant modalities? Can compelling, life-like animations be achieved? To address these questions, this research aims to implement an automated animation system, with a user interface for manual manipulation, catering for affective actor modeling and scene production based on personality, social and narrative roles, and emotions. The objective is to give directors or animators a reasonable idea of what the scene they are planning will look like. The software prototype, SceneMaker, will be a multimodal content generation system, accessible on mobile devices, which can be applied seamlessly for testing and customising performances according to the producers' intentions. SceneMaker will provide a unique training facility for those involved in scene production. It may also be useful for advertising agencies, which constantly need rapid visualisations of various ideas and concepts.


Section 2 of this report gives an overview of current research on scene production for multimodal and interactive storytelling, virtual theatre and affective agents. In section 3, the project proposal and prototype, SceneMaker, are described in detail. SceneMaker is compared to related multimodal visualisation applications in section 4. Section 5 concludes the report.
