Virtual museums enable Internet users to explore museum collections online. The question is how to enhance the viewer's experience and learning in such environments. In the Sarajevo Survival Tools virtual museum we introduced a new concept of interactive digital storytelling that enables visitors to explore the virtual exhibits - objects from the siege of Sarajevo - guided by a digital story. In this way, virtual museum visitors learn about the context of the displayed objects and are motivated to explore all of them. In this paper we present the virtual environment we developed and our experience with it. The results from three empirical studies we conducted indicate the positive influence of digital storytelling and sound effects on visitors' perceptual response, resulting in increased motivation and enjoyment, and more effective information conveyance.
We know that computer-assisted educational curricula capture children's attention and interest far more than the classic paper-and-pencil approach to teaching. Educational computer games can easily engage students, captivating and maintaining their attention, allowing them both to learn with teachers and to practice on their own time without the teacher's direct attention. Overall, computer-based instruction increases motivation and results in faster acquisition of skills. Teaching children with developmental disabilities also requires a special set of tools and methods, due to a decreased level of attention towards presented stimuli and a lessened capability to learn in the ways typical children do. Therefore, computer-based instruction seems to be a good match for these diverse learners because it offers multiple exemplars, interesting and interactive practice with constant feedback, multiplied learning opportunities without direct teacher engagement, and customization to each child's needs. In this paper we present the expanded LeFCA framework, previously proven successful for teaching children with autism basic skills and concepts, now tested across various levels of learners with and without disabilities in three different languages: Bosnian-Croatian-Serbian (BHS), Italian and English (US). Within the pilot project, we produced four games for teaching matching, pointing out (based on visual and auditory stimuli) and labeling skills, which are considered the primary skills needed for learning. We then expanded the framework by adding four more games that teach sorting, categorizing, sequencing and pattern making. The results of our user study, conducted with 20 participants in three different languages, showed that the software created in the children's native languages was completely clear and user friendly for children with and without special needs, and that it is systematically and developmentally appropriately sequenced for learning. Additionally, we found that children were able to generalize the learned skills through transfer to new mediums or environments, and their teachers reported that the children were very motivated and enjoyed playing the games.
Recent hardware advancements in handheld computers have opened development possibilities for many applications that demand high system performance. This work focuses on enhancing the integration of augmented reality objects and proposes improvements over the previous platform [1], such as exposing the render engine to developers, introducing user interactivity components and optimising the 3D object database. The major contribution of our work is better object rendering at a higher frame rate compared to other commercial and non-commercial AR platforms.
Bosnia and Herzegovina has always been a place where the East meets the West. For over 1000 years, different cultures, religions and civilizations have left their remains in this small country in the Western Balkans. Despite all the wars and tragic destruction, today in the heart of Sarajevo one can find mosques, Catholic and Orthodox churches and Jewish synagogues next to each other, and people of different nations and religions living together in mutual respect and friendship. The multiethnic spirit of Bosnia and Herzegovina lives on through its cultural heritage. Therefore, our task is to ensure its presentation and preservation using Information and Communications Technologies (ICT). So far, researchers have achieved significant results by creating several virtual museums. In this paper we present the Museum of Bosnian Traditional Objects, the Digital Catalogue of Stecaks and the Virtual Museum of Sarajevo Assassination, giving an overview of the process of creating virtual environments from multiple data sources based on various 3D digitization technologies: some based on traditional 3D modeling, others on laser scanning or photogrammetric techniques.
In recent years, research in the field of three-dimensional sound generation has focused primarily on new applications of spatialized sound. In the computer graphics community, such techniques are most commonly applied to virtual, immersive environments. However, the field is more varied and diverse than this, and other research tackles the problem in a more complete, and computationally expensive, manner. Furthermore, the simulation of light and sound wave propagation is still unachievable at a physically accurate spatio-temporal quality in real time. Although the Human Visual System (HVS) and the Human Auditory System (HAS) are exceptionally sophisticated, they also contain certain perceptual and attentional limitations. Researchers in fields such as psychology have been investigating these limitations for several years and have produced findings which may be exploited in other fields. This paper provides a comprehensive overview of the major techniques for generating spatialized sound and, in addition, discusses perceptual and cross-modal influences to consider. We also describe current limitations and provide an in-depth look at the emerging topics in the field.
Despite the complexity of the Human Visual System (HVS), research over the last few decades has highlighted a number of its limitations. These limitations can be exploited in computer graphics to significantly reduce computational cost, and thus the required rendering time, without a viewer perceiving any difference in resultant image quality. Furthermore, crossmodal interaction between different modalities, such as the influence of audio on visual perception, has been shown to be significant both in psychology and in computer graphics. In this paper we investigate the effect of beat rate on temporal visual perception, i.e. frame rate perception. For the visual quality and perception evaluation, a series of psychophysical experiments was conducted and the data analysed. The results indicate that beat rates in some cases do affect temporal visual perception and that certain beat rates can be used to reduce the amount of rendering required to achieve perceptually high quality. This is another step towards a comprehensive understanding of auditory-visual crossmodal interaction and could potentially be used in high-fidelity interactive multi-sensory virtual environments.
Serious games are becoming increasingly popular in education, science, medicine, religion, engineering, and other fields. Additionally, serious heritage games, including virtual reconstructions and museums, provide a good environment for a synthesis of serious games and cultural heritage. This may be used for education in the form of edutainment, comprising various techniques such as storytelling, visual expression of information, interactivity and entertainment \cite{vast09STAR}. This paper demonstrates a new concept of using live virtual guides in a Flash environment for cultural heritage virtual reconstruction. A pilot user study compares it with another approach using the X3D environment, highlighting the advantages and disadvantages of our concept. The presented results can easily be adopted for serious games development.
Generating high-fidelity images in real time at reasonable frame rates still remains one of the main challenges in computer graphics. Furthermore, visuals are only one of the multiple sensory cues that must be delivered simultaneously in a multi-sensory virtual environment. The most frequently used sense besides vision, in virtual environments and entertainment, is audio. While the rendering community focuses on solving the rendering equation more quickly using various algorithmic and hardware improvements, the exploitation of human limitations to assist in this process remains largely unexplored. Many findings in the research literature demonstrate the physical and psychological limitations of humans, including attentional and perceptual limitations of the Human Sensory System (HSS). Knowledge of the Human Visual System (HVS) may be exploited in computer graphics to significantly reduce rendering times without the viewer being aware of any resultant difference in image quality. Furthermore, cross-modal effects, that is, the influence of one sensory input on another, for example sound and visuals, have also recently been shown to have a substantial impact on viewer perception of virtual environments. In this thesis, auditory-visual cross-modal interaction research findings have been investigated and adapted for graphics rendering purposes. The results from five psychophysical experiments, involving 233 participants, showed that, even in the realm of computer graphics, there is a strong relationship between vision and audition in both the spatial and temporal domains. The first experiment, investigating auditory-visual cross-modal interaction within the spatial domain, showed that unrelated sound effects reduce the perceived rendering quality threshold. In the following experiments, the effect of audio on temporal visual perception was investigated.
The results obtained indicate that audio with certain beat rates can be used to reduce the amount of rendering required to achieve perceptually high quality. Furthermore, adding the sound effect of footsteps to walking animations increased the perception of visual smoothness. These results suggest that under certain conditions the number of frames that need to be rendered each second can be reduced, saving valuable computation time, without the viewer being aware of the reduction. This is another step towards a comprehensive understanding of auditory-visual cross-modal interaction and its use in high-fidelity interactive multi-sensory virtual environments.
The quality of real-time computer graphics has progressed enormously in the last decade due to the rapid development of graphics hardware and its utilisation of new algorithms and techniques. The computer games industry, with its substantial software and hardware requirements, has been at the forefront of pushing these developments. Despite all the advances, there is still a demand for even more computational resources. For example, sound effects are an integral part of most computer games. This paper presents a method for reducing the effort required to compute the computer graphics aspects of a game by exploiting movement-related sound effects. We conducted a detailed psychophysical experiment investigating how camera movement speed and sound affect the perceived smoothness of an animation. The results show that walking (slow) animations were perceived as smoother than running (fast) animations. We also found that adding sound effects, such as footsteps, to a walking/running animation affects the perception of animation smoothness. This means that under certain conditions the number of frames that need to be rendered each second can be reduced, saving valuable computation time. Our approach will enable the computed frame rate to be decreased, and thus the computational requirements to be lowered, without any perceivable loss of visual quality.