High-fidelity virtual reconstructions can serve as accurate 3D representations of historical environments. After modelling a site to high precision, physically based and historically correct lighting models must be implemented to complete an authentic visualisation. Sunlight has a major visual impact on a site, from directly lit areas to sections in deep shadow, and the scene illumination also changes substantially at different times of day. In this paper we present a virtual reconstruction of the Panagia Angeloktisti, a Byzantine church on Cyprus. We investigate lighting simulations of the church at different times of day using Image-Based Lighting, with High Dynamic Range environment maps built from photographs and interpolated spectrophotometer data collected on site. Furthermore, the paper explores the benefits and disadvantages of employing unbiased rendering methods such as Path Tracing and Metropolis Light Transport for cultural heritage applications.
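The core of Image-Based Lighting as described above is treating a captured environment map as the scene's light source and integrating it with Monte Carlo methods, as unbiased renderers do. The following is a minimal sketch, not the authors' implementation: it estimates the irradiance at a surface (normal pointing up the +z axis) from a tiny latitude-longitude environment map via uniform hemisphere sampling; the map lookup, resolution, and sampling strategy are all illustrative assumptions.

```python
import math
import random

def sample_env(env, direction):
    """Look up scalar radiance in a latitude-longitude environment map
    (a list of rows) for a given unit direction vector."""
    x, y, z = direction
    u = (math.atan2(y, x) / (2 * math.pi)) % 1.0          # longitude -> [0, 1)
    v = math.acos(max(-1.0, min(1.0, z))) / math.pi        # colatitude -> [0, 1]
    h, w = len(env), len(env[0])
    return env[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

def irradiance(env, n_samples=10000, seed=1):
    """Monte Carlo estimate of irradiance at a surface with normal +z,
    using uniform hemisphere sampling (pdf = 1 / (2*pi))."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_samples):
        # Uniform direction on the upper hemisphere.
        z = random.random()
        phi = 2 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        total += sample_env(env, d) * z                    # cosine term
    return total * 2 * math.pi / n_samples                 # divide by pdf, average

# Sanity check: a constant map of radiance 1.0 gives irradiance ~ pi.
env = [[1.0] * 8 for _ in range(4)]
print(irradiance(env))
```

With a real HDR capture the map would hold per-pixel RGB radiance rather than a constant, and a production path tracer would use importance sampling of the bright regions (e.g. the sun) to reduce variance, which is exactly where unbiased methods such as Metropolis Light Transport earn their keep.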
Knowledge of the Human Visual System (HVS) may be exploited in computer graphics to significantly reduce rendering times without the viewer being aware of any resulting difference in image quality. Furthermore, cross-modal effects, that is, the influence of one sensory input on another, for example sound on visuals, have also recently been shown to have a substantial impact on viewer perception of image quality. In this paper we investigate the relationship between audio beat rate and video frame rate in order to manipulate temporal visual perception. This represents an initial step towards establishing a comprehensive understanding of audio-visual integration in multisensory environments.
High-fidelity rendering is computationally demanding and has only recently become achievable at interactive frame rates on high-performance desktop PCs. Research on visual perception has demonstrated that parts of a scene that are not in the focus of the viewer's attention may be rendered at much lower quality without this quality difference being perceived. It has also been shown that cross-modal interaction between visual and auditory stimuli can have a significant influence on perception. This paper investigates the limitations of the Human Visual System and the impact cross-modal interactions have on perceivable rendering thresholds. We show that by exploiting cross-modal interaction, significant savings in rendering quality, and hence computational requirements, can be achieved while maintaining the same overall perceived quality of the resulting image.
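One common way to exploit the attentional limits described above is to spend the sample budget unevenly across the image: full quality near the attended point, sharply reduced quality in the periphery. The sketch below is a hypothetical illustration, not the paper's method; the function name, Gaussian falloff, and the specific budget numbers (64 samples at the gaze point, 4 in the far periphery, 100-pixel radius) are all assumptions chosen for clarity.

```python
import math

def samples_per_pixel(px, py, gaze, full=64, base=4, radius=100.0):
    """Hypothetical selective-rendering budget: allot 'full' samples per
    pixel at the gaze point, decaying with a Gaussian falloff to a low
    'base' rate in the unattended periphery."""
    d = math.hypot(px - gaze[0], py - gaze[1])     # distance from gaze point
    w = math.exp(-((d / radius) ** 2))             # attention weight in (0, 1]
    return base + round((full - base) * w)

print(samples_per_pixel(500, 500, (500, 500)))     # at the gaze point: 64
print(samples_per_pixel(900, 500, (500, 500)))     # far periphery: 4
```

In a cross-modal setting, the periphery threshold could be lowered further while a synchronised auditory stimulus masks the quality drop, which is the kind of saving the abstract reports.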