
Publications (173)

E. Makalic, D. Schmidt

This paper examines the problem of simultaneously testing multiple independent hypotheses within the minimum encoding framework. We introduce an efficient coding scheme that nominates the accepted hypotheses in addition to compressing the data given these hypotheses. This formulation reveals an interesting connection between multiple hypothesis testing and mixture modelling, with the class labels corresponding to the accepted hypotheses in each test. An advantage of the resulting method is that it provides a posterior distribution over the space of tested hypotheses, which may be easily integrated into decision-theoretic post-testing analysis.
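In minimum encoding approaches, a posterior distribution over hypotheses can be recovered from their codelengths, since shorter codes correspond to higher posterior probability. A minimal illustrative sketch of this standard conversion (not the paper's specific coding scheme; the function name and codelengths are hypothetical):

```python
import math

def hypothesis_posterior(codelengths):
    """Convert per-hypothesis codelengths (in nats) into a posterior
    distribution: posterior mass is proportional to exp(-codelength)."""
    m = min(codelengths)  # subtract the minimum for numerical stability
    weights = [math.exp(-(l - m)) for l in codelengths]
    total = sum(weights)
    return [w / total for w in weights]

# Two hypotheses for a single test: H0 (null) and H1 (alternative).
# Hypothetical codelengths in nats; H1 compresses the data better.
posterior = hypothesis_posterior([105.2, 102.9])
```

The subtraction of the minimum codelength before exponentiating avoids underflow when codelengths are large, without changing the normalised result.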

Mark H. Greene, P. Guénel, C. Haiman, Per Hall, U. Hamann, Christopher R. Hake, Wei He, Jane Heyworth et al.

E. Makalic, D. Schmidt

In this note, expressions are derived that allow computation of the Kullback-Leibler (K-L) divergence between two first-order Gaussian moving average models in O(1) time as the sample size n → ∞. These expressions can also be used to evaluate the exact Fisher information matrix in O(1) time, and provide a basis for an asymptotic expression of the K-L divergence.
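For comparison, the K-L divergence between two zero-mean Gaussian MA(1) processes can always be computed directly from their n × n covariance matrices at O(n³) cost; the note's contribution is closed-form expressions that avoid this. A brute-force sketch of the direct computation (the function names and parameter values here are illustrative, not taken from the paper):

```python
import numpy as np

def ma1_covariance(theta, sigma2, n):
    """Covariance matrix of n observations from the Gaussian MA(1)
    process x_t = e_t + theta * e_{t-1}, with e_t ~ N(0, sigma2).
    The autocovariance is sigma2*(1 + theta^2) at lag 0,
    sigma2*theta at lag 1, and zero at higher lags."""
    S = np.zeros((n, n))
    np.fill_diagonal(S, sigma2 * (1.0 + theta ** 2))
    idx = np.arange(n - 1)
    S[idx, idx + 1] = sigma2 * theta
    S[idx + 1, idx] = sigma2 * theta
    return S

def gaussian_kl(S1, S2):
    """Exact K-L divergence KL(N(0, S1) || N(0, S2)) in nats:
    0.5 * (tr(S2^{-1} S1) - n + log det S2 - log det S1)."""
    n = S1.shape[0]
    _, logdet1 = np.linalg.slogdet(S1)
    _, logdet2 = np.linalg.slogdet(S2)
    return 0.5 * (np.trace(np.linalg.solve(S2, S1)) - n + logdet2 - logdet1)

n = 50
kl = gaussian_kl(ma1_covariance(0.5, 1.0, n), ma1_covariance(-0.3, 1.0, n))
```

The divergence is zero when the two models coincide and strictly positive otherwise, which makes the direct computation a useful check for any fast closed-form implementation.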

Ingrid Zukerman, P. Ye, K. Gupta, E. Makalic

This paper describes a probabilistic mechanism for the interpretation of sentence sequences developed for a spoken dialogue system mounted on a robotic agent. The mechanism receives as input a sequence of sentences, and produces an interpretation which integrates the interpretations of individual sentences. For our evaluation, we collected a corpus of hypothetical requests to a robot. Our mechanism exhibits good performance for sentence pairs, but requires further improvements for sentence sequences.

D. Schmidt, E. Makalic

This paper considers the problem of constructing information-theoretic universal models for data distributed according to the exponential distribution. The universal models examined include the sequential normalized maximum likelihood (SNML) code, the conditional normalized maximum likelihood (CNML) code, the minimum message length (MML) code, and the Bayes mixture code (BMC). The CNML code yields a codelength identical to the Bayes mixture code, and within O(1) of the MML codelength, with suitable data-driven priors.
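One of the universal models compared, the Bayes mixture code, has a closed-form codelength for exponentially distributed data when the rate parameter is given a conjugate gamma prior. A hedged sketch of that standard computation (the prior hyperparameters a, b and the sample data are illustrative choices, not the paper's):

```python
import math

def bayes_mixture_codelength(data, a=1.0, b=1.0):
    """Bayes mixture codelength (in nats) for exponential data
    x_i ~ Exp(rate) under a conjugate Gamma(a, b) prior on the rate.
    The marginal likelihood is available in closed form:
        m(x_1..x_n) = b^a * Gamma(a + n) / (Gamma(a) * (b + S)^(a + n)),
    where S = sum(x_i); the codelength is -log m(x_1..x_n)."""
    n = len(data)
    S = sum(data)
    log_marginal = (a * math.log(b) + math.lgamma(a + n)
                    - math.lgamma(a) - (a + n) * math.log(b + S))
    return -log_marginal

data = [0.4, 1.2, 0.7, 2.1, 0.3]
codelength = bayes_mixture_codelength(data)
```

Because the gamma prior is conjugate to the exponential likelihood, no numerical integration is needed, so the codelength can serve as an exact baseline when comparing against SNML, CNML, or MML codelengths.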

M. Jenkins, A. Cust, D. Schmidt, E. Makalic, E. Holland, Helen Schmid, R. Kefford, G. Giles et al.

Ingrid Zukerman, E. Makalic, M. Niemann

We describe a probabilistic reference disambiguation mechanism developed for a spoken dialogue system mounted on an autonomous robotic agent. Our mechanism processes referring expressions containing intrinsic features of objects (lexical item, colour and size) and locative expressions, which involve more than one concept. The intended objects are identified in the context of the output of a simulated scene analysis system, which returns the colour and size of the seen objects and a distribution for their type. The evaluation of our system shows high resolution performance across a range of spoken referring expressions and simulated vision accuracies.
