Seminar: "A Bayesian approach for material identification and 3D scene reconstruction with multispectral light RADAR data"

Yoann Altmann, Heriot-Watt University

Wednesday 19th April at 2.15pm in CM S.01

Attendees are encouraged to continue discussions in a more informal environment over coffee and biscuits in the common room after the seminar.

Abstract

This talk presents a new Bayesian approach to remotely identify the materials present in a scene by using multispectral light RADAR (LIDAR) data. This is achieved by adopting a statistical source separation strategy coupled with a reversible jump MCMC algorithm that exploits the unique reflectivity properties of each material to reconstruct a 3D map of the observed scene, with detailed information about the materials present at each location of the map.

To a first approximation, each LIDAR waveform consists of a main peak whose position depends on the target distance and whose amplitude depends on the wavelength of the laser source and on the target reflectivity. When considering multiple wavelengths, it becomes possible to use spectral information in order to identify and quantify the main materials in the scene, in addition to estimating the LIDAR-based range profiles. Due to its anomaly detection capability, the proposed hierarchical Bayesian model, coupled with an efficient Markov chain Monte Carlo algorithm, allows robust estimation of depth images together with abundance and outlier maps associated with the observed 3D scene. The results demonstrate that spectral responses constructed from extremely sparse photon counts (one photon per pixel and band) can be unmixed with associated confidence limits, and are extremely encouraging for long-range and fast hyperspectral imaging.
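The sketch below is a toy illustration of the forward model described above, not the speaker's algorithm: per-wavelength photon counts are drawn from a Poisson model whose peak position encodes range and whose per-band amplitude follows a linear mixture of assumed reference reflectivity spectra. The spectra, parameter values, and the crude grid-search/least-squares estimator are illustrative stand-ins for the hierarchical Bayesian RJ-MCMC inference discussed in the talk.

```python
# Illustrative sketch (assumed parameters, not the authors' method):
# multispectral single-photon LIDAR forward model and a naive estimate.
import numpy as np

rng = np.random.default_rng(0)

T = 300                      # time bins per waveform
t = np.arange(T)

# Hypothetical reference reflectivity spectra for 3 materials x 4 wavelengths.
spectra = np.array([[0.9, 0.7, 0.3, 0.1],
                    [0.2, 0.4, 0.8, 0.9],
                    [0.5, 0.5, 0.5, 0.5]])

abund = np.array([0.6, 0.3, 0.1])   # true abundances at one pixel
depth_bin = 120                      # true target position (time bin)
background = 0.01                    # ambient / dark-count level per bin

def impulse(t, t0, sigma=3.0):
    """Gaussian approximation of the instrumental impulse response."""
    return np.exp(-0.5 * ((t - t0) / sigma) ** 2)

# Forward model: peak position encodes range, per-band peak amplitude
# encodes the linearly mixed reflectivity spectrum; counts are Poisson.
amplitudes = abund @ spectra                                   # shape (4,)
intensity = amplitudes[:, None] * impulse(t, depth_bin)[None, :] + background
counts = rng.poisson(intensity)                                # sparse photons

# Naive joint estimate: grid search for depth, least squares for abundances
# (stand-in for the hierarchical Bayesian / RJ-MCMC inference in the talk).
depth_hat = np.argmax(counts.sum(axis=0))
peak_counts = counts[:, depth_hat].astype(float)
abund_hat, *_ = np.linalg.lstsq(spectra.T, peak_counts, rcond=None)
abund_hat = np.clip(abund_hat, 0, None)
abund_hat /= abund_hat.sum()

print("estimated depth bin:", depth_hat, "(true:", depth_bin, ")")
print("estimated abundances:", np.round(abund_hat, 2), "(true:", abund, ")")
```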


Seminar: Situated Intelligent Interactive Systems

Zhou Yu, Carnegie Mellon University, USA

Tuesday 25th April at 2.15pm (room TBC)

Abstract

Communication is an intricate dance, an ensemble of coordinated individual actions.  Imagine a future where machines interact with us like humans, waking us up in the morning, navigating us to work, or discussing our daily schedules in a coordinated and natural manner.

Current interactive systems being developed by Apple, Google, Microsoft, and Amazon attempt to reach this goal by combining a large set of single-task systems. But products such as Siri, Google Now, Cortana, and Echo still follow pre-specified agendas: they cannot transition between tasks smoothly, nor track and adapt to different users naturally. My research draws on recent developments in speech and natural language processing, human-computer interaction, and machine learning to work towards the goal of developing situated intelligent interactive systems.

These systems can coordinate with users to achieve effective and natural interactions. I have successfully applied the proposed concepts to various tasks, such as social conversation, job interview training and movie promotion. My team's proposal on engaging social conversation systems was selected to receive $100,000 from Amazon Inc. to compete in the Amazon Alexa Prize Challenge (https://developer.amazon.com/alexaprize).

Bio

I am a graduating PhD student at the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University, working with Professor Alan W Black and Professor Alexander I. Rudnicky. I interned with Professor David Suendermann-Oeft at the ETS San Francisco office on cloud-based multimodal dialog systems in the summers of 2015 and 2016, and with Dan Bohus and Eric Horvitz at Microsoft Research on human-robot interaction in the fall of 2014.

Prior to CMU, I received a B.S. in Computer Science and a B.A. in Linguistics from Zhejiang University in 2011. There I worked with Professor Xiaofei He and Professor Deng Cai on machine learning and computer vision, and with Professor Yunhua Qu on machine translation.