My thesis is on unsupervised learning in the context of multimodality and mobile robotics. The goal is to build an internal representation of the robot's state and its environment from sensory experiences, so that the robot can later imagine (dream) new or similar sensory experiences using generative modelling approaches. I specialize in object-based representations of temporal signals (e.g. sound, inertial sensors) and spatial signals (e.g. images, optical flow), as well as generative modelling (e.g. restricted Boltzmann machines). I am also interested in the neuroscience and neurocomputational aspects, that is, how the brain could implement similar mechanisms.
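To illustrate the generative-modelling side, here is a minimal sketch of a restricted Boltzmann machine trained with one-step contrastive divergence (CD-1). The class, the toy data, and all parameter choices are illustrative assumptions for this sketch, not taken from the thesis; a trained RBM can then "dream" reconstructions by running a Gibbs step from hidden activations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM (illustrative sketch, not the thesis code)."""

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)  # visible biases
        self.c = np.zeros(n_hidden)   # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, lr=0.1):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step yields a "dreamed" reconstruction.
        pv1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(pv1)
        # Update weights by the difference of data and model correlations.
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b += lr * (v0 - pv1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - pv1) ** 2))  # reconstruction error

# Toy binary data: two repeated sensory patterns.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errs = [rbm.cd1_step(data) for _ in range(200)]
# Reconstruction error should decrease as training proceeds.
```

The same CD-1 loop applies to binarized sensory features of any modality; only `n_visible` changes.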