
The Inevitability of Probability: Near-Optimal Probabilistic Inference in Generic Neural Networks Trained with Non-Probabilistic Feedback - Wei Ji Ma, New York University

August 19, 2015
Keywords:
  • Bayesian inference
  • Neural networks
  • McGurk effect
  • Neural likelihood function
  • Brain stimulus
Abstract
Animals have been shown to perform near-optimal probabilistic inference in a wide range of psychophysical tasks, from causal inference to cue combination to visual search. On the face of it, this is surprising because optimal probabilistic inference in each case is associated with highly non-trivial behavioral strategies. Yet animals typically receive little to no feedback during most of these tasks, and what feedback they do receive is generally not probabilistic in nature. How can animals learn such non-trivial behavioral strategies from scarce non-probabilistic feedback? Here, we show that generic feed-forward and recurrent neural networks trained with very few non-probabilistic examples using simple error-based learning rules can perform near-optimal probabilistic inference. The trained networks implement fully probabilistic strategies, as evidenced by the fact that the precision of the relevant posteriors can be reliably read out from the pooled activities of subsets of neurons in the network. In many cases, the trained networks also display remarkable generalization to stimulus conditions not seen during training. Our results suggest that far from being difficult to learn, optimal probabilistic inference emerges naturally and robustly in generic neural networks trained with error-based learning rules, even when neither the training objective nor the training examples are probabilistic.
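The abstract's central claim can be illustrated on the cue-combination case: the Bayes-optimal estimate of a stimulus from two noisy cues is their precision-weighted average, yet a generic network can approach it when trained only on point-estimate targets. The sketch below is a loose illustration under simplifying assumptions, not the authors' setup (the paper uses population codes in which reliability is carried by activity gains; here the cue noise levels are simply appended to the input). A small one-hidden-layer network is trained with plain error-based gradient descent on the correct answer alone, with no probabilistic training signal, and is then compared against unweighted cue averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n):
    """Stimulus s plus two noisy cues whose noise levels vary trial to trial."""
    s = rng.uniform(-2, 2, n)
    sig = rng.choice([0.3, 1.2], size=(n, 2))          # per-cue noise std
    x = s[:, None] + sig * rng.standard_normal((n, 2))  # noisy cues
    X = np.concatenate([x, sig], axis=1)                # cues + their noise levels
    return X, s

# Generic one-hidden-layer network, trained by manual backprop (error-based rule).
d_in, d_h = 4, 32
W1 = rng.standard_normal((d_in, d_h)) * 0.5
b1 = np.zeros(d_h)
W2 = rng.standard_normal(d_h) * 0.5
b2 = 0.0

lr = 0.02
for step in range(5000):
    X, s = make_batch(64)
    h = np.tanh(X @ W1 + b1)
    y = h @ W2 + b2
    # Non-probabilistic feedback: only the correct stimulus value, squared error.
    gy = (y - s) / len(s)
    gW2 = h.T @ gy
    gb2 = gy.sum()
    gh = np.outer(gy, W2) * (1 - h**2)
    gW1 = X.T @ gh
    gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Held-out evaluation: a reliability-sensitive (near-Bayesian) strategy should
# beat the non-probabilistic baseline of simply averaging the two cues.
X, s = make_batch(5000)
y = np.tanh(X @ W1 + b1) @ W2 + b2
net_mse = np.mean((y - s) ** 2)
avg_mse = np.mean((X[:, :2].mean(axis=1) - s) ** 2)
```

If the trained network has learned to down-weight the noisier cue, `net_mse` falls below `avg_mse` and approaches the optimal precision-weighted combination; with equal noise levels the two strategies coincide, so the gap comes entirely from the unequal-reliability trials.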