Learning a Deep Generative Model like a Program: the Free Category Prior
Published in AAAI Fall 2020 Symposium on Conceptual Abstraction and Analogy in Natural and Artificial Intelligence, 2020
Recommended citation: @article{sennesh2020learning, title={Learning a Deep Generative Model like a Program: the Free Category Prior}, author={Sennesh, Eli}, journal={arXiv preprint arXiv:2011.11063}, year={2020} }

Paper: http://esennesh.github.io/files/freecat_aaai_symposium_2020.pdf
Humans surpass the cognitive abilities of most other animals in our capacity to "chunk" concepts into words, and then combine the words to combine the concepts. In this process, we make "infinite use of finite means", enabling us to learn new concepts quickly and nest concepts within each other. While program induction and synthesis remain at the heart of foundational theories of artificial intelligence, only recently has the community begun to treat program learning as a benchmark task in its own right. The cognitive science community has thus often assumed that if the brain has simulation and reasoning capabilities equivalent to a universal computer, then it must employ a serialized, symbolic representation. Here we confront that assumption and provide a counterexample in which compositionality is expressed via network structure: the free category prior over programs. We show how our formalism allows neural networks to serve as primitives in probabilistic programs, and we learn both program structure and model parameters end-to-end.
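To make the abstract's central idea concrete, here is a minimal, hedged sketch (not the paper's implementation, and not its API) of treating small neural networks as typed primitives and placing a learnable distribution over their composites, i.e. over morphisms of the free category those primitives generate. All names (`Primitive`, `free_paths`, `FreeCategoryModel`) are hypothetical and chosen only for illustration; the sketch uses a differentiable mixture over enumerated composites so that "program structure" and network parameters can be trained end-to-end with ordinary gradient descent.

```python
# Illustrative sketch only: neural networks as typed generators, with a
# learnable prior over their composites (paths in the free category).
import itertools
import torch
import torch.nn as nn

class Primitive(nn.Module):
    """A generator: a small network annotated with a domain and codomain type."""
    def __init__(self, name, dom, cod, dom_dim, cod_dim):
        super().__init__()
        self.name, self.dom, self.cod = name, dom, cod
        self.net = nn.Sequential(nn.Linear(dom_dim, cod_dim), nn.Tanh())

    def forward(self, x):
        return self.net(x)

def free_paths(primitives, dom, cod, max_len=3):
    """Enumerate composable sequences of primitives from `dom` to `cod`:
    the morphisms (up to length max_len) of the free category they generate."""
    paths = []
    for length in range(1, max_len + 1):
        for seq in itertools.product(primitives, repeat=length):
            composable = all(a.cod == b.dom for a, b in zip(seq, seq[1:]))
            if composable and seq[0].dom == dom and seq[-1].cod == cod:
                paths.append(seq)
    return paths

class FreeCategoryModel(nn.Module):
    """A learnable distribution over composites, mixed differentiably."""
    def __init__(self, primitives, dom, cod, max_len=3):
        super().__init__()
        self.primitives = nn.ModuleList(primitives)
        self.paths = free_paths(primitives, dom, cod, max_len)
        # One logit per candidate program (composite morphism).
        self.structure_logits = nn.Parameter(torch.zeros(len(self.paths)))

    def forward(self, x):
        weights = torch.softmax(self.structure_logits, dim=0)
        outputs = []
        for seq in self.paths:
            h = x
            for f in seq:      # apply the composite left to right
                h = f(h)
            outputs.append(h)
        # Weighted mixture keeps both structure and parameters differentiable.
        return sum(w * o for w, o in zip(weights, outputs))

# Toy usage: learn which composite A -> C explains the data, and its weights.
prims = [
    Primitive("f", "A", "B", 4, 8),
    Primitive("g", "B", "C", 8, 2),
    Primitive("h", "A", "C", 4, 2),
]
model = FreeCategoryModel(prims, dom="A", cod="C")
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = torch.randn(32, 4), torch.randn(32, 2)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```

The mixture-over-paths relaxation is only one way to make structure learning differentiable; the paper's probabilistic-programming treatment of the prior over programs is richer than this enumeration suggests, but the sketch shows how typing constraints alone determine which compositions of network primitives are admissible.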