Posts by Collection

portfolio

publications

Structured Gotos are (Slightly) Harmful

Published in Proceedings of the 31st Annual ACM Symposium on Applied Computing, 2016

We take up the questions of whether and how “structured goto” statements impact defect proneness, and of which concept of size yields a superior metric for defect prediction.

Recommended citation: Sennesh, E., & Gil, Y. (2016). Structured gotos are (slightly) harmful. In Proceedings of the 31st Annual ACM Symposium on Applied Computing (pp. 1784–1789). http://esennesh.github.io/files/p1784-sennesh.pdf

Créer de tête de nombreux mots de passe inviolables et inoubliables

Published in ALGOTEL 2018 - 20èmes Rencontres Francophones sur les Aspects Algorithmiques des Télécommunications, 2018

We present a password generation scheme named Cue-Pin-Select that is secure, durable, adaptable to all the usual sets of constraints, and easy to carry out in one's head or with pen and paper. The scheme pseudo-randomly extracts a sequence of characters from an easy-to-memorize passphrase, local cues, and a four-digit PIN. The resulting passwords are independently secure, and remain resistant even when an adversary obtains one or more passwords previously created by the scheme.

Recommended citation: Nicolas Blanchard, Leila Gabasova, Ted Selker, Eli Sennesh. Créer de tête de nombreux mots de passe inviolables et inoubliables. ALGOTEL 2018 - 20èmes Rencontres Francophones sur les Aspects Algorithmiques des Télécommunications, May 2018, Roscoff, France. 2018. http://esennesh.github.io/files/creer-de-tete_algotel.pdf
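
To give a flavor of the kind of scheme the paper describes, here is a toy sketch in Python. The selection rule, function name, and inputs below are invented for illustration and are not the published Cue-Pin-Select algorithm; see the paper for the actual, security-analyzed procedure.

```python
# Toy illustration only: NOT the published Cue-Pin-Select scheme.
# It mimics the general idea of deterministically walking a memorized
# passphrase using a per-site cue and a four-digit PIN.

def toy_cue_pin_password(passphrase: str, cue: str, pin: str,
                         length: int = 12) -> str:
    """Derive a password by stepping through the passphrase with
    offsets built from the cue's letters and the PIN's digits."""
    chars = [c for c in passphrase if c.isalnum()]
    pos, out = 0, []
    for i in range(length):
        # Invented rule: advance by (cue letter code + PIN digit).
        step = ord(cue[i % len(cue)]) + int(pin[i % len(pin)])
        pos = (pos + step) % len(chars)
        out.append(chars[pos])
    return "".join(out)

# The same inputs always yield the same password; different cues
# (e.g. one per website) yield unrelated-looking passwords.
print(toy_cue_pin_password("correct horse battery staple", "mail", "4729"))
```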

Combinators for Modeling and Inference

Published in First International Conference on Probabilistic Programming, 2018

We develop a combinator library for the Probabilistic Torch framework. Combinators are functions that accept and return models. Combinators enable compositional interleaving of modeling and inference operations, which streamlines model design and enables model-specific inference optimizations. Model combinators define a static graph from (possibly dynamic) model components. Examples of patterns that can be expressed as combinators are mixtures, state-space models, and models with global and local variables. Inference combinators preserve model structure, but alter model evaluation. Operations that we can represent as combinators include enumeration, importance sampling, resampling, and Markov chain Monte Carlo transitions. We validate our approach on variational methods for hidden Markov models.

Recommended citation: Sennesh, E., Wu, H., & van de Meent, J.-W. (2018). Combinators for Modeling and Inference. http://esennesh.github.io/files/probprog_2018_combinators.pdf
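
As a schematic of the core idea (plain Python with invented names, not the actual Probabilistic Torch API): a combinator is a higher-order function that accepts a model and returns a transformed model. In this sketch, a "model" is any function returning a (sample, log-weight) pair.

```python
import random

def prior_model():
    x = random.gauss(0.0, 1.0)   # sample x from a standard normal prior
    return x, 0.0                # sampling from the prior carries weight 0

def importance(model, log_density):
    """Inference combinator: reweight a model's samples by a target
    log-density without changing how they are sampled."""
    def weighted_model():
        x, log_w = model()
        return x, log_w + log_density(x)
    return weighted_model

# Weight prior samples by an (unnormalized) likelihood of y = 0.5.
weighted = importance(prior_model, lambda x: -0.5 * (0.5 - x) ** 2)
samples = [weighted() for _ in range(1000)]
```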

Composing Modeling and Inference Operations with Probabilistic Program Combinators

Published in All of Bayesian Nonparametrics (Especially the Useful Bits) @ NeurIPS 2018, 2018

Probabilistic programs with dynamic computation graphs can define measures over sample spaces with unbounded dimensionality, which constitute programmatic analogues to Bayesian nonparametrics. Owing to the generality of this model class, inference relies on “black-box” Monte Carlo methods that are often not able to take advantage of conditional independence and exchangeability, which have historically been the cornerstones of efficient inference. Here we seek to develop a “middle ground” between probabilistic models with fully dynamic and fully static computation graphs. To this end, we introduce a combinator library for the Probabilistic Torch framework. Combinators are functions that accept models and return transformed models. We assume that models are dynamic, but that model composition is static, in the sense that combinator application takes place prior to evaluating the model on data. Combinators provide primitives for both model and inference composition. Model combinators take the form of classic functional programming constructs such as map and reduce. These constructs define a computation graph at a coarsened level of representation, in which nodes correspond to models, rather than individual variables. Inference combinators implement operations such as importance resampling and application of a transition kernel, which alter the evaluation strategy for a model whilst preserving proper weighting. Owing to this property, models defined using combinators can be trained using stochastic methods that optimize either variational or wake-sleep style objectives. As a validation of this principle, we use combinators to implement black box inference for hidden Markov models.

Recommended citation: Sennesh, E., Scibior, A., Wu, H., & van de Meent, J.-W. (2018). Composing Modeling and Inference Operations with Probabilistic Program Combinators. https://drive.google.com/file/d/1bv8g7KTgpgRLsx3ZcaPzIlhGzSa-QkhQ/view
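
In the same hypothetical style as the sketch above (invented names, not the paper's library), a reduce-style model combinator composes a single transition model with itself, fixing the coarse computation graph while leaving each step free to be dynamic internally.

```python
import random

def transition(state):
    """A one-step model: a Gaussian random-walk move, log-weight 0."""
    return state + random.gauss(0.0, 0.1), 0.0

def reduce_model(step, num_steps):
    """Model combinator: unroll `step` num_steps times, accumulating
    log-weights, much like functools.reduce over time steps."""
    def unrolled(init_state):
        state, total_log_w = init_state, 0.0
        for _ in range(num_steps):
            state, log_w = step(state)
            total_log_w += log_w
        return state, total_log_w
    return unrolled

# A state-space model built from the one-step transition.
chain = reduce_model(transition, num_steps=10)
final_state, log_weight = chain(0.0)
```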

Neural Topographic Factor Analysis for fMRI Data

Published in arXiv, 2019

Neuroimaging experiments produce a large volume (gigabytes) of high-dimensional spatio-temporal data for a small number of sampled participants and stimuli. Analyses of this data commonly compute averages over all trials, ignoring variation within groups of participants and stimuli. To enable the analysis of fMRI data without this implicit assumption of uniformity, we propose Neural Topographic Factor Analysis (NTFA), a deep generative model that parameterizes factors as functions of embeddings for participants and stimuli. We evaluate NTFA on a synthetically generated dataset as well as on three datasets from fMRI experiments. Our results demonstrate that NTFA yields more accurate reconstructions than a state-of-the-art method with fewer parameters. Moreover, learned embeddings uncover latent categories of participants and stimuli, which suggests that NTFA takes a first step towards reasoning about individual variation in fMRI experiments.

Recommended citation: Sennesh, E., Khan, Z., Dy, J., Satpute, A. B., Hutchinson, J. B., & van de Meent, J.-W. (2019). Neural Topographic Factor Analysis for fMRI Data. ArXiv E-Prints, arXiv:1906.08901. https://arxiv.org/abs/1906.08901
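
A minimal PyTorch sketch of the modeling idea, under assumptions of mine: the layer shapes, the names (NTFASketch, factor_net, weight_net), and the radial-basis factor form are illustrative stand-ins, not the paper's actual architecture or inference code. The point is only that small networks map participant and stimulus embeddings to factor parameters, so factors can vary across individuals and conditions.

```python
import torch
import torch.nn as nn

class NTFASketch(nn.Module):
    """Illustrative stand-in for the mean of an NTFA-style model."""

    def __init__(self, n_participants, n_stimuli, n_factors=10, embed_dim=2):
        super().__init__()
        self.participant_embed = nn.Embedding(n_participants, embed_dim)
        self.stimulus_embed = nn.Embedding(n_stimuli, embed_dim)
        # Map a participant embedding to per-factor 3D centers + widths.
        self.factor_net = nn.Linear(embed_dim, n_factors * 4)
        # Map the joint embedding to per-factor activation weights.
        self.weight_net = nn.Linear(2 * embed_dim, n_factors)
        self.n_factors = n_factors

    def forward(self, participant, stimulus, voxel_coords):
        p = self.participant_embed(participant)          # (embed_dim,)
        s = self.stimulus_embed(stimulus)                # (embed_dim,)
        params = self.factor_net(p).view(self.n_factors, 4)
        centers, log_widths = params[:, :3], params[:, 3]
        # Radial-basis spatial factors over the voxel coordinates.
        dists = torch.cdist(centers, voxel_coords)       # (K, n_voxels)
        factors = torch.exp(-(dists / log_widths.exp().unsqueeze(-1)) ** 2)
        weights = self.weight_net(torch.cat([p, s], dim=-1))
        return weights @ factors                         # predicted activity

model = NTFASketch(n_participants=20, n_stimuli=5)
voxel_coords = torch.randn(1000, 3)                      # fake 3D voxel grid
activity = model(torch.tensor(3), torch.tensor(1), voxel_coords)
```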

talks

Neuroscience, Sloppiness, and Ground Truth

Published:

I discuss the relationships between different levels of explanation in neuroscience and in the philosophy of science. I focus on the paper “Could a neuroscientist understand a microprocessor?”, with reference to the concept of sloppiness and how it relates to reductionism.

teaching

Teaching Assistant

Undergraduate course, Technion, Faculty of Computer Science, 2014

Teaching assistant for the Programming Languages course (234319). Wrote and graded exercises, and held office hours for students.