Technological advances and the information era allow the collection of massive amounts of data at unprecedented resolution. Making use of these data to gain insight into complex phenomena requires characterizing the relationships among a large number of variables. Probabilistic graphical models explicitly capture the statistical relationships between the variables of interest in the form of a network. Such a representation, in addition to enhancing the interpretability of the model, often enables computationally efficient inference. My group studies graphical models and develops theory, methodology and algorithms to allow the application of these models to novel, scientifically important problems. In particular, our work to date has broken new ground in providing a systematic approach to studying Gaussian graphical models, a framework that is rich enough to capture broad phenomena but also allows systematic statistical and computational investigation. More generally, we study models with linear constraints on the covariance matrix or its inverse, as they arise in various applications and allow efficient computation. We use a holistic approach that combines ideas from machine learning, mathematical statistics, convex optimization, combinatorics, and applied algebraic geometry. For example, by leveraging the inherent algebraic and combinatorial structure of graphical models, we have uncovered statistical and computational limitations and developed new algorithms for learning directed graphical models for causal inference.
Building on our theoretical work, my group also develops scalable algorithms with provable guarantees for applications to genomics, in particular for learning gene regulatory networks. Recent technological developments have led to an explosion of single-cell imaging and sequencing data. Since most experiments require fixing a cell, one can only obtain one data modality per cell, take one snapshot of a particular cell in time, and observe a cell either before or after a perturbation (but not both). Hence a major computational challenge going forward is how to integrate the emerging single-cell data to identify regulatory modules in health and disease. Towards solving this challenge, my group has developed the first provably consistent causal structure discovery algorithms that can integrate observational and interventional data from gene knockout or knockdown experiments. In addition, we recently developed methods based on autoencoders and optimal transport to integrate and translate between different single-cell data modalities and data measured at different time points of a biological process. Since autoencoders have become key tools for representation learning in biological applications, my group studies their theoretical properties, and in recent work we characterized their inductive biases. Together with our geometric models that link the packing of the DNA in the cell nucleus to gene expression, our methods have led to new biomarkers for early cancer prognosis based on single-cell images of DNA-stained cell nuclei.
In what follows, we provide more details about our work in the above-mentioned areas and some selected publications. A complete list of publications can be found here or in my CV.
Thanks to NSF, ONR, DARPA, IBM, the Sloan Foundation and the Simons Foundation for supporting my research!
Theory and Methodology for Causal Inference
Causal inference is a cornerstone of scientific discovery. Genomics provides a unique opportunity for method development in causal inference, since the development of genome-editing technologies has made it possible, for the first time, to obtain large-scale interventional data sets. Answering a central question in experimental design, we showed, quite surprisingly, that soft interventions (such as gene knockdown experiments) provide the same amount of structural causal information as hard interventions (such as gene knockout experiments), despite being less invasive. This line of work resulted in the first provably consistent algorithms for learning causal graphs that can make use of soft and hard interventions as well as non-Gaussian and zero-inflated data (as is common for single-cell RNA-seq). Most recently, we have begun developing optimal experimental design schemes for proposing perturbation experiments in genomics. The ultimate goal is to develop robust strategies for iteratively planning, performing, and learning from interventions, for example to reprogram cells or direct the differentiation of pluripotent cells towards a specific cell type.
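The distinction between hard and soft interventions can be made concrete on a toy structural causal model. Below is a minimal sketch (not our published algorithms) on a hypothetical three-variable linear Gaussian chain X → Y → Z: a hard intervention severs the edge into the target, while a soft intervention shifts the target's mechanism but preserves its dependence on the parent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

def sample(intervention=None):
    """Sample from a toy linear Gaussian SCM X -> Y -> Z.

    intervention: None (observational), ("hard", v) sets Y := v,
    or ("soft", s) shifts Y's mechanism by s.
    """
    X = rng.normal(size=n)
    noise_Y = 0.5 * rng.normal(size=n)
    if intervention is None:
        Y = 2.0 * X + noise_Y
    elif intervention[0] == "hard":
        Y = np.full(n, float(intervention[1]))   # severs the edge X -> Y
    else:
        Y = 2.0 * X + intervention[1] + noise_Y  # keeps the edge, shifts the mechanism
    Z = -Y + 0.5 * rng.normal(size=n)
    return X, Y, Z

# Under a hard intervention, Y carries no information about its parent X;
# under a soft intervention, the dependence on X is preserved.
_, Y_hard, _ = sample(("hard", 1.0))
X_soft, Y_soft, _ = sample(("soft", 1.0))
corr_soft = np.corrcoef(X_soft, Y_soft)[0, 1]
```

The preserved parent dependence under a soft intervention is the intuition behind soft interventions revealing the same structural information as hard ones while being less invasive.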

A. Radhakrishnan, L. Solus and C. Uhler: Counting Markov equivalence classes by number of immoralities. Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence (UAI 2017).

C. Uhler, G. Raskutti, P. Bühlmann and B. Yu: Geometry of the faithfulness assumption in causal inference. Annals of Statistics 41 (2013), pp. 436–463.

G. Raskutti and C. Uhler: Learning directed acyclic graphs based on sparsest permutations. Stat 7 (2018), e183.

Y. Wang, L. Solus, K.D. Yang and C. Uhler: Permutation-based causal inference algorithms with interventions. Advances in Neural Information Processing Systems (NIPS 2017).

K.D. Yang, A. Katcoff and C. Uhler: Characterizing and learning equivalence classes of causal DAGs under interventions. Proceedings of Machine Learning Research 80 (ICML 2018), pp. 5537–5546.

R. Agrawal, T. Broderick and C. Uhler: Minimal IMAP MCMC for scalable structure discovery in causal DAG models. Proceedings of Machine Learning Research 80 (ICML 2018), pp. 89–98.

Y. Wang, C. Squires, A. Belyaeva and C. Uhler: Direct estimation of differences in causal graphs. Advances in Neural Information Processing Systems 31 (2018).

D. Katz-Rogozhnikov, K. Shanmugam, C. Squires and C. Uhler: Size of interventional Markov equivalence classes in random DAG models. Proceedings of Machine Learning Research 89 (AISTATS 2019), pp. 3234–3243.

R. Agrawal, C. Squires, K.D. Yang, K. Shanmugam and C. Uhler: ABCD-Strategy: Budgeted experimental design for targeted causal structure discovery. Proceedings of Machine Learning Research 89 (AISTATS 2019), pp. 3400–3409.

D. I. Bernstein, B. Saeed, C. Squires and C. Uhler: Ordering-based causal structure learning in the presence of latent variables. Proceedings of Machine Learning Research 108 (AISTATS 2020), pp. 4098–4108.

B. Saeed, A. Belyaeva, Y. Wang and C. Uhler: Anchored causal inference in the presence of measurement noise. Proceedings of the Thirty-Sixth Conference on Uncertainty in Artificial Intelligence (2020).

C. Squires, Y. Wang and C. Uhler: Permutation-based causal structure learning with unknown intervention targets. Proceedings of the Thirty-Sixth Conference on Uncertainty in Artificial Intelligence (2020).

B. Saeed, S. Panigrahi and C. Uhler: Causal structure discovery from distributions arising from mixtures of DAGs. Proceedings of Machine Learning Research 119 (ICML 2020).

L. Solus, Y. Wang and C. Uhler: Consistency guarantees for permutationbased causal inference algorithms. To appear in Biometrika.
Representation Learning: Over-Parameterization and Generalization
Deep learning models have reached state-of-the-art accuracy in a number of tasks from image classification to machine translation; yet, the underlying mechanisms driving these successes are not yet well understood. Specifically, while deep networks used in practice are often overparameterized, i.e., large enough to perfectly fit the training data, these networks nevertheless perform well on test data, in seeming contradiction to classical notions of overfitting. Since representation learning is key in many biological applications, we have focused on neural networks used in generative modeling (autoencoders, GANs, flow-based models) to understand this phenomenon. In recent work, we showed that while overparameterized autoencoders have the capacity to learn the identity map, they are self-regularizing and instead learn functions that are locally contractive at the training examples. In particular, we showed that increasing depth makes autoencoders more contractive around the training examples, while increasing width enlarges the basins of attraction, thereby suggesting that wider, shallower networks may be preferable in practice. We used this insight to develop autoencoders for embedding data from large-scale drug screens and identify drug signatures that generalize across different cell types.
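The contractive, associative-memory behavior described above can be illustrated without training a network. Below, a soft nearest-neighbor map stands in for a trained overparameterized autoencoder (a toy surrogate, not our actual models): it is smooth, contractive around a few stored "training examples", and iterating it from a corrupted input flows to the nearest stored example.

```python
import numpy as np

# Stored "training examples", acting as attractors (hypothetical 2-D points).
train = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])

def f(x, beta=4.0):
    """Soft nearest-neighbor map: a smooth contraction whose fixed points
    sit (approximately) at the stored examples."""
    d2 = ((train - x) ** 2).sum(axis=1)
    w = np.exp(-beta * (d2 - d2.min()))   # shift by the min for numerical stability
    w /= w.sum()
    return w @ train                      # convex combination of stored points

# Iterating the map from a corrupted input recovers a stored example,
# mimicking the associative-memory behavior of overparameterized autoencoders.
x = np.array([0.6, -0.5])   # noisy version of the first stored example
for _ in range(50):
    x = f(x)
```

The inverse temperature `beta` plays the role that depth plays in our results: larger `beta` makes the map more sharply contractive around each stored point.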

A. Radhakrishnan, M. Belkin and C. Uhler: Overparameterized neural networks implement associative memory. Proceedings of the National Academy of Sciences U.S.A. 117 (2020), pp. 27162–27170.

E. Nichani, A. Radhakrishnan and C. Uhler: Do deeper convolutional networks perform better? Submitted.

A. Belyaeva, L. Cammarata, A. Radhakrishnan, C. Squires, K.D. Yang, G.V. Shivashankar and C. Uhler: Causal network models of SARS-CoV-2 expression and aging to identify candidates for drug repurposing. To appear in Nature Communications.

C. Squires, D. Shen, A. Agarwal, D. Shah and C. Uhler: Causal imputation via synthetic interventions. Submitted.
Total Positivity for Modeling Positive Dependence
In many applications it is of interest to model positive dependence (e.g., co-regulated genes or correlated stocks). MTP2 (multivariate total positivity of order 2), an important notion in probability theory and statistical physics introduced in the 1970s, is the strongest known form of positive dependence. While the MTP2 constraint is very restrictive and had rarely been used for modeling, we recently showed that MTP2 is implied by latent tree models and that, quite surprisingly, MTP2 distributions appear in practice in a variety of domains ranging from phylogenetics to finance. In addition, we showed that MTP2 has intriguing properties as an implicit regularizer for modeling in the high-dimensional setting: computing the MLE in MTP2 exponential families is a convex optimization problem, and for quadratic exponential families (such as Gaussian or Ising models) the constraint induces sparsity of the underlying graph without the need for a tuning parameter. In fact, high-dimensional sparsistency in Gaussian graphical models under MTP2 can be obtained without any tuning parameters. More recently, we have also studied MTP2 in the nonparametric setting and exploited the MTP2 property for portfolio selection (assets are often positively dependent) and genomics (for identifying spatially clustered co-regulated genes).
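In the Gaussian case, the MTP2 property admits a simple algebraic characterization that can be checked directly: a non-degenerate Gaussian distribution is MTP2 if and only if its precision matrix is an M-matrix, i.e., all of its off-diagonal entries are non-positive. A minimal sketch with hypothetical matrices:

```python
import numpy as np

def is_mtp2_gaussian(K, tol=1e-10):
    """A non-degenerate Gaussian with precision matrix K is MTP2 if and
    only if K is an M-matrix, i.e. every off-diagonal entry of K is <= 0."""
    K = np.asarray(K, dtype=float)
    off = K - np.diag(np.diag(K))
    return bool(np.all(off <= tol))

# All off-diagonal entries non-positive: MTP2 holds ...
K_chain = np.array([[ 2.0, -0.5,  0.0],
                    [-0.5,  2.0, -0.5],
                    [ 0.0, -0.5,  2.0]])

# ... while a single positive off-diagonal entry destroys it.
K_bad = np.array([[2.0, 0.5],
                  [0.5, 2.0]])
```

The M-matrix constraint is a convex (sign) constraint on the precision matrix, which is why maximum likelihood estimation under MTP2 remains a convex optimization problem.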

S. Fallat, S. Lauritzen, K. Sadeghi, C. Uhler, N. Wermuth and P. Zwiernik: Total positivity in Markov structures. Annals of Statistics 45 (2017), pp. 1152–1184.

S. Lauritzen, C. Uhler and P. Zwiernik: Maximum likelihood estimation in Gaussian models under total positivity. Annals of Statistics 47 (2019), pp. 1835–1863.

Y. Wang, U. Roy and C. Uhler: Learning high-dimensional Gaussian graphical models under total positivity without tuning parameters. Proceedings of Machine Learning Research 108 (AISTATS 2020), pp. 2698–2708.

R. Agrawal, U. Roy and C. Uhler: Covariance matrix estimation under total positivity for portfolio selection. Journal of Financial Econometrics (2020), nbaa018.

E. Robeva, B. Sturmfels, N. Tran and C. Uhler: Maximum likelihood estimation for totally positive log-concave densities. To appear in Scandinavian Journal of Statistics.

S. Lauritzen, C. Uhler and P. Zwiernik: Total positivity in structured binary distributions. To appear in Annals of Statistics.
3D Genome Organization and Gene Regulation
The spatial proximity of chromosomes and genome regions is critical for gene activity. We introduced a new geometric model for the organization of chromosomes based on the mathematical theory of packing. We model a chromosome arrangement as a minimal-overlap configuration of ellipsoids of various sizes and shapes inside an ellipsoidal container, the cell nucleus. To find locally optimal ellipsoid packings, we devised a bilevel optimization procedure that provably converges to such local optima. Applying this model, we were able to predict the organization of chromosomes in different cell types and how it affects gene regulation in health and disease.
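As a much-simplified sketch of this packing model, the following replaces ellipsoids by disks and our provably convergent bilevel procedure by plain gradient descent on a smooth overlap penalty; the radii and container size are hypothetical.

```python
import numpy as np

radii = np.array([1.0, 1.0, 0.8])   # hypothetical "chromosome" sizes
R = 2.5                             # radius of the container (the "nucleus")

def penalty(flat):
    """Squared-hinge penalty: positive whenever a disk sticks out of the
    container or two disks overlap; zero exactly on valid packings."""
    c = flat.reshape(-1, 2)
    p = 0.0
    for i in range(len(radii)):
        # containment: center must stay within R - radius of the origin
        p += max(0.0, np.linalg.norm(c[i]) + radii[i] - R) ** 2
        for j in range(i + 1, len(radii)):
            # pairwise overlap: centers must be at least r_i + r_j apart
            gap = radii[i] + radii[j] - np.linalg.norm(c[i] - c[j])
            p += max(0.0, gap) ** 2
    return p

def num_grad(f, x, h=1e-6):
    """Central-difference gradient (adequate for this tiny problem)."""
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = h
        g[k] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Start with all three disks overlapping near the center and descend.
x = np.array([0.3, 0.0, -0.2, 0.1, 0.0, -0.3])
for _ in range(2000):
    x -= 0.05 * num_grad(penalty, x)
```

The full model additionally handles ellipsoid orientations and shape constraints, which is what necessitates the bilevel formulation in our SIAM Review paper.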

C. Uhler and S.J. Wright: Packing ellipsoids with overlap. SIAM Review 55 (2013), pp. 671–706 (selected as Research Spotlight).

Y. Wang, M. Nagarajan, C. Uhler and G.V. Shivashankar: Orientation and repositioning of chromosomes correlate with cell geometry-dependent gene expression. Molecular Biology of the Cell 28 (2017), pp. 1997–2009.

C. Uhler and G.V. Shivashankar: Chromosome intermingling: Mechanical hotspots for genome regulation. Trends in Cell Biology 27 (2017), pp. 810–819 (invited review).

C. Uhler and G.V. Shivashankar: Regulation of genome organization and gene expression by nuclear mechanotransduction. Nature Reviews Molecular Cell Biology 18 (2017), pp. 717–727 (invited review).

A. Belyaeva, S. Venkatachalapathy, M. Nagarajan, G.V. Shivashankar and C. Uhler: Network analysis identifies chromosome intermingling regions as regulatory hotspots for transcription. Proceedings of the National Academy of Sciences U.S.A. 114 (2017), pp. 13714–13719.

C. Uhler and G.V. Shivashankar: Mechano-genomic regulation of coronaviruses and its interplay with ageing. Nature Reviews Molecular Cell Biology 21 (2020), pp. 247–248.
Autoencoders and Optimal Transport for Early Cancer Detection
Abnormalities in nuclear and chromatin organization are hallmarks of many diseases, including cancer. Building on our packing models, we used neural networks to detect subtle changes in nuclear morphometrics in single-cell fluorescence images. Importantly, our method highlights the features indicative of each class in an image, thereby yielding both high classification accuracy and interpretable image features that can aid the pathologist. Early cancer detection requires studying disease progression by observing single cells over time and integrating different data modalities. However, experimental techniques such as imaging or sequencing usually destroy the cell, making controlled time-course experiments infeasible. To overcome this problem, we developed a framework based on autoencoders to integrate and translate different data modalities and combined it with optimal transport (OT) to infer cellular trajectories. Our results demonstrate the promise of computational methods based on autoencoding and OT in settings where existing experimental strategies fail.
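One fact that makes OT-based trajectory inference tractable is that in simple settings the optimal coupling has a closed form: between two equal-size one-dimensional samples, any convex ground cost is minimized by matching sorted source points to sorted target points (monotone rearrangement). A minimal sketch with hypothetical data, not our autoencoder-based pipeline:

```python
import numpy as np

def ot_match_1d(source, target):
    """Exact optimal transport between equal-size 1-D samples under any
    convex ground cost: send the k-th smallest source point to the k-th
    smallest target point (monotone rearrangement)."""
    si = np.argsort(source)
    ti = np.argsort(target)
    match = np.empty_like(si)
    match[si] = ti          # source point i is sent to target[match[i]]
    return match

# Two snapshots of a hypothetical 1-D marker at an early and a late time point.
early = np.array([0.1, 0.9, 0.5])
late = np.array([2.0, 1.2, 1.6])
m = ot_match_1d(early, late)
cost = np.sum((late[m] - early) ** 2)   # squared-cost transport objective
```

In higher dimensions no such sorting trick exists, which is why our trajectory-inference work couples OT solvers with learned autoencoder representations.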

A. Radhakrishnan, D. Damodaran, A.C. Soylemezoglu, C. Uhler and G.V. Shivashankar: Machine learning for nuclear mechanomorphometric biomarkers in cancer diagnosis. Scientific Reports 7 (2017), article nr. 17946.

C. Uhler and G.V. Shivashankar: Nuclear mechanopathology and cancer diagnosis. Trends in Cancer 4 (2018), pp. 320–331 (invited review).

A. Radhakrishnan, C. Durham, A. Soylemezoglu and C. Uhler: PatchNet: Interpretable neural networks for image classification. Machine Learning for Health (ML4H) Workshop, Neural Information Processing Systems (2018).

K.D. Yang and C. Uhler: Scalable unbalanced optimal transport using generative adversarial networks. International Conference on Learning Representations (ICLR 2019).

K.D. Yang, K. Damodaran, S. Venkatachalapathy, A.C. Soylemezoglu, G.V. Shivashankar and C. Uhler: Autoencoder and optimal transport to infer single-cell trajectories of biological processes. PLoS Computational Biology 16 (2020), e1007828.

K.D. Yang, A. Belyaeva, S. Venkatachalapathy, K. Damodaran, A. Radhakrishnan, A. Katcoff, G.V. Shivashankar and C. Uhler: Multi-domain translation between single-cell imaging and sequencing data using autoencoders. Nature Communications 12 (2021), article 31.
Gaussian Graphical Models
We have uncovered a deep interplay between mathematical statistics, applied algebraic geometry, and convex optimization in the study of Gaussian graphical models. By developing a geometric understanding of maximum likelihood estimation, we obtained new results on the minimum number of observations needed for the maximum likelihood estimator to exist in Gaussian graphical models. In the Bayesian treatment of Gaussian graphical models, the G-Wishart distribution plays an important role, since it serves as the conjugate prior. It had been unknown whether the normalizing constant of the G-Wishart distribution could be represented explicitly for a general graph, and a considerable body of computational literature emerged that attempted to work around this apparent intractability. We solved this 20-year-old problem by providing an explicit representation of the G-Wishart normalizing constant for general graphs.
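A classical route to the maximum likelihood estimator in a Gaussian graphical model, and a useful baseline for the geometric existence questions above, is iterative proportional scaling: cycle through the cliques of the graph, matching the fitted covariance to the sample covariance on each clique while the precision matrix stays zero off the graph. A minimal sketch on a hypothetical three-variable chain:

```python
import numpy as np

def ggm_mle_ips(S, cliques, sweeps=200):
    """Iterative proportional scaling for the Gaussian graphical model MLE:
    each clique update forces the fitted covariance to match S on that
    clique; entries of K outside the graph are never touched and stay 0."""
    K = np.eye(S.shape[0])
    for _ in range(sweeps):
        for C in cliques:
            B = np.ix_(C, C)
            Sigma = np.linalg.inv(K)
            K[B] += np.linalg.inv(S[B]) - np.linalg.inv(Sigma[B])
    return K

# Hypothetical chain graph 1 - 2 - 3 (no edge between variables 1 and 3).
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))
S = A.T @ A / 6                         # a generic sample covariance
K = ggm_mle_ips(S, cliques=[[0, 1], [1, 2]])
Sigma_hat = np.linalg.inv(K)            # matches S on the diagonal and edges
```

At the MLE, the fitted covariance agrees with S exactly on the diagonal and the edges of the graph, while the precision matrix vanishes on the non-edges; determining when this fixed point exists for a given number of observations is precisely the existence question our geometric work addresses.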

B. Sturmfels and C. Uhler: Multivariate Gaussians, semidefinite matrix completion and convex algebraic geometry. Annals of the Institute of Statistical Mathematics 62 (2010), pp. 603–638.

C. Uhler: Geometry of maximum likelihood estimation in Gaussian graphical models. Annals of Statistics 40 (2012), pp. 238–261.

L. Solus, C. Uhler and R. Yoshida: Extremal positive semidefinite matrices for graphs without K5 minors. Linear Algebra and its Applications 509 (2016), pp. 247–275.

M. Michalek, B. Sturmfels, C. Uhler and P. Zwiernik: Exponential varieties. Proceedings of the London Mathematical Society 112 (2016), pp. 27–56.

C. Uhler, A. Lenkoski and D. Richards: Exact formulas for the normalizing constants of Wishart distributions for graphical models. Annals of Statistics 46 (2018), pp. 90–118.

C. Uhler: Gaussian graphical models: An algebraic and geometric perspective. Book chapter in Handbook on Graphical Models (edited by M. Drton, S. Lauritzen, M. Maathuis and M. Wainwright), CRC Press (2018).