Table of Links

Abstract and 1. Introduction
2. Background
2.1 Amortized Stochastic Variational Bayesian GPLVM
2.2 Encoding Domain Knowledge through Kernels
3. Our Model and Pre-Processing and Likelihood
3.2 Encoder
4. Results and Discussion and 4.1 Each Component is Crucial to Modified Model Performance
4.2 Modified Model Achieves Significant Improvements over Standard Bayesian GPLVM and is Comparable to scVI
4.3 Consistency of Latent Space with Biological Factors
5. Conclusion, Acknowledgement, and References
A. Baseline Models
B. Experiment Details
C. Latent Space Metrics
D. Detailed Metrics

3.2 ENCODER

In the encoder analysis, we compare a simple encoder composed of linear layers followed by Softplus activations (Simple NN) with scVI's more complex encoder (scVI NN). The scVI NN incorporates batch information as an input to the nonlinear mapping, so incorporating this encoder into the BGPLVM may help address the batch effects observed in the raw count data. Additionally, the scVI encoder architecture includes batch normalization, which contributes to a more stable optimization process and which we leverage for our GPLVM implementation.
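To make the architectural contrast concrete, below is a minimal PyTorch sketch of the two encoder variants. The layer widths, depth, latent dimensionality, and one-hot encoding of batch labels are illustrative assumptions rather than the exact configuration used in our experiments, and the scVI-style encoder is reduced to the two features discussed above: batch labels entering the nonlinear mapping and batch normalization between hidden layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleNNEncoder(nn.Module):
    """Simple NN: stacked linear layers with Softplus activations.

    Maps a count vector to the mean and (positive) scale of the
    variational distribution over the GPLVM latent variables.
    """

    def __init__(self, data_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim, hidden_dim), nn.Softplus(),
            nn.Linear(hidden_dim, hidden_dim), nn.Softplus(),
        )
        self.mean_head = nn.Linear(hidden_dim, latent_dim)
        self.scale_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, y):
        h = self.net(y)
        # Softplus keeps the variational scale strictly positive.
        return self.mean_head(h), F.softplus(self.scale_head(h))


class SCVIStyleEncoder(nn.Module):
    """scVI NN (sketch): batch labels are fed into the nonlinear
    mapping, and batch normalization is applied between hidden layers.
    """

    def __init__(self, data_dim, latent_dim, n_batches, hidden_dim=128):
        super().__init__()
        self.n_batches = n_batches
        self.net = nn.Sequential(
            nn.Linear(data_dim + n_batches, hidden_dim),
            nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden_dim, latent_dim)
        self.scale_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, y, batch_index):
        # One-hot encode the batch label and append it to the counts,
        # so batch information enters the amortized mapping directly.
        b = F.one_hot(batch_index, self.n_batches).float()
        h = self.net(torch.cat([y, b], dim=-1))
        return self.mean_head(h), F.softplus(self.scale_head(h))
```

In an amortized Bayesian GPLVM, either encoder replaces per-cell variational parameters: each cell's count vector (plus, for the scVI-style variant, its batch label) is mapped to the mean and scale of that cell's variational latent distribution.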
This paper is available on arxiv under CC BY-SA 4.0 DEED license.

Authors:
(1) Sarah Zhao, Department of Statistics, Stanford University (smxzhao@stanford.edu);
(2) Aditya Ravuri, Department of Computer Science, University of Cambridge (ar847@cam.ac.uk);
(3) Vidhi Lalchand, Eric and Wendy Schmidt Center, Broad Institute of MIT and Harvard (vidrl@mit.edu);
(4) Neil D. Lawrence, Department of Computer Science, University of Cambridge (ndl21@cam.ac.uk).