Vector Markov processes (also known as population Markov processes) are an important class of stochastic processes that have been used to model a wide range of technological, biological, and socioeconomic systems. The dynamics of vector Markov processes are fully characterized, in a stochastic sense, by the state transition probability matrix P. In most applications, P has to be estimated from either incomplete or aggregated process observations. Here, in contrast to established methods for estimation from aggregate data, we develop Bayesian formulations for estimating P from asynchronous aggregate (longitudinal) observations of the population dynamics. Such observations are common, for example, in the study of aggregate biological cell population dynamics via flow cytometry. We derive the Bayesian formulation and show that computing estimates via exact marginalization is, in general, computationally expensive. Consequently, we rely on Markov chain Monte Carlo (MCMC) sampling approaches to estimate the posterior distributions efficiently. By explicitly incorporating the problem constraints into these sampling schemes, we attain significant gains in efficiency. We illustrate the algorithm via simulation examples and show that the Bayesian estimation schemes can attain significant advantages over point estimation schemes such as maximum likelihood.
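To make the setting concrete, the following is a minimal sketch (not the paper's actual algorithm) of constrained MCMC estimation of P from aggregate observations. It assumes a hypothetical two-state chain, a known initial distribution, and multinomial aggregate counts observed at asynchronous times; the constraint that P be a stochastic matrix is enforced by parameterizing each row through a single free entry and rejecting proposals that leave the unit interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state example: true transition matrix (rows sum to 1).
P_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])   # assumed known initial population distribution
N = 1000                     # population size per aggregate observation
times = [1, 2, 3, 5, 8]      # asynchronous observation times

def dist_at(P, t):
    # Population distribution after t steps: pi0 @ P^t.
    return pi0 @ np.linalg.matrix_power(P, t)

# Aggregate (longitudinal) observations: multinomial counts of the
# population distribution at each observation time.
counts = [rng.multinomial(N, dist_at(P_true, t)) for t in times]

def build(theta):
    # Map the two free parameters (off-diagonal entries) to a valid
    # stochastic matrix; rows sum to 1 by construction.
    return np.array([[1 - theta[0], theta[0]],
                     [theta[1], 1 - theta[1]]])

def log_lik(theta):
    # Multinomial log-likelihood of the aggregate counts
    # (multinomial coefficients dropped; they cancel in the ratio).
    P = build(theta)
    ll = 0.0
    for t, c in zip(times, counts):
        p = dist_at(P, t)
        ll += np.sum(c * np.log(p + 1e-12))
    return ll

def sample_posterior(n_iter=4000, step=0.05):
    # Random-walk Metropolis under a uniform prior on [0, 1]^2.
    # Proposals outside the constraint set are rejected outright,
    # so every retained draw is a valid transition matrix.
    theta = np.array([0.5, 0.5])
    ll = log_lik(theta)
    draws = []
    for it in range(n_iter):
        prop = theta + rng.normal(0.0, step, 2)
        if np.all((prop > 0) & (prop < 1)):   # stochastic-matrix constraint
            ll_p = log_lik(prop)
            if np.log(rng.uniform()) < ll_p - ll:
                theta, ll = prop, ll_p
        if it >= n_iter // 2:                 # discard burn-in
            draws.append(theta.copy())
    return np.array(draws)

draws = sample_posterior()
est = draws.mean(axis=0)   # posterior mean of (P[0,1], P[1,0])
```

The posterior mean of the off-diagonal entries lands near the true values (0.1, 0.2), and the retained draws give a full posterior rather than a single point estimate, which is the advantage over maximum likelihood highlighted above. A symmetric Gaussian proposal with hard rejection at the boundary is only one way to respect the simplex constraints; Dirichlet row proposals are a natural alternative.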