# A Bayesian Algorithm for Functional Mapping of Dynamic Complex Traits


*Keywords:* Bayesian; functional mapping; MCMC; quantitative trait loci


Department of Statistics, University of Florida, Gainesville, FL 32611, USA

Genome Institute of Singapore, 60 Biopolis Street 02-01 Genome, Singapore

Department of Public Health Sciences, Pennsylvania State College of Medicine, Hershey, PA 17033, USA

Department of Statistics, Pennsylvania State University, University Park, PA 16802, USA

Author to whom correspondence should be addressed.

Received: 8 January 2009 / Revised: 6 March 2009 / Accepted: 24 March 2009 / Published: 21 April 2009

(This article belongs to the Special Issue Algorithms and Molecular Sciences)

Functional mapping of dynamic traits measured in a longitudinal study was originally derived within the maximum likelihood (ML) context and implemented with the EM algorithm. Although ML-based functional mapping possesses many favorable statistical properties in parameter estimation, it may be computationally intractable for analyzing longitudinal data with high dimensions and high measurement errors. In this article, we derive a general functional mapping framework for quantitative trait locus mapping of dynamic traits within the Bayesian paradigm. Markov chain Monte Carlo techniques were implemented for functional mapping to estimate biologically and statistically sensible parameters that model the structures of time-dependent genetic effects and the covariance matrix. The Bayesian approach is useful for handling difficulties in constructing confidence intervals as well as the identifiability problem, enhancing the statistical inference of functional mapping. We have undertaken simulation studies to investigate the statistical behavior of Bayesian-based functional mapping and used a real example with F2 mice to validate the model's practical usefulness.

Because of polygenic control and environmental modification, many traits of agricultural, biological and biomedical importance vary in a quantitative way [1]. For this reason, the genetic analysis of these so-called quantitative traits has been one of the most difficult issues in genetic research for many decades. With the advent of powerful molecular biotechnologies and statistical methodologies, it is now possible to precisely dissect the phenotypic variation of a quantitative trait into individual loci at the molecular level (known as quantitative trait loci or QTLs) and understand the effects of interactions between QTLs and environments on the trait phenotype [2,3,4]. Thanks to tremendous efforts by geneticists worldwide, hundreds of thousands of QTLs have now been identified for a variety of complex quantitative traits in plants, animals, and humans. Meanwhile, a vast body of literature is available on the development of statistical methods for QTL mapping [5,6,7], although most of these methods ignore the developmental or dynamic features of a trait in time course.

More recently, a library of statistical models, called functional mapping, has been developed to map QTLs that control dynamic traits [8,9]. Functional mapping incorporates fundamental biological principles behind trait growth and development into a mapping framework. It capitalizes on the mathematical aspect of a ubiquitous growth law that every biological trait experiences growth and developmental change over time by altering the ratio of the anabolic rate to the rate of catabolism [10]. The advantages of this approach lie in its increased biological relevance by embedding biological principles into the estimation process and its flexibility to generate a number of quantitative testable hypotheses about the developmental and genetic regulation of growth. From a statistical perspective, functional mapping estimates parameters that determine the shape of a genotype-specific growth curve and/or the covariance structure, thus strikingly increasing its statistical power for QTL detection.

Functional mapping was originally constructed within the maximum likelihood (ML) context, incorporated into a finite mixture model and implemented with the EM algorithm [11,12]. Although ML-based approaches have many favorable statistical properties for parameter estimation, they may often face significant drawbacks when applied to functional mapping. First, functional mapping usually uses nonlinear mathematical equations to model the mean and covariance structures, making it extremely difficult or even impossible to derive log-likelihood equations for the nonlinear parameters in the maximization step. Second, functional mapping concerns genetic analyses of longitudinal data whose intrinsic high-dimensional complexity makes computation increasingly prohibitive, especially when permutation tests are used to determine critical thresholds. Third, but not specific to functional mapping, ML-based approaches do not automatically provide confidence interval estimates of the estimators, which affects inference on the parameter estimates.

Many of the problems described above for ML can be overcome by using a Bayesian method. In ML, the unknown parameters are treated as unknown variables (unobservables) and the likelihood function is maximized over these variables. In the Bayesian paradigm, each unobservable parameter is given a prior distribution, and we then infer the posterior distribution of each unobservable conditional on the data (the observables). The summary statistics of the posterior distribution, e.g., the mean, the mode or the median, can be regarded as Bayesian estimates of unobservables [13]. The interval estimate can be obtained simply by examining the posterior distribution. The mean of this marginal posterior distribution is a candidate Bayesian estimator of an unknown parameter. Although this marginal distribution rarely has an explicit form, and numerical integration is often prohibitive because of the high dimensionality of the parameters, a Markov chain Monte Carlo (MCMC) algorithm can be used to simulate samples from the joint posterior distribution. The potential of the Bayesian approach implemented with the Gibbs sampler or Metropolis-Hastings algorithm for QTL mapping has been explored for various genetic designs [14,15,16,17,18]. Yang and Xu [19] incorporated a Bayesian approach to map QTLs involved in a dynamic trait based on nonparametric Legendre polynomials, although they did not take full advantage of functional mapping in quantifying biologically meaningful hypothesis tests about the genetic control of development and modeling the structure of the covariance matrix. For such biological processes as growth curves, HIV dynamics and pharmacodynamics, in which explicit mathematical functions exist to specify their dynamic changes, functional mapping based on parametric modeling has proven to be powerful for asking and addressing biological questions about the interplay between genetic actions and developmental patterns.

In longitudinal data analysis, parametric covariance modeling has several advantages over conventional multivariate approaches that ignore the covariance structure. The most significant advantage is that parametric modeling generally results in more efficient estimation of the covariance matrix (and therefore the mean structure). Also, it can deal more effectively with data when the number of measurement times is relatively large. Lastly, it can handle the data more effectively in cases where the measurement times are not common across subjects. For these reasons, the development of explicit parametric models for the variance-covariance structure has been an important topic over the last two decades [20,21].

In this article, we will develop a general Bayesian framework for functional mapping of complex dynamic traits based on parametric modeling of the mean-covariance structures. This framework is constructed by a mixture model in which multiple mixture components corresponding to the genotypes of the underlying QTLs are involved. We will implement the MCMC algorithm to estimate the posterior distribution of each parameter contained within the mixture model, including those that define curve shapes and the covariance structure. A real example with F${}_{2}$ mouse progeny is used to demonstrate the model's use and validate its usefulness in a practical genomic project on dynamic QTLs. We perform simulation studies to investigate the statistical properties of the Bayesian functional mapping model in terms of its convergence rate, estimation precision and power for QTL detection.

Suppose there is an F${}_{2}$ population of size n, initiated with two inbred lines. In this F${}_{2}$ population, many markers are genotyped to construct a genetic linkage map, aiming to identify the QTLs that affect growth curves. For each progeny, a particular growth trait, such as body weight, tail length or cell number, is measured at a series of time points ($1,\cdots ,T$). Consider a putative QTL with genotypes $qq$ (denoted as 0), $Qq$ (denoted as 1) and $QQ$ (denoted as 2) that affects the shape of growth curves. At a specific time point t, the phenotypic value of the growth trait for progeny i due to the QTL may be expressed by a linear model, i.e.,
where ${\xi}_{ij}$ is an indicator variable defined as 1 if progeny i carries QTL genotype j and 0 otherwise, ${u}_{j}\left(t\right)$ is the expected phenotypic value for QTL genotype j at time t, and ${e}_{i}\left(t\right)$ is independently and identically distributed as $N(0,{\sigma}^{2}\left(t\right))$. Note that measurements within an individual are likely to be correlated across times, with covariance $\sigma ({t}_{1},{t}_{2})$ between times ${t}_{1}$ and ${t}_{2}$ (${t}_{1},{t}_{2}=1,\dots ,T)$. These variances and covariances form a ($T\times T$) matrix Σ.

$$\begin{array}{c}\hfill \begin{array}{c}{\displaystyle \phantom{\rule{5.69054pt}{0ex}}{y}_{i}\left(t\right)=\sum _{j=0}^{2}{\xi}_{ij}{u}_{j}\left(t\right)+{e}_{i}\left(t\right),}\hfill \end{array}\end{array}$$

Functional mapping models ${u}_{j}\left(t\right)$ by a biologically meaningful mathematical equation and the time-dependent covariance matrix composed of ${\sigma}^{2}\left(t\right)$ and $\sigma ({t}_{1},{t}_{2})$ by statistically robust approaches. For example, the growth of a living entity can be defined as the irreversible increase of size with time. A series of mathematical models have been proposed to describe growth curves. Among these models, the sigmoidal (or logistic) growth function is regarded as being nearly universal in living systems to capture age-specific changes [10]. Specifically, this S-shaped curve for a QTL genotype j can be mathematically expressed as:
where ${\Theta}_{j}=({\alpha}_{j},{\beta}_{j},{\gamma}_{j})$ is the set of parameters that determine the curve shape of genotype j. Parameter α is the limiting value of growth as time t goes to infinity, $\alpha /(1+\beta )$ is the initial value of growth, and γ is the relative growth rate. Many biologically important features of growth curves, such as the time of maximal growth rate and the duration of maximal growth, can be described by these parameters or their combinations.

$$\begin{array}{c}\hfill {g}_{j}\left(t\phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}{\Theta}_{j})=\frac{{\alpha}_{j}}{1+{\beta}_{j}{e}^{-{\gamma}_{j}t}}\end{array}$$
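As an illustrative sketch (not the authors' code), the logistic curve and the interpretation of its parameters can be checked numerically; the parameter values (30, 10, 0.6) are those later used as prior centers for the mouse data:

```python
import numpy as np

def logistic_growth(t, alpha, beta, gamma):
    """Logistic growth curve g(t) = alpha / (1 + beta * exp(-gamma * t))."""
    return alpha / (1.0 + beta * np.exp(-gamma * t))

t = np.arange(1, 11)  # T = 10 measurement times
curve = logistic_growth(t, alpha=30.0, beta=10.0, gamma=0.6)
# As t -> infinity the curve approaches alpha;
# at t = 0 it equals alpha / (1 + beta).
```

The curve is monotonically increasing and S-shaped, with the inflection point (time of maximal growth rate) at $t=\ln \beta /\gamma$.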

A number of statistical methods have been derived to model the structure of covariance between longitudinal measurements [21]. The first-order autoregressive (AR(1)) model has been applied to model the structure of the within-subject covariance matrix for functional mapping. This model makes two simplifying assumptions, i.e., variance stationarity – the residual variance (${\sigma}^{2}$) is constant over time – and covariance stationarity – the correlation between different measurements decays proportionally (in ρ) with increasing time interval. These two assumptions facilitate the computation of functional mapping. Wu et al. [22] embedded Carroll and Ruppert's [23] transform-both-sides (TBS) model into the growth-incorporated finite mixture model, in order to reduce the heteroscedasticity of the residual variance and preserve the original biological meanings of the curve parameters after the data are transformed. Núñez-Antón and colleagues proposed a series of so-called structured antedependence (SAD) models to approximate the age-specific change of correlation in the analysis of longitudinal variables [20]. The SAD model has been employed in several studies and displays many favorable properties for genetic mapping of dynamic traits [24].

Although only phenotypic values and marker genotypes are observable, the probability distribution of the QTL genotypes in a mapping population can be expressed in terms of the location of the putative QTL, the marker genotypes, and the distances between the markers. Let ${\mathbf{y}}_{i}$ be the phenotypic values of progeny i at different time points, ${\mathbf{M}}_{\mathbf{i}}={\left\{{M}_{ik}\right\}}_{k=1}^{m}$ be the m-marker genotype of progeny i, λ be the QTL position measured by the distance of the QTL from the first marker of an ordered linkage group, and $\mathbf{D}={\left\{{D}_{k}\right\}}_{k=1}^{m}$, where ${D}_{k}$ is the distance between markers 1 and k. Suppose this QTL is located between markers k and $k+1$. Then, we use ${\omega}_{j|i}$ to denote the distribution of the QTL genotype of progeny i, i.e.,
which is the conditional probability of QTL genotype j given the marker genotype of progeny i.

$$\begin{array}{c}\hfill {\omega}_{j|i}=\mathrm{Prob}({Q}_{i}=j|\lambda ,{M}_{ik},{M}_{i(k+1)},{D}_{k},{D}_{k+1}),\end{array}$$

The likelihood of parameters λ, Θ and Σ given observations can be written as:
where $\mathbf{y}={\left\{{\mathbf{y}}_{\mathbf{i}}\right\}}_{\mathbf{i}=\mathbf{1}}^{\mathbf{n}}$, $\Theta ={\left\{{\Theta}_{j}\right\}}_{j=0}^{2}$, and π denotes a multivariate normal density with the genotype-specific mean vector expressed as:
and covariance matrix Σ.

$$\begin{array}{c}\hfill L(\lambda ,\Theta ,\Sigma |\mathbf{y})=\prod _{i=1}^{n}\left[\sum _{j=0}^{2}{\omega}_{j|i}\xb7\pi \left({\mathbf{y}}_{i}\right|{Q}_{i}=j,{\Theta}_{j},\Sigma )\right],\end{array}$$

$$\begin{array}{ccc}\hfill {\mathbf{u}}_{j}& =& g\left(\mathbf{t}\right|{\Theta}_{j})\hfill \end{array}$$

$$\begin{array}{ccc}\hfill & =& {\left\{{u}_{j}\left(t\right)\right\}}_{t=1}^{T}\hfill \end{array}$$

$$\begin{array}{ccc}& =& {\left\{{\displaystyle \frac{{\alpha}_{j}}{1+{\beta}_{j}{e}^{-{\gamma}_{j}t}}}\right\}}_{t=1}^{T}\hfill \end{array}$$
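For concreteness, the mixture likelihood above can be evaluated numerically once the conditional probabilities ${\omega}_{j|i}$ and the genotype-specific mean curves are in hand. The following is a minimal sketch with illustrative names (using `scipy` for the multivariate normal density), not the authors' implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_log_likelihood(y, omega, means, Sigma):
    """Log-likelihood L(lambda, Theta, Sigma | y) of the 3-component mixture.

    y     : (n, T) phenotype matrix, one row per progeny
    omega : (n, 3) conditional QTL genotype probabilities omega_{j|i}
    means : (3, T) genotype-specific mean vectors u_j = g(t | Theta_j)
    Sigma : (T, T) within-subject covariance matrix
    """
    dens = np.column_stack([
        multivariate_normal.pdf(y, mean=means[j], cov=Sigma) for j in range(3)
    ])  # (n, 3) component densities pi(y_i | Q_i = j)
    # Sum over genotypes inside the log, then over progeny outside it
    return np.sum(np.log(np.sum(omega * dens, axis=1)))
```

When ${\omega}_{j|i}$ places all mass on one genotype, the mixture collapses to an ordinary multivariate normal likelihood for that component.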

We will derive a Bayesian approach to estimate the unknown parameters. This approach requires specifying prior distributions for the unknowns. Given the data and the specified priors, the posterior distribution over all unknown parameters can be obtained by using Bayes' theorem. Let $\mathbf{Q}={\left\{{Q}_{i}\right\}}_{i=1}^{n}$ denote the QTL genotypes for all n progeny. Then, the posterior density of λ, Θ, Σ, and $\mathbf{Q}$ is given by:
where $\pi \left(\mathbf{y}\right|\mathbf{Q},\Theta ,\Sigma )=\prod \pi \left({\mathbf{y}}_{i}\right|{Q}_{i}=j,{\Theta}_{j},\Sigma )$ denotes the probability density of the observations $\mathbf{y}$ given the QTL genotypes, $\pi \left(\mathbf{Q}\right|\lambda )=\prod \pi \left({Q}_{i}\right|\lambda )$ is the probability mass of the QTL genotypes of all n progeny given their marker genotypes ($\mathbf{M}$) and the QTL position (λ), and $\pi (\lambda ,\Theta ,\Sigma )$ is the prior imposed on the genetic parameters. It is reasonable to assume the priors are independent for the parameters. Thus, we have:

$$\begin{array}{c}\hfill \pi (\lambda ,\Theta ,\Sigma ,\mathbf{Q}|\mathbf{y})\propto \pi (\mathbf{y}|\mathbf{Q},\Theta ,\Sigma )\xb7\pi \left(\mathbf{Q}\right|\lambda )\xb7\pi (\lambda ,\Theta ,\Sigma ),\end{array}$$

$$\begin{array}{c}\hfill \pi (\lambda ,\Theta ,\Sigma )=\pi \left(\lambda \right)\xb7\pi (\Sigma )\xb7\prod _{j=0}^{2}\pi \left({\Theta}_{j}\right).\end{array}$$

The priors can be chosen from related studies. In principle, if there is reliable information for parameters such as Θ, priors with a small variance may be used. For a parameter without enough information to determine the prior, such as Σ and λ, noninformative priors or priors with a large variance may be utilized. Here, we choose a uniform distribution on $[0,{\mathbf{D}}_{\mathbf{m}}]$ as the prior of λ. Multivariate normal priors with moderate variances are used for ${\Theta}_{j}$. The standard prior distribution for the inverse of the covariance matrix ${\Sigma}^{-1}$ is the Wishart $(\mathbf{R},\rho )$ [25,26], where the so-called scale matrix $\mathbf{C}={\mathbf{R}}^{-1}$ represents prior structural information about Σ and ρ is the degrees of freedom, greater than $T-1$. A small value of ρ gives a relatively flat distribution. The Wishart prior with low degrees of freedom and a specified $\mathbf{R}$ is regarded as a reference (or noninformative) proper prior. Despite less flexibility, this distribution offers the advantage of being a conjugate prior, leading to a relatively simple form for the posterior.

Theoretically, the marginal posteriors of the unknown parameters can be obtained from the joint posterior (6) by integrating over the other unknowns. Unfortunately, in practice, the evaluation of such high-dimensional integrals in a closed form is not possible. However, it is straightforward to derive either full conditional posterior distributions for some parameters, or the explicit expressions that are proportional to the corresponding full conditional posterior distributions for other parameters, i.e.,
and
where ${\Theta}_{-j}=\{{\Theta}_{{j}^{\prime}}:{j}^{\prime}=0,1,2,{j}^{\prime}\ne j\}$, ${\mathbf{y}}_{ij}$ contains the observations from those individuals with genotype j, η and Λ are the prior mean vector and covariance matrix of ${\Theta}_{j}$, and $\mathbf{D}={\mathbf{R}}^{-1}+{\sum}_{j=0}^{2}{\sum}_{i=1}^{{n}_{j}}({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j})){({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))}^{\prime}$.

$$\begin{array}{ccc}\hfill & & \phantom{\rule{1.em}{0ex}}\pi \left({\Theta}_{j}\right|\mathbf{y},{\Theta}_{-j},\Sigma ,\mathbf{Q},\lambda )\hfill \end{array}$$

$$\begin{array}{ccc}\hfill \propto & & \pi \left(\mathbf{y}\right|\Theta ,\Sigma ,\mathbf{Q},\lambda )\xb7\pi \left({\Theta}_{j}\right)\hfill \end{array}$$

$$\begin{array}{ccc}\propto & & \mathrm{exp}\left\{-\frac{1}{2}\sum _{i=1}^{{n}_{j}}\left[{({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))}^{\prime}{\Sigma}^{-1}({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))\right]-\frac{1}{2}{({\Theta}_{j}-\eta )}^{\prime}{\Lambda}^{-1}({\Theta}_{j}-\eta )\right\}\hfill \end{array}$$

$$\begin{array}{ccc}\hfill & & \pi \left({\Sigma}^{-1}\right|\mathbf{y},\Theta ,\mathbf{Q},\lambda )\hfill \end{array}$$

$$\begin{array}{ccc}\hfill & \propto & \pi \left(\mathbf{y}\right|\Theta ,{\Sigma}^{-1},\mathbf{Q},\lambda )\xb7\pi \left({\Sigma}^{-1}\right)\hfill \end{array}$$

$$\begin{array}{ccc}\propto & & |{\Sigma}^{-1}{|}^{\frac{n+\rho -T-1}{2}}\xb7\mathrm{exp}\left\{-\frac{1}{2}tr\left[\left({\mathbf{R}}^{-1}+\sum _{j=0}^{2}\sum _{i=1}^{{n}_{j}}({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j})){({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))}^{\prime}\right){\Sigma}^{-1}\right]\right\}\hfill \end{array}$$

$$\begin{array}{ccc}\sim & & Wi({\mathbf{D}}^{-1},n+\rho )\hfill \end{array}$$

Within the Bayesian framework, with the explicit expressions described in the preceding section, a Markov chain Monte Carlo (MCMC) technique can be used to draw samples from the joint posterior distribution of the unknown parameters (6). We use a hybrid scheme of the Gibbs sampler and the Metropolis-Hastings (M-H) algorithm [27,28]. In particular, Gibbs sampling steps update ${\Sigma}^{-1}$, while the M-H algorithm updates ${\Theta}_{j}$ and the QTL position. After generating a random sequence of states $({\lambda}^{0},{\mathbf{Q}}^{0},{\Theta}^{0},{\Sigma}^{0})$, $({\lambda}^{1},{\mathbf{Q}}^{1},{\Theta}^{1},{\Sigma}^{1})$, ⋯, $({\lambda}^{N},{\mathbf{Q}}^{N},{\Theta}^{N},{\Sigma}^{N})$, the final MCMC samples are collected, from which we are able to make inference about the unknown parameters. The construction of the Markov chain process is described in Appendix A.
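A single random-walk Metropolis-Hastings update of the kind used here for ${\Theta}_{j}$ (or λ) can be sketched as follows; the function names and the Gaussian random-walk proposal are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def mh_update_theta(theta, log_target, proposal_sd, rng):
    """One random-walk Metropolis-Hastings update.

    theta       : current parameter vector (e.g., Theta_j)
    log_target  : function returning the log of the unnormalized
                  full conditional posterior pi(Theta_j | y, ...)
    proposal_sd : standard deviation of the Gaussian random-walk proposal
    """
    proposal = theta + rng.normal(scale=proposal_sd, size=theta.shape)
    log_ratio = log_target(proposal) - log_target(theta)
    # Accept with probability min(1, exp(log_ratio))
    if np.log(rng.uniform()) < log_ratio:
        return proposal, True
    return theta, False
```

In the hybrid scheme, this step alternates with a Gibbs draw of ${\Sigma}^{-1}$ from its Wishart full conditional and a draw of each ${Q}_{i}$ from its discrete conditional distribution.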

For parameters λ and Θ, the empirical means of their marginal posteriors are the Bayesian estimators. As justified by Tierney [28], for any function f that is square integrable with respect to the stationary distribution π, we have:
under the assumption that $({\lambda}^{\left(k\right)},{\Theta}^{\left(k\right)},{\mathbf{Q}}^{\left(k\right)},{\Sigma}^{\left(k\right)})$ are the samples from the Markov chain. In other words, the empirical averages of the corresponding MCMC samples may be regarded as the consistent estimators for the unknown parameters.

$$\begin{array}{c}\hfill {\overline{f}}_{N}=\frac{1}{N}\sum _{k=1}^{N}f({\lambda}^{\left(k\right)},{\Theta}^{\left(k\right)},{\mathbf{Q}}^{\left(k\right)},{\Sigma}^{\left(k\right)})\to {E}_{\pi}\left[f(\lambda ,\Theta ,\mathbf{Q},\Sigma \phantom{\rule{0.166667em}{0ex}}|\phantom{\rule{0.166667em}{0ex}}\mathbf{y})\phantom{\rule{0.166667em}{0ex}}\right],\end{array}$$

The marginal posterior densities of these parameters are determined from the kernel density estimator [29], since the closed forms of their full conditional posteriors are not available. For example, the histogram kernel density estimator for λ is given by:

$$\begin{array}{c}\hfill \widehat{\pi}\left(\lambda \phantom{\rule{0.166667em}{0ex}}\right|\phantom{\rule{0.166667em}{0ex}}\mathbf{y})=\frac{1}{Nh}\sum _{j=0}^{{D}_{m}/h}I(jh<\phantom{\rule{0.166667em}{0ex}}\lambda \phantom{\rule{0.166667em}{0ex}}\le \phantom{\rule{0.166667em}{0ex}}(j+1)h)\sum _{k=1}^{N}I(0<\phantom{\rule{0.166667em}{0ex}}\frac{{\lambda}^{\left(k\right)}}{h}-j\phantom{\rule{0.166667em}{0ex}}\le \phantom{\rule{0.166667em}{0ex}}1).\end{array}$$
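This histogram estimator amounts to binning the N stored MCMC samples of λ with bin width h over $[0,{D}_{m}]$ and normalizing by $Nh$. A minimal sketch (illustrative function name):

```python
import numpy as np

def histogram_posterior(samples, D_m, h):
    """Histogram density estimate of the marginal posterior of lambda.

    samples : array of N MCMC samples lambda^(k)
    D_m     : chromosome (linkage group) length, so lambda lies in [0, D_m]
    h       : bin width
    Returns the left bin edges and the estimated density in each bin.
    """
    edges = np.arange(0.0, D_m + h, h)
    counts, _ = np.histogram(samples, bins=edges)
    return edges[:-1], counts / (len(samples) * h)
```

Because each count is divided by $Nh$, the estimated density integrates to one over $[0,{D}_{m}]$, and its peaks indicate likely QTL positions.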

A second important estimation issue is to obtain confidence intervals for the unknowns. Box and Tiao [30] suggested that highest posterior density (HPD) regions can be constructed to give confidence intervals for the parameters of interest, and a detailed method for developing an approximate HPD region from MCMC samples can be found in Ritter and Tanner [31]. Alternatively, we can obtain the approximate HPD regions for the parameters directly from their corresponding smooth density estimators.

Finally, in order to estimate the Monte Carlo error via the central limit theorem, Geyer [32] suggested three types of consistent estimators: the window estimators, the method of standardized time series, and the specialized Markov chain estimators. Among them, the window estimator probably provides the best estimates, although it requires stronger regularity conditions for consistency.

In longitudinal data analysis, statistical modeling of the covariance structure is generally desirable, given that time-dependent variances and correlations follow a certain pattern. Below, we incorporate two commonly used approaches for the covariance structure in functional mapping into the Bayesian framework.

When a stationary AR(1) model is used, we specify the structure of the covariance matrix Σ by a constant variance and a stationary covariance, i.e.,

$$\begin{array}{ccc}\hfill \mathrm{var}\left(y\right(t\left)\right)& =& {\sigma}^{2},\phantom{\rule{1.em}{0ex}}\forall \phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}1\le t\le T\hfill \end{array}$$

$$\begin{array}{ccc}\hfill \mathrm{cov}(y\left({t}_{1}\right),y\left({t}_{2}\right))& =& {\sigma}^{2}{\rho}^{|{t}_{1}-{t}_{2}|},\phantom{\rule{1.em}{0ex}}\forall \phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}1\le {t}_{1}\ne {t}_{2}\le T.\hfill \end{array}$$
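Under these two stationarity assumptions, the full $T\times T$ matrix Σ is determined by just ${\sigma}^{2}$ and ρ, and can be assembled directly (an illustrative sketch):

```python
import numpy as np

def ar1_covariance(T, sigma2, rho):
    """Stationary AR(1) covariance: Sigma[t1, t2] = sigma^2 * rho^|t1 - t2|."""
    idx = np.arange(T)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])
```

Within the MCMC scheme, this function would be called at every evaluation of the likelihood with the current draws of ${\sigma}^{2}$ and ρ.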

We use an inverse gamma prior for ${\sigma}^{2}$ and a uniform prior restricted to $[-1,1]$ for ρ. Assuming that the priors of these two parameters are independent, the posterior density of $(\lambda ,\Theta ,{\sigma}^{2},\rho )$ is given as:
where $\pi \left({\sigma}^{2}\right)=IG(\alpha ,\beta )$ and $\pi \left(\rho \right)=Uniform(-1,1)$.

$$\begin{array}{c}\hfill \pi (\lambda ,\Theta ,\mathbf{Q},\rho ,{\sigma}^{2}|\mathbf{y})\propto \pi \left(\mathbf{y}\right|\Theta ,\mathbf{Q},{\sigma}^{2},\rho )\xb7\pi \left(\mathbf{Q}\right|\lambda )\xb7\pi \left(\lambda \right)\xb7\pi \left({\sigma}^{2}\right)\xb7\pi \left(\rho \right)\xb7\prod _{j=0}^{2}\pi \left({\Theta}_{j}\right),\end{array}$$

The closed forms of the full conditional posterior distributions of ${\sigma}^{2}$ and ρ are not available, but explicit expressions that are proportional to these posteriors can be derived:
for ${\sigma}^{2}$, and:
for ρ. Based on these expressions, the corresponding Metropolis-Hastings steps can be developed to update ${\sigma}^{2}$ and ρ within the MCMC estimation scheme (Appendix B).

$$\begin{array}{ccc}\hfill \pi \left({\sigma}^{2}\right|\mathbf{y},\lambda ,\mathbf{Q},\Theta ,\rho )& \propto & \pi \left(\mathbf{y}\right|\lambda ,\mathbf{Q},\Theta ,{\sigma}^{2},\rho )\xb7\pi \left({\sigma}^{2}\right)\hfill \end{array}$$

$$\begin{array}{ccc}& \propto & |\Sigma ({\sigma}^{2},\rho ){|}^{-\frac{n}{2}}\xb7\frac{{\beta}^{\alpha}}{\Gamma \left(\alpha \right)}\xb7{\sigma}^{2(-\alpha -1)}\hfill \end{array}$$

$$\begin{array}{ccc}& \xb7& \mathrm{exp}\{-\frac{\beta}{{\sigma}^{2}}-\frac{1}{2}\sum _{j=0}^{2}\sum _{i=1}^{{n}_{j}}{({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))}^{\prime}{\Sigma}^{-1}({\sigma}^{2},\rho )({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))\}\hfill \end{array}$$

$$\begin{array}{ccc}& & \pi \left(\rho \right|\mathbf{y},\lambda ,\mathbf{Q},\Theta ,{\sigma}^{2})\propto \pi \left(\mathbf{y}\right|\lambda ,\mathbf{Q},\Theta ,{\sigma}^{2},\rho )\xb7\pi \left(\rho \right)\hfill \end{array}$$

$$\begin{array}{ccc}& \propto & \frac{1}{2}\xb7{|\Sigma ({\sigma}^{2},\rho )|}^{-\frac{n}{2}}\xb7\mathrm{exp}\{-\frac{1}{2}\sum _{j=0}^{2}\sum _{i=1}^{{n}_{j}}{({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))}^{\prime}{\Sigma}^{-1}({\sigma}^{2},\rho )({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))\}\hfill \end{array}$$

According to Gabriel [33], the error at the current time depends on the errors at the previous times and the innovation error generated at the current time. Such a structured antedependence (SAD) model can be described by antedependence coefficients and innovation variances. The first-order SAD, or SAD(1), model contains only one antedependence coefficient (ϕ). Assuming a constant innovation variance (${\nu}^{2}$), Jaffrézic et al. [34] derived a SAD(1)-structured covariance as:
It can be seen that neither the variance nor the correlation function is stationary for the SAD(1) model, i.e., the variance can change with time, and the correlation does not depend only on the time lag $|{t}_{1}-{t}_{2}|$.

$$\begin{array}{c}\hfill \begin{array}{cc}\phantom{\rule{5.69054pt}{0ex}}\hfill & {\displaystyle \mathrm{var}\left(t\right)=\frac{1-{\varphi}^{2t}}{1-{\varphi}^{2}}{\nu}^{2},\phantom{\rule{1.em}{0ex}}\forall \phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}1\le t\le T}\hfill \\ & {\displaystyle \mathrm{cov}({t}_{1},{t}_{2})={\varphi}^{{t}_{1}-{t}_{2}}\frac{1-{\varphi}^{2{t}_{2}}}{1-{\varphi}^{2}}{\nu}^{2},\phantom{\rule{1.em}{0ex}}\forall \phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}1\le {t}_{2}<{t}_{1}\le T.}\hfill \end{array}\end{array}$$
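Taking the expressions above as the variances and covariances (with the covariance formula applying for ${t}_{1}\ge {t}_{2}$ and symmetry filling in the rest), the SAD(1) covariance matrix can be built as follows (an illustrative sketch):

```python
import numpy as np

def sad1_covariance(T, nu2, phi):
    """SAD(1) covariance (Jaffrezic et al.), times indexed 1..T:
    var(t)       = nu^2 (1 - phi^(2t)) / (1 - phi^2)
    cov(t1, t2)  = phi^(t1 - t2) * nu^2 (1 - phi^(2 t2)) / (1 - phi^2), t1 >= t2
    """
    times = np.arange(1, T + 1)
    Sigma = np.empty((T, T))
    for a in range(T):
        for b in range(T):
            t1, t2 = max(times[a], times[b]), min(times[a], times[b])
            Sigma[a, b] = phi ** (t1 - t2) * nu2 * (1 - phi ** (2 * t2)) / (1 - phi ** 2)
    return Sigma
```

Unlike the AR(1) matrix, the diagonal entries here grow with t, reflecting the nonstationary variance of the SAD(1) process.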

We pose an inverse-gamma prior on innovation variance ${\nu}^{2}$ and a normal prior on antedependence coefficient ϕ. In a real data analysis, the priors are selected as $\pi \left({\nu}^{2}\right)=IG(\alpha ,\beta )=IG(1,1)$ and $\pi \left(\varphi \right)=N({\mu}_{\varphi},{\eta}_{\varphi})=N(0,10)$. The explicit expressions that are proportional to the full conditional posteriors of these two parameters can be derived as:
for ${\nu}^{2}$, and:
for ϕ.

$$\begin{array}{ccc}\hfill \pi \left({\nu}^{2}\right|\mathbf{y},\lambda ,\mathbf{Q},\Theta ,\varphi )& \propto & \pi \left(\mathbf{y}\right|\lambda ,\mathbf{Q},\Theta ,{\nu}^{2},\varphi )\xb7\pi \left({\nu}^{2}\right)\hfill \end{array}$$

$$\begin{array}{ccc}& \propto & |\Sigma ({\nu}^{2},\varphi ){|}^{-\frac{n}{2}}\xb7\frac{{\beta}^{\alpha}}{\Gamma \left(\alpha \right)}\xb7{\nu}^{2(-\alpha -1)}\hfill \end{array}$$

$$\begin{array}{ccc}& \xb7& \mathrm{exp}\{-\frac{\beta}{{\nu}^{2}}-\frac{1}{2}\sum _{j=0}^{2}\sum _{i=1}^{{n}_{j}}{({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))}^{\prime}{\Sigma}^{-1}({\nu}^{2},\varphi )({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))\}\hfill \end{array}$$

$$\begin{array}{ccc}\hfill \pi \left(\varphi \phantom{\rule{0.277778em}{0ex}}\right|\phantom{\rule{0.277778em}{0ex}}\mathbf{y},\lambda ,\mathbf{Q},\Theta ,{\nu}^{2})& \propto & \pi \left(\mathbf{y}\phantom{\rule{0.277778em}{0ex}}\right|\phantom{\rule{0.277778em}{0ex}}\lambda ,\mathbf{Q},\Theta ,{\nu}^{2},\varphi )\xb7\pi \left(\varphi \right)\hfill \end{array}$$

$$\begin{array}{ccc}& \propto & \frac{1}{2}\xb7{|\Sigma ({\nu}^{2},\varphi )|}^{-\frac{n}{2}}\hfill \end{array}$$

$$\begin{array}{ccc}& \xb7& \mathrm{exp}\{-\frac{1}{2{\eta}_{\varphi}}{(\varphi -{\mu}_{\varphi})}^{2}-\frac{1}{2}\sum _{j=0}^{2}\sum _{i=1}^{{n}_{j}}{({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))}^{\prime}{\Sigma}^{-1}({\nu}^{2},\varphi )({\mathbf{y}}_{ij}-g\left(\mathbf{t}\right|{\Theta}_{j}))\}\hfill \end{array}$$

The estimation of the number of QTLs is a very important but difficult issue in QTL mapping. In ML, the QTL number can be estimated on the basis of an assumed number by changing the dimensionality of the model. The best QTL number is chosen using a model-selection criterion such as Akaike's AIC or by hypothesis testing. However, the theory underlying AIC does not apply in the mixture model context because of the absence of a single dominating local maximum in the likelihood. The unsuitability of the theory used to construct the null distribution of the likelihood ratio statistic for mixture models means that hypothesis testing is not straightforward in this context. Moreover, because the number of QTLs (κ) cannot be estimated as a parameter, it is not possible to estimate the sampling error and confidence interval of the estimate for the QTL number. In the Bayesian paradigm, by contrast, inference is based on the posterior distribution of the parameters. Inference for κ is then based on the marginal posterior distribution, $Pr(\kappa =l|\mathbf{y}),l=1,2,\dots$

Models $\kappa ={l}_{1}$ and $\kappa ={l}_{2}$ may be compared via the ratio of their posterior probabilities:
where the ratio of marginal probabilities of **y**, ${B}_{{l}_{1}{l}_{2}}=\frac{P\left(\mathbf{y}\right|\kappa ={l}_{1})}{P\left(\mathbf{y}\right|\kappa ={l}_{2})}$, is known as the Bayes factor for comparing $\kappa ={l}_{1}$ with $\kappa ={l}_{2}$. The Bayes factor does not depend on the prior distribution of κ. Recently, the Bayesian analysis of mixtures with an unknown number of components has received great attention [13,35,46]. In practice, a Bayes factor larger than 100 can often be regarded as evidence for preferring ${l}_{1}$ QTLs over ${l}_{2}$ QTLs.

$$\begin{array}{c}\hfill \frac{P(\kappa ={l}_{1}|\mathbf{y})}{P(\kappa ={l}_{2}|\mathbf{y})}=\frac{P\left(\mathbf{y}\right|\kappa ={l}_{1})Pr(\kappa ={l}_{1})}{P\left(\mathbf{y}\right|\kappa ={l}_{2})Pr(\kappa ={l}_{2})}={B}_{{l}_{1}{l}_{2}}\frac{Pr(\kappa ={l}_{1})}{Pr(\kappa ={l}_{2})},\end{array}$$
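Given estimates of the log marginal likelihoods $\mathrm{log}\phantom{\rule{0.166667em}{0ex}}P\left(\mathbf{y}\right|\kappa =l)$ (which must themselves be estimated from the MCMC output), the posterior odds follow directly from the relation above. A minimal sketch with illustrative names:

```python
import math

def posterior_odds(log_ml_1, log_ml_2, prior_1=0.5, prior_2=0.5):
    """Posterior odds P(kappa=l1 | y) / P(kappa=l2 | y)
    = Bayes factor B_{l1 l2} * prior odds."""
    bayes_factor = math.exp(log_ml_1 - log_ml_2)
    return bayes_factor * (prior_1 / prior_2)
```

With equal prior probabilities on the two models, the posterior odds reduce to the Bayes factor itself, so the "larger than 100" rule of thumb applies directly.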

Cheverud et al. [36] constructed a linkage map with 76 microsatellite markers for 535 F${}_{2}$ mice derived from two strains, the Large (LG/J) and Small (SM/J). The total length of this map is ∼1,500 cM (in Haldane's units) and the average marker interval is ∼27.5 cM. The same experiment was repeated by Vaughn et al. [37], in which 502 F${}_{2}$ mice were generated and a linkage map 1,780 cM long was constructed. In both experiments, each F${}_{2}$ progeny was measured for body mass at 10 weekly intervals starting at age seven days. The raw weights were corrected for the effects of covariates due to dam, litter size at birth, parity and sex. We combined these two F${}_{2}$ populations for QTL mapping of growth trajectories. Overall, about $10\%$ of the marker genotypes were randomly missing. The mice with missing data were excluded from the analyses.

Zhao et al. [23,38] first analyzed Vaughn et al.'s [37] data set by using functional mapping with maximum likelihood-based methods. They showed that body masses in the F${}_{2}$ mice follow a logistic curve, which can be described by Equation 2, but display substantial variation in the shape of the curves. Bayesian-based functional mapping was used to search the genome for the QTLs that control mouse growth curves by estimating the QTL location (λ), genotype-specific curve parameters (Θ), and the structured or unstructured covariance matrix (Σ). The prior for the locations of s QTLs (${\lambda}_{1},...,{\lambda}_{s}$) along each chromosome is simply assumed to be uniform over $[0,{D}_{m}]$. Maximum likelihood estimates of the growth curve parameters by Zhao et al. [24,37] provide information about the priors of these parameters. The priors for ${\Theta}_{{j}_{1}{j}_{2}...{j}_{s}}\phantom{\rule{3.33333pt}{0ex}}({j}_{1},...,{j}_{s}=0,1,2)$ are multivariate normal, centered at ${(30,10,0.6)}^{T}$ with a large dispersion $\mathrm{diag}(9,4,1)$. The prior for the covariance matrix is set to be inverse-Wishart$({R}^{-1},\rho )$, where R is given by the sample covariance matrix.

We used the distance of 40 cM from the first marker on each chromosome as the initial value of λ. The initial values of the other parameters were generated from their priors. For each analysis, the Markov chain was run for 60,000 cycles after discarding the first 2,000 cycles as the burn-in period. To reduce the serial correlation among samples, the chain was thinned by saving one sample every 60 cycles, so that a total of 1,000 samples were stored for calculating the targeted posterior distributions of the unknowns [26].
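The burn-in and thinning bookkeeping just described can be sketched as follows (the `mcmc_step` argument is a hypothetical stand-in for one full update cycle of all unknowns):

```python
def run_chain(mcmc_step, state, n_cycles=60000, burn_in=2000, thin=60):
    """Run the chain for burn_in + n_cycles updates and keep one state
    every `thin` cycles after the burn-in period."""
    samples = []
    for cycle in range(burn_in + n_cycles):
        state = mcmc_step(state)
        if cycle >= burn_in and (cycle - burn_in) % thin == thin - 1:
            samples.append(state)
    return samples

# With 60,000 post-burn-in cycles and thinning of 60, 1,000 states are stored.
stored = run_chain(lambda s: s + 1, 0, n_cycles=60000, burn_in=2000, thin=60)
```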

Bayesian functional mapping was used for a genome-wide scan for QTLs controlling body mass growth trajectories in the F${}_{2}$ mouse population. Estimated marginal posteriors of the QTL locations over all 19 mouse chromosomes are illustrated in Figure 1, from which posterior peaks were observed on chromosomes 6, 7, 10, 11, and 15. By calculating, with equation 28, the Bayes factor of the model assuming one QTL against the model assuming no QTL, chromosomes 6, 7, and 10 were each detected to harbor a QTL, with log-scaled Bayes factors of 12.91, 13.47, and 7.99, respectively. The estimated locations of these QTLs are between markers $D6nds$ and $D6Mit58$ on chromosome 6, markers $D7nds1$ and $D7Mit17$ on chromosome 7, and markers $D10Mit133$ and $D10Mit14$ on chromosome 10.

| Parameter | $QQ$ | $Qq$ | $qq$ |
| --- | --- | --- | --- |
| **Chromosome 6** | | | |
| Location, cM from the first marker | 82.68 (67.77, 92.96) | | |
| α | 36.09 (35.20, 37.04) | 34.94 (34.36, 35.52) | 33.12 (32.36, 33.93) |
| β | 11.93 (11.44, 12.45) | 11.58 (11.16, 12.03) | 11.07 (10.65, 11.51) |
| γ | 0.65 (0.64, 0.66) | 0.65 (0.64, 0.66) | 0.65 (0.64, 0.67) |
| **Chromosome 7** | | | |
| Location, cM from the first marker | 46.84 (38.80, 56.02) | | |
| α | 36.55 (35.50, 37.73) | 35.61 (34.56, 36.50) | 33.38 (32.54, 34.33) |
| β | 11.83 (11.43, 12.34) | 11.27 (10.90, 11.73) | 11.25 (10.76, 11.70) |
| γ | 0.65 (0.63, 0.66) | 0.64 (0.63, 0.65) | 0.65 (0.63, 0.66) |
| **Chromosome 10** | | | |
| Location, cM from the first marker | 77.78 (68.75, 80.96) | | |
| α | 35.41 (34.33, 36.52) | 34.71 (33.67, 35.70) | 33.59 (32.61, 34.42) |
| β | 11.67 (11.23, 12.19) | 11.47 (11.23, 11.81) | 11.01 (10.61, 11.44) |
| γ | 0.65 (0.64, 0.66) | 0.64 (0.63, 0.66) | 0.65 (0.63, 0.66) |

Table 1 tabulates the Bayesian estimates of the locations of the three detected QTLs and their genotype-specific curve parameters, obtained from the marginal posterior means of these parameters. Judging from the estimated 95% equal-tail intervals, used as approximate HPD regions, the Bayesian estimates are reasonably precise (Table 1).

The estimated curve parameters were used to draw the growth curves of the three different genotypes at each of the three detected QTLs (Figure 2). The three QTLs exert an effect on body mass growth curves in a similar pattern. The homozygote for the LG/J allele has the best growth during the time course of measurement, followed by the heterozygote for the LG/J and SM/J alleles and the homozygote for the SM/J allele. Based on quantitative genetic theory, we can partition time-dependent genotypic values into time-dependent additive ($a\left(t\right)$) and time-dependent dominance ($d\left(t\right)$) effect components for each QTL, illustrated in Figure 3. The additive effect due to the substitution of the SM/J allele with the LG/J allele is positive and increases with age for all three detected QTLs. The dominance effect due to the interaction between the LG/J and SM/J alleles remains consistently small over the time course.

Simulation studies were performed to study the statistical behavior of Bayesian-based functional mapping and demonstrate its applicability. The simulation design is based on an F${}_{2}$ population containing 450 individuals. A linkage group of length 100 cM was simulated with 11 equally spaced, ordered codominant markers. A QTL that affects growth curves was assumed at 34 cM from the first marker. The true values of the curve parameters for the three QTL genotypes are given as:

$$\begin{array}{c}\hfill {\Theta}_{0}=(33.4,11.2,0.65),\phantom{\rule{1.em}{0ex}}{\Theta}_{1}=(35.6,11.2,0.64),\phantom{\rule{1.em}{0ex}}{\Theta}_{2}=(36.7,11.9,0.65),\end{array}$$

which are close to the estimates for body mass growth from the real mouse example used above. A total of 10 evenly spaced time points were assumed for measuring the growth curves. The residual covariance matrix among the time points was set to be the same as the sample covariance matrix estimated from the mouse example. Given this hypothesized covariance matrix and the curve parameters for the three QTL genotypes, the heritability of the simulated growth trait at a middle time point is about $0.1$. The time-dependent phenotypic values of the growth trait are assumed to follow a multivariate normal distribution.
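For concreteness, the genotype-specific mean curves implied by these parameter values can be evaluated directly, assuming the common logistic parameterization $g(t)=a/(1+b\,{e}^{-rt})$ used in functional mapping (the variable names below are ours):

```python
import numpy as np

def logistic(t, a, b, r):
    # Logistic growth curve g(t) = a / (1 + b*exp(-r*t)); (a, b, r)
    # correspond to (alpha, beta, gamma) in the text (assumed form).
    return a / (1.0 + b * np.exp(-r * t))

t = np.arange(1, 11)                  # the 10 evenly spaced time points
g_QQ = logistic(t, 36.7, 11.9, 0.65)  # Theta_2
g_Qq = logistic(t, 35.6, 11.2, 0.64)  # Theta_1
g_qq = logistic(t, 33.4, 11.2, 0.65)  # Theta_0
```

Each curve rises monotonically toward its asymptote α, so the three genotypes differ mainly in asymptotic body mass.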

The priors for the curve parameters ${\Theta}_{j}$ ($j=0,1,2$) are assumed to be a multivariate normal distribution centered at $(30,10,0.7)$ with covariance matrix $\Lambda =diag\{10,5,4\}$. We used a uniform distribution over the simulated linkage group as the prior for the QTL location. Three different approaches were used to estimate the residual covariance matrix Σ: (1) estimating the unstructured covariance matrix based on an inverse Wishart prior with $T=10$ degrees of freedom, (2) estimating the structured covariance matrix based on the SAD(1) model by imposing $IG(1,1)$ as a prior for the innovation variance ${\nu}^{2}$ and $Normal(0,10)$ as a prior for the antedependence parameter ϕ, and (3) estimating the structured covariance matrix based on the AR(1) model by imposing $IG(10,1)$ as a prior for the variance ${\sigma}^{2}$ and $Uniform(-1,1)$ as a prior for the correlation ρ.
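The two structured alternatives can be made concrete by building the covariance matrices they imply; a sketch under the standard AR(1) and first-order antedependence constructions (the helper names are ours, and the SAD(1) form assumes the residual process starts from zero):

```python
import numpy as np

def ar1_cov(T, sigma2, rho):
    """Stationary AR(1) covariance: Sigma[i, j] = sigma2 * rho**|i-j|."""
    idx = np.arange(T)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

def sad1_cov(T, nu2, phi):
    """Nonstationary SAD(1) covariance implied by e_t = phi*e_{t-1} + eps_t,
    eps_t ~ N(0, nu2), with e_0 = 0, so the variances grow with t."""
    S = np.zeros((T, T))
    for i in range(T):
        for j in range(T):
            m = min(i, j) + 1  # innovations accumulated by the earlier time
            var_min = nu2 * sum(phi ** (2 * k) for k in range(m))
            S[i, j] = phi ** abs(i - j) * var_min
    return S
```

The AR(1) model thus uses two parameters for a constant variance and geometrically decaying correlation, while SAD(1) uses two parameters but lets the variance change over time.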

For each MCMC run, 10,000 initial "burn-in" iterations were discarded; thereafter, samples were collected once every 60 cycles, so that a working set of 1,000 states was used to estimate the posteriors of the parameters. The MCMC experiment was repeated several times with the same simulated data set to ensure that the chain converged to a stationary distribution. Table 2 gives the Bayesian estimates of the QTL location and growth curves for the different QTL genotypes, along with the $95\%$ empirical highest posterior density (HPD) confidence regions, under the three matrix-estimating approaches described above. It can be seen that our Bayesian-based functional mapping provides reasonably good parameter estimates in terms of accuracy and precision. The estimates of the parameters, especially the QTL location (Figure 4), are affected by the matrix-estimating approach. Overall, the SAD(1)-structured approach is more precise in parameter estimation than the unstructured approach, which in turn is more precise than the AR(1)-structured approach. The SAD(1)-structured approach gives the narrowest confidence interval, [33.01, 36.30], for the QTL location, followed by the unstructured [28.02, 38.16] and AR(1)-structured [25.54, 41.39] approaches. Also, the AR(1)-structured approach generates a small peak in the marginal posterior distribution at a wrong location (Figure 4), increasing the risk of detecting a spurious QTL.

The simulated data set was further analyzed by ML-based functional mapping, in which the likelihood ratio (LR) test statistic was computed and plotted across the linkage group (Figure 5). We found two drawbacks of the ML-based approach. First, the LR profile has two small peaks, creating a chance of claiming a false positive QTL; moreover, neither peak is sharp against the flat profile, suggesting less power and precision for QTL detection by ML-based functional mapping. Second, the significance test for the ML-based approach relies on computationally expensive permutation tests. ML-based functional mapping took about 20 times as much computing time (including 100 permutation tests) as Bayesian-based functional mapping.

The genetic architecture of complex traits can well be understood by incorporating their underlying developmental features described by mathematical functions. Functional mapping that integrates genetics, statistics and developmental biology as a whole can be useful for deciphering the ontogenetic development of the genetic control of a complex quantitative trait [8,9]. Original models for functional mapping were derived within the maximum likelihood (ML) context and implemented with the EM algorithm. A similar approach for mapping logistic QTLs was used by Malosetti [8]. Although ML-based approaches possess many favorable statistical properties in parameter estimation, they may not be powerful enough to handle the complexity of high-dimensional QTL mapping models, as often seen in functional mapping. As an increasingly popular approach, Bayesian methods display remarkable capacity to estimate genetic parameters in QTL mapping [17,18,19,39,40].

In this article, we derived a general Bayesian framework for functional QTL mapping of dynamic traits and implemented Markov chain Monte Carlo (MCMC) algorithms to locate the genomic positions of QTLs and estimate both the mathematical parameters that define a biological process and the statistical parameters that model the covariance structure. The Bayesian-based model allows the estimation of these parameters and their confidence intervals based on posterior distributions, and has great power to handle complex estimation issues related to functional mapping in an effective way. Like the original parametric functional mapping [9], the new model makes it possible to approximate the ontogenetic changes of the genetic effects triggered by a QTL. Because many biological processes, such as growth, follow a particular pattern of development, the ontogenetic control of a QTL can be mathematically described and, thereby, tested by estimating the parameters that define the biological process. The new model also takes advantage of functional mapping's ability to model the structure of the covariance matrix with a stationary or nonstationary approach.

The application of Bayesian approaches to functional mapping was also considered by other authors. Yang and Xu [19] integrated Bayesian shrinkage approaches to map dynamic QTL, but their model was based on a nonparametric Legendre polynomial fitting. Although this treatment may be statistically flexible, its biological relevance may be limited because no biologically sensible functions are deployed. Also, Yang and Xu’s [19] approach did not utilize the high-efficiency characteristic of functional mapping through modeling the structure of an autocorrelated covariance matrix.

| Parameter | $QQ$ | $Qq$ | $qq$ |
| --- | --- | --- | --- |
| **Unstructured approach** | | | |
| Location | 32.74 (28.02, 38.16) | | |
| α | 36.67 (35.81, 37.46) | 36.03 (35.47, 36.63) | 33.67 (32.67, 34.31) |
| β | 11.83 (11.95, 12.64) | 11.22 (10.97, 11.50) | 11.30 (10.94, 11.60) |
| γ | 0.66 (0.64, 0.67) | 0.64 (0.63, 0.65) | 0.64 (0.63, 0.66) |
| **SAD(1)-structured approach** | | | |
| Location | 34.63 (33.01, 36.30) | | |
| α | 36.58 (35.85, 37.36) | 35.88 (35.37, 36.39) | 33.81 (33.09, 34.55) |
| β | 12.04 (11.65, 12.45) | 11.27 (11.01, 11.54) | 11.46 (11.09, 11.83) |
| γ | 0.66 (0.65, 0.67) | 0.64 (0.63, 0.65) | 0.65 (0.64, 0.65) |
| **AR(1)-structured approach** | | | |
| Location | 33.54 (25.54, 41.39) | | |
| α | 36.60 (35.59, 37.66) | 35.57 (34.88, 36.35) | 33.61 (32.65, 34.54) |
| β | 12.04 (11.61, 12.48) | 11.23 (10.99, 11.56) | 11.42 (11.05, 11.83) |
| γ | 0.65 (0.64, 0.66) | 0.63 (0.62, 0.65) | 0.66 (0.64, 0.67) |

Given a simulated time-dependent covariance matrix, Bayesian-based functional mapping obtains different results for the precision of parameter estimation, depending on whether and how the covariance matrix is structured. This suggests the importance of estimating the residual covariance matrix effectively and efficiently. The unstructured approach captures the full information of time-dependent variances and covariances, which is effective but not efficient. The SAD(1)-structured model approximates the changes of variance and correlation over time [20], which is not only efficient (through estimating fewer parameters) but also effective (by reflecting, to a great extent, the reality of this simulated data set). The AR(1)-structured model, assuming stationarity in both variance and correlation, is efficient but not as effective as the SAD(1) model. Overall, the SAD(1)-structured approach performs best for this particular simulated longitudinal data set, followed by the unstructured and AR(1)-structured approaches, in that order. If the covariance does not have a structure, an unstructured approach, such as one based on the Wishart prior, should be used. In Appendix D, we describe a different approach for estimating the full matrix based on a reference prior.

The model was used to reanalyze a published data set on the growth of body mass in mice, confirming the discovery of several QTLs detected by conventional ML-based functional mapping. Moreover, the new model provides estimates of the confidence intervals of the curve parameters, thus allowing better statistical inference about the genetic control of dynamic QTLs. Simulation studies show that Bayesian-based functional mapping is more robust than ML-based functional mapping, in that the former provides more reasonable estimates of QTL positions and dynamic QTL effects when the heritability of growth curves is modest (0.1). However, a unique issue with the Bayesian approach is the stability, or sensitivity, of its parameter estimation under different priors. One approach for testing the model's stability is to initiate the chain with several different starting points and/or different priors under the same MCMC sampling scheme [41], and then examine how the estimates depend on the choices of initial values and priors.

The model proposed in this article can be modified by considering a network of genetic control. As a basic Bayesian framework, our single-QTL interval mapping model is not adequate to explore the effects on developmental variation of a growth trait of interactions between different QTLs from the same genome [43] or different genomes [44], and between QTLs and environments [24]. One of the significant advantages of Bayesian approaches lies in the estimation of the optimal number of QTLs involved. Variable selection via stepwise regression is used in ML mapping, but it is computationally expensive. Corresponding to this variable selection procedure, reversible jump MCMC has been proposed in Bayesian analysis [45,46], although it is subject to poor mixing and slow convergence to the stationary distribution [47,48]. More efficient methods based on Bayesian shrinkage analysis [16] and stochastic search variable selection [17] have since been proposed. These methods do not rely on any explicit form of variable selection; rather, they proceed implicitly by shrinking the effects of excessive QTLs to zero. Our Bayesian-based model, modified to accommodate such complicated features of growth and development, will certainly prove its value in elucidating the genetic architecture of dynamic traits and will probably be the beginning of detecting the driving forces behind developmental genetics and its relationship to the organism as a whole.

The Markov chain for Bayesian functional mapping described in the **Parameter Estimation and Algorithm** section is constructed as follows:

$Updating\phantom{\rule{3.33333pt}{0ex}}\lambda :$ In each step, following the idea of Satagopan et al. [14], λ is updated by using the Metropolis algorithm. A new value ${\lambda}^{*}$ is generated from $Uniform(\mathrm{max}(0,\lambda -\delta ),\mathrm{min}(\lambda +\delta ,{D}_{m}))$ (where $\delta >0$ is a tuning parameter), and this proposal distribution is denoted by $q(\lambda ,{\lambda}^{*})$. The proposed ${\lambda}^{*}$ is accepted with probability $\mathrm{min}({\alpha}_{\lambda},1)$, and the state keeps the current value if the proposal is rejected, where ${\alpha}_{\lambda}$ is given as:

$${\alpha}_{\lambda}(\lambda ,{\lambda}^{*})=\frac{\pi ({\lambda}^{*}\,|\,\mathbf{y},\mathbf{Q},\Theta ,\Sigma )\cdot q({\lambda}^{*},\lambda )}{\pi (\lambda \,|\,\mathbf{y},\mathbf{Q},\Theta ,\Sigma )\cdot q(\lambda ,{\lambda}^{*})}$$

Note that:

$$\begin{aligned}\pi ({\lambda}^{*}\,|\,\mathbf{y},\mathbf{Q},\Theta ,\Sigma )&=\pi ({\lambda}^{*}\,|\,\mathbf{y},\mathbf{Q},\Theta ,\Sigma ,\mathbf{M})\\ &=\pi ({\lambda}^{*}\,|\,\mathbf{Q},\mathbf{M})\\ &\propto \pi (\mathbf{Q}\,|\,{\lambda}^{*},\mathbf{M})\cdot \pi ({\lambda}^{*})\\ &=\prod _{i=1}^{n}\pi ({Q}_{i}\,|\,{\lambda}^{*},{M}_{i})\cdot \pi ({\lambda}^{*})\end{aligned}$$

Similarly, we have:

$$\pi (\lambda \,|\,\mathbf{y},\mathbf{Q},\Theta ,\Sigma )\propto \prod _{i=1}^{n}\pi ({Q}_{i}\,|\,\lambda ,{M}_{i})\cdot \pi (\lambda )$$

Hence, the acceptance probability (A 1) can be simplified as:

$${\alpha}_{\lambda}(\lambda ,{\lambda}^{*})=\mathrm{min}\left(\frac{\prod _{i=1}^{n}\pi ({Q}_{i}\,|\,{\lambda}^{*},{M}_{i})\cdot q({\lambda}^{*},\lambda )}{\prod _{i=1}^{n}\pi ({Q}_{i}\,|\,\lambda ,{M}_{i})\cdot q(\lambda ,{\lambda}^{*})},\;1\right).$$
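One step of this λ update can be sketched as follows, with the user-supplied `log_cond` standing in for $\mathrm{log}\left[\prod _{i}\pi ({Q}_{i}\,|\,\lambda ,{M}_{i})\cdot \pi (\lambda )\right]$ of (A 4); note the Hastings correction for the truncated uniform window near the chromosome ends:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_lambda(lam, log_cond, D_m, delta=2.0):
    """One Metropolis(-Hastings) step for the QTL position lambda."""
    lo, hi = max(0.0, lam - delta), min(lam + delta, D_m)
    lam_new = rng.uniform(lo, hi)
    # Forward proposal density is 1/(hi - lo); the reverse window can differ
    # near the boundaries 0 and D_m, so include the Hastings correction.
    lo_r, hi_r = max(0.0, lam_new - delta), min(lam_new + delta, D_m)
    log_alpha = (log_cond(lam_new) - log_cond(lam)
                 + np.log(hi - lo) - np.log(hi_r - lo_r))
    return lam_new if np.log(rng.uniform()) < log_alpha else lam
```

With a flat conditional and an interior current value, every proposal is accepted, so the chain performs a truncated random walk over $[0,{D}_{m}]$.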

$Updating\phantom{\rule{3.33333pt}{0ex}}\mathbf{Q}:$ Because the n progeny are independent, $\mathbf{Q}$ is updated by separately updating each ${Q}_{i}$. For each progeny i and QTL genotype j, the full conditional density is multinomial with cell probabilities:

$${p}_{ij}=\pi ({Q}_{i}=j\,|\,\mathbf{y},\Theta ,\Sigma ,\lambda )=\frac{\pi ({Q}_{i}=j\,|\,\lambda )\cdot \pi ({\mathbf{y}}_{i}\,|\,\Theta ,\Sigma ,{Q}_{i}=j)}{\sum _{{q}_{i}=0}^{2}\pi ({Q}_{i}={q}_{i}\,|\,\lambda )\cdot \pi ({\mathbf{y}}_{i}\,|\,\Theta ,\Sigma ,{Q}_{i}={q}_{i})}.$$

Hence, at each cycle, we can sample the QTL genotype ${Q}_{i}$ directly from this full conditional density.
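Sampling ${Q}_{i}$ from (A 5) reduces to normalizing three weighted likelihoods; a numerically stable sketch (the function name and log-scale inputs are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_qtl_genotype(log_prior, log_lik):
    """Draw Q_i from its multinomial full conditional.

    log_prior[j] = log pi(Q_i = j | lambda) and log_lik[j] =
    log pi(y_i | Theta, Sigma, Q_i = j) for the genotypes j = 0, 1, 2.
    """
    logp = np.asarray(log_prior) + np.asarray(log_lik)
    logp -= logp.max()        # stabilize before exponentiating
    p = np.exp(logp)
    p /= p.sum()              # the normalization in (A 5)
    return rng.choice(3, p=p), p

# F2 segregation prior (1/4, 1/2, 1/4) with equal likelihoods.
j, p = sample_qtl_genotype(np.log([0.25, 0.5, 0.25]), [0.0, 0.0, 0.0])
```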

$Updating\phantom{\rule{3.33333pt}{0ex}}\Theta :$ We update each ${\Theta}_{j}$ successively by a Metropolis-Hastings algorithm. For each QTL genotype, a new value ${\Theta}_{j}^{*}$ is generated from a proposal density $q({\Theta}_{j},{\Theta}_{j}^{*})$, given the current Θ. The acceptance probability of the move is $\mathrm{min}(1,{\alpha}_{{\Theta}_{j}})$, where, in general, ${\alpha}_{{\Theta}_{j}}$ can be expressed as:

$${\alpha}_{{\Theta}_{j}}=\frac{\pi ({\Theta}_{j}^{*}\,|\,\mathbf{y},{\Theta}_{-j},\Sigma ,\mathbf{Q},\lambda )\cdot q({\Theta}_{j}^{*},{\Theta}_{j})}{\pi ({\Theta}_{j}\,|\,\mathbf{y},{\Theta}_{-j},\Sigma ,\mathbf{Q},\lambda )\cdot q({\Theta}_{j},{\Theta}_{j}^{*})}.$$

Note that the choice of the Metropolis kernel q is essentially arbitrary; a symmetric q, in the sense that $q({\Theta}_{j},{\Theta}_{j}^{*})=q({\Theta}_{j}^{*},{\Theta}_{j})$, is usually preferred because the ratio $q({\Theta}_{j}^{*},{\Theta}_{j})/q({\Theta}_{j},{\Theta}_{j}^{*})$ then cancels in the expression (A 6). Here, for the proposal density, we use a multivariate normal distribution centered at the current Θ, with variance-covariance matrix given by an information-type matrix [49] whose inverse has $(u,v)$th element:

$$\sum _{i=1}^{{n}_{j}}{\left(\frac{\partial g(\mathbf{t}\,|\,{\Theta}_{j})}{\partial {\Theta}_{j,u}}\right)}^{\prime}\left(\frac{\partial g(\mathbf{t}\,|\,{\Theta}_{j})}{\partial {\Theta}_{j,v}}\right)+\frac{1}{2}\frac{\partial}{\partial {\Theta}_{j,u}}\frac{\partial}{\partial {\Theta}_{j,v}}{({\Theta}_{j}-\eta )}^{\prime}{\Sigma}^{-1}({\Theta}_{j}-\eta ).$$

This expression combines the expected information (the first term) with the prior information (the second term) and offers the advantage of avoiding singular information matrices. Unfortunately, a tedious initial analysis has to be conducted to obtain the estimates of ${\Theta}_{j}$ and Σ needed to evaluate (A 7). Instead, the Metropolis algorithm described above can first be carried out with an arbitrary variance-covariance matrix; the posterior means of ${\Theta}_{j}$ and Σ from that run are then plugged into (A 7) for the subsequent analysis.

$Updating\phantom{\rule{3.33333pt}{0ex}}\Sigma :$ We generate a new value of ${\Sigma}^{-1}$ directly from its full conditional posterior distribution. This is straightforward since it has an explicit expression for its full conditional posterior distribution.

Below, we describe the Metropolis-Hastings steps for updating ${\sigma}^{2}$ and ρ within the MCMC estimation scheme when the residual covariance matrix is structured by the AR(1) model.

$\mathbf{Updating}\phantom{\rule{3.33333pt}{0ex}}{\sigma}^{2}:$ In each MCMC cycle, a candidate value ${\sigma}^{2*}$ is generated from its proposal distribution, which can be specified as:

$$q({\sigma}^{2*}\,|\,{\sigma}^{2})=IG\left(\frac{1}{{\sigma}^{2}}+1,\,1\right).$$

This proposal is accepted with probability $\mathrm{min}({\alpha}_{{\sigma}^{2}},1)$, where:

$${\alpha}_{{\sigma}^{2}}=\frac{\pi ({\sigma}^{2*}\,|\,\mathbf{y},\lambda ,\mathbf{Q},\Theta ,\rho )\cdot q({\sigma}^{2}\,|\,{\sigma}^{2*})}{\pi ({\sigma}^{2}\,|\,\mathbf{y},\lambda ,\mathbf{Q},\Theta ,\rho )\cdot q({\sigma}^{2*}\,|\,{\sigma}^{2})}.$$
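This update can be sketched as below; the inverse-gamma proposal matches the specification above, `log_cond` is a hypothetical stand-in for the log full conditional of ${\sigma}^{2}$, and the standard Hastings correction for the asymmetric proposal is included:

```python
import numpy as np
from math import lgamma, log

rng = np.random.default_rng(2)

def ig_sample(a, b):
    """Draw from the inverse-gamma IG(a, b), density ∝ x^{-a-1} e^{-b/x}."""
    return b / rng.gamma(a)

def ig_logpdf(x, a, b):
    """Log density of IG(a, b) at x."""
    return a * log(b) - lgamma(a) - (a + 1) * log(x) - b / x

def update_sigma2(sigma2, log_cond):
    """One Metropolis-Hastings step for sigma^2 under the AR(1) structure,
    with proposal q(sigma2* | sigma2) = IG(1/sigma2 + 1, 1)."""
    prop = ig_sample(1.0 / sigma2 + 1.0, 1.0)
    log_alpha = (log_cond(prop) - log_cond(sigma2)
                 + ig_logpdf(sigma2, 1.0 / prop + 1.0, 1.0)    # reverse move
                 - ig_logpdf(prop, 1.0 / sigma2 + 1.0, 1.0))   # forward move
    return prop if log(rng.uniform()) < log_alpha else sigma2
```

The ν² update for the SAD(1) structure in Appendix C has exactly the same form with ν² in place of σ².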

$\mathbf{Updating}\phantom{\rule{3.33333pt}{0ex}}\rho :$ The proposal distribution for ρ can be specified as a uniform over a moderate range around the current value of ρ, i.e., $q({\rho}^{*}\,|\,\rho )=Uniform(\mathrm{max}(-1,\rho -{\delta}_{\rho}),\,\mathrm{min}(\rho +{\delta}_{\rho},1))$. A new value ${\rho}^{*}$ is generated from this proposal distribution and accepted with probability $\mathrm{min}({\alpha}_{\rho},1)$, with:

$${\alpha}_{\rho}=\frac{\pi ({\rho}^{*}\,|\,\mathbf{y},\lambda ,\mathbf{Q},\Theta ,{\sigma}^{2})\cdot q(\rho \,|\,{\rho}^{*})}{\pi (\rho \,|\,\mathbf{y},\lambda ,\mathbf{Q},\Theta ,{\sigma}^{2})\cdot q({\rho}^{*}\,|\,\rho )}.$$

The Metropolis-Hastings steps for updating ${\nu}^{2}$ and ϕ within the MCMC estimation scheme for a SAD(1)-structured residual covariance matrix are described as follows:

$\mathbf{Updating}\phantom{\rule{3.33333pt}{0ex}}{\nu}^{2}$: In each MCMC cycle, a candidate value ${\nu}^{2*}$ is generated from its proposal distribution, which can be specified as:

$$q({\nu}^{2*}\,|\,{\nu}^{2})=IG\left(\frac{1}{{\nu}^{2}}+1,\,1\right).$$

This proposal is accepted with probability $\mathrm{min}({\alpha}_{{\nu}^{2}},1)$, where:

$${\alpha}_{{\nu}^{2}}=\frac{\pi ({\nu}^{2*}\,|\,\mathbf{y},\lambda ,\mathbf{Q},\Theta ,\varphi )\cdot q({\nu}^{2}\,|\,{\nu}^{2*})}{\pi ({\nu}^{2}\,|\,\mathbf{y},\lambda ,\mathbf{Q},\Theta ,\varphi )\cdot q({\nu}^{2*}\,|\,{\nu}^{2})}.$$

$\mathbf{Updating}\phantom{\rule{3.33333pt}{0ex}}\varphi $: The proposal distribution for ϕ can be specified as a normal centered at the current value of ϕ, i.e., $q({\varphi}^{*}\,|\,\varphi )=N(\varphi ,{V}_{\varphi})$. A new value ${\varphi}^{*}$ is then generated from this proposal distribution and accepted with probability $\mathrm{min}({\alpha}_{\varphi},1)$, with:

$${\alpha}_{\varphi}=\frac{\pi ({\varphi}^{*}\,|\,\mathbf{y},\lambda ,\mathbf{Q},\Theta ,{\nu}^{2})\cdot q(\varphi \,|\,{\varphi}^{*})}{\pi (\varphi \,|\,\mathbf{y},\lambda ,\mathbf{Q},\Theta ,{\nu}^{2})\cdot q({\varphi}^{*}\,|\,\varphi )}.$$

Since the normal proposal is symmetric, the ratio of proposal densities equals one and ${\alpha}_{\varphi}$ reduces to the ratio of full conditionals.

Although the Wishart is a standard prior for the covariance matrix and convenient to use, it has been criticized as too restrictive and lacking flexibility. Also, as pointed out by Dempster et al. [52] and Stein [50], if the true covariance matrix Σ is close to I, the eigenstructure of Σ can be systematically distorted by the estimator, so this conventional prior can behave poorly, especially when the sample size is small or the data are sparse. To overcome these drawbacks, several more flexible priors have been introduced, including a log-matrix prior [51], a reference noninformative prior [53], and a constrained Wishart prior [54]. The covariance matrix can also be modeled with a different parameterization [55]. This strategy is based on the key idea that a covariance matrix for longitudinal data can be diagonalized, i.e.,

$$\begin{array}{c}\hfill {\mathbf{A}}^{\prime}\Sigma \mathbf{A}=\mathbf{S},\end{array}$$

where $\mathbf{S}$ is a diagonal matrix with positive entries and $\mathbf{A}$ is a unique lower triangular matrix with 1's on the diagonal.
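This diagonalization can be computed via the UDU′ factorization $\Sigma =\mathbf{U}\mathbf{S}{\mathbf{U}}^{\prime}$ with $\mathbf{U}$ unit upper triangular, whence $\mathbf{A}={({\mathbf{U}}^{-1})}^{\prime}$ is unit lower triangular; a numpy sketch (the `diagonalize` helper is ours):

```python
import numpy as np

def diagonalize(sigma):
    """Find unit lower-triangular A and diagonal S with A' Sigma A = S,
    via the UDU' factorization Sigma = U S U' (U unit upper triangular)."""
    n = sigma.shape[0]
    work = sigma.astype(float).copy()
    U = np.eye(n)
    s = np.zeros(n)
    for j in range(n - 1, -1, -1):
        s[j] = work[j, j]                 # last pivot of the current block
        U[:j, j] = work[:j, j] / s[j]     # column of U above the pivot
        work[:j, :j] -= s[j] * np.outer(U[:j, j], U[:j, j])
    A = np.linalg.inv(U).T                # A = (U^{-1})' is unit lower triangular
    return A, np.diag(s)

# Example: an AR(1)-like covariance matrix is diagonalized exactly.
Sigma = np.array([[1.0, 0.5, 0.25],
                  [0.5, 1.0, 0.5],
                  [0.25, 0.5, 1.0]])
A, S = diagonalize(Sigma)
```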

Without the nice properties afforded by conjugate priors, the resulting full conditional for Σ is no longer inverse Wishart, and one often has to generate Σ componentwise by Gibbs sampling. Given the problem's complexity, we will instead focus on computationally convenient methods that can generate the entire Σ at once. Therefore, as an option to further improve the estimation of the covariance matrix, the reference (noninformative) prior, first introduced by Berger and Bernardo [56] and thoroughly discussed by Yang and Berger [53], is investigated.

The Jeffreys prior,

$$\begin{array}{c}\hfill \pi (\Sigma )={|\Sigma |}^{-(p+1)/2},\end{array}$$

might be the most commonly used reference prior. However, care must be taken when using this prior, as it can lead to an improper posterior distribution, and it also fails to shrink the eigenvalues appropriately. The approach proposed by Yang and Berger [53], however, has proven able to overcome these inadequacies of the Jeffreys prior. Note that Σ can be decomposed as $\Sigma ={\mathbf{ODO}}^{\prime}$, where $\mathbf{O}$ is orthogonal with positive entries in the first row, and $\mathbf{D}=diag({d}_{1},{d}_{2},...,{d}_{p})$ with ${d}_{1}\geqslant {d}_{2}\geqslant \cdots \geqslant {d}_{p}$. Given the monotonically ordered $\left\{{d}_{i}\right\}$, the reference prior for $(\mathbf{D},\mathbf{O})$ is:

$$\pi (\mathbf{D},\mathbf{O})\propto \frac{1}{|\Sigma |\prod _{i<j}({d}_{i}-{d}_{j})}\phantom{\rule{0.277778em}{0ex}},$$

and the resulting posterior distribution is:

$$\pi (\Sigma \,|\,\mathbf{y},\Theta ,\mathbf{Q},\lambda )\propto \frac{\mathrm{exp}\left[\mathrm{tr}\left(-\frac{1}{2}\,\mathbf{O}{\mathbf{D}}^{-1}{\mathbf{O}}^{\prime}\sum _{j=0}^{2}\sum _{i=1}^{{n}_{j}}({\mathbf{y}}_{ij}-g(\mathbf{t}\,|\,{\Theta}_{j}))\,{({\mathbf{y}}_{ij}-g(\mathbf{t}\,|\,{\Theta}_{j}))}^{\prime}\right)\right]}{{|\Sigma |}^{n/2+1}}.$$

Compared with the Jeffreys prior, the posterior given in equation (D3) is always proper. Also note that, since this reference prior puts more mass near the region of equal eigenvalues, it can produce an estimator with better eigenstructure.

Again, it is very difficult to evaluate the posterior (D3) analytically. Yang and Berger [53] suggested using a Metropolized hit-and-run sampler to compute the integration. The detailed sampling procedure at the $k$th iteration is given as follows:

- (1)
- Given the current positive-definite matrix ${\Sigma}_{k}$, set ${\mathbf{W}}_{k}=\mathrm{log}\,{\Sigma}_{k}$, in the sense that ${\Sigma}_{k}={\sum}_{i=0}^{\infty}\frac{{\mathbf{W}}_{k}^{i}}{i!}$;
- (2)
- Randomly generate a symmetric $p\times p$ matrix T with elements ${t}_{ij}={z}_{ij}/{\left({\sum}_{l\leqslant m}{z}_{lm}^{2}\right)}^{1/2}$, where ${z}_{ij}\sim N(0,1)$ i.i.d. for $i\leqslant j$;
- (3)
- Set ${\mathbf{W}}^{*}={\mathbf{W}}_{k}+\nu T$ where ν is generated from $N(0,1)$;
- (4)
- Update ${\mathbf{W}}_{k}$ with an acceptance probability $\mathrm{min}(1,{\pi}^{*}\left({\mathbf{W}}^{*}\right|\mathbf{y},\Theta ,\mathbf{Q},\lambda )/{\pi}^{*}\left({\Sigma}_{k}\right|\mathbf{y},\Theta ,\mathbf{Q},\lambda ))$.
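Steps (1)-(3) can be sketched using a symmetric eigendecomposition for the matrix logarithm and exponential (the function names are ours, and the unit Frobenius-norm scaling of T is a slight simplification of the element-wise scheme in step (2)):

```python
import numpy as np

rng = np.random.default_rng(3)

def sym_expm(W):
    """Matrix exponential of a symmetric matrix via eigendecomposition,
    so that expm(W) is always symmetric positive definite."""
    vals, vecs = np.linalg.eigh(W)
    return (vecs * np.exp(vals)) @ vecs.T

def sym_logm(sigma):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(sigma)
    return (vecs * np.log(vals)) @ vecs.T

def hit_and_run_proposal(sigma):
    """Propose a new covariance matrix: move in the unconstrained
    log-matrix space along a random symmetric unit direction T."""
    p = sigma.shape[0]
    Z = rng.standard_normal((p, p))
    T = (Z + Z.T) / 2.0
    T /= np.linalg.norm(T)                 # unit Frobenius norm
    W_star = sym_logm(sigma) + rng.standard_normal() * T
    return sym_expm(W_star)                # back to a positive-definite matrix
```

Working in log-matrix space guarantees every proposal maps back to a valid positive-definite covariance matrix; step (4) then accepts or rejects it by the posterior ratio.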

We thank four reviewers for their constructive comments on the manuscript and Dr. Jim Cheverud for providing his mouse data. This work is partially supported by Joint NSF/NIH grant DMS/NIGMS-0540745 and NNSFC grant 30671704.

- Lynch, M; Walsh, B. Genetics and Analysis of Quantitative Traits; Sinauer Associates: Sunderland, MA, USA, 1998. [Google Scholar]
- Paterson, A.H.; Lander, E.S.; Hewitt, J.D.; Peterson, S.; Lincoln, S.E.; Tanksley, S.D. Resolution of quantitative traits into Mendelian factors by using a complete linkage map of restriction fragment length polymorphisms. Nature
**1988**, 335, 721–726. [Google Scholar] [CrossRef] [PubMed] - Lander, E.S.; Botstein, D. Mapping Mendelian factors underlying quantitative traits using RFLP linkage maps. Genetics
**1989**, 121, 185–199. [Google Scholar] [PubMed] - Zeng, Z.B. Precision mapping of quantitative trait loci. Genetics
**1994**, 136, 1457–1468. [Google Scholar] [PubMed] - Weller, J.I. Quantitative Trait Loci Analysis in Animals; CABI Publishing: London, UK, 2001. [Google Scholar]
- Siegmund, D.; Yakir, B. The Statistics of Gene Mapping; Springer: New York, NY, USA, 2007. [Google Scholar]
- Lin, M.; Li, H.; Hou, W.; Johnson, J.; Wu, R.L. Modeling sequence-sequence interactions for drug response. Bioinformatics
**2007**, 23, 1251–1257. [Google Scholar] [CrossRef] [PubMed] - Ma, C.X.; Casella, G.; Wu, R.L. Functional mapping of quantitative trait loci underlying the character process: A theoretical framework. Genetics
**2002**, 161, 1751–1762. [Google Scholar] [PubMed] - Wu, R.L.; Lin, M. Functional mapping – how to map and study the genetic architecture of dynamic complex traits. Nat. Rev. Genet.
**2006**, 7, 229–237. [Google Scholar] [CrossRef] [PubMed] - West, G.B.; Brown, J.H.; Enquist, B.J. A general model for ontogenetic growth. Nature
**2001**, 413, 628–631. [Google Scholar] [CrossRef] [PubMed] - Dempster, A.P. Elements of Continuous Multivariate Analysis; Addison-Wesley: Reading, MA, USA, 1969. [Google Scholar]
- Meng, X.L.; Rubin, D.B. Maximum likelihood via the ECM algorithm: A general framework. Biometrika
**1993**, 80, 267–278. [Google Scholar] [CrossRef] - Carlin, B.P.; Louis, T.A. Bayes and Empirical Bayes Methods for Data Analysis; Chapman & Hall: New York, NY, USA, 1996. [Google Scholar]
- Satagopan, J.M.; Yandell, B.S.; Newton, M.A.; Osborn, T.C. A Bayesian approach to detect quantitative trait loci using Markov chain Monte Carlo. Genetics
**1996**, 144, 805–816. [Google Scholar] [PubMed] - Sillanpää, M.J.; Arjas, E. Bayesian mapping of multiple quantitative trait loci from incomplete outbred offspring data. Genetics
**1999**, 151, 1605–1619. [Google Scholar] [PubMed] - Xu, S. Estimating polygenic effects using markers of the entire genome. Genetics
**2003**, 163, 789–801. [Google Scholar] [PubMed] - Yi, N.; Yandell, B.S.; Churchill, G.A.; Allison, D.B.; Eisen, E.J.; Pomp, D. Bayesian model selection for genome-wide epistatic quantitative trait loci analysis. Genetics
**2005**, 170, 1333–1344. [Google Scholar] [CrossRef] [PubMed] - Zhang, M.; Montooth, K.L.; Wells, M.T.; Clark, A.G.; Zhang, D. Mapping multiple quantitative trait loci by Bayesian classification. Genetics
**2005**, 169, 2305–2318. [Google Scholar] [CrossRef] [PubMed] - Yang, R.Q.; Xu, S. Bayesian shrinkage analysis of quantitative trait loci for dynamic traits. Genetics
**2007**, 176, 1169–1185. [Google Scholar] [CrossRef] [PubMed] - Zimmerman, D.L.; Núñez-Antón, V. Parametric modeling of growth curve data: An overview (with discussion). Test
**2001**, 10, 1–73. [Google Scholar] [CrossRef] - Diggle, P.J.; Heagerty, P.; Liang, K.Y.; Zeger, S.L. Analysis of Longitudinal Data; Oxford University Press: Oxford, UK, 2002. [Google Scholar]
- Wu, R.L.; Ma, C.M.; Lin, M.; Wang, Z.H.; Casella, G. Functional mapping of growth QTL using a transform-both-sides logistic model. Biometrics
**2004**, 60, 729–738. [Google Scholar] [CrossRef] [PubMed] - Carroll, R.J.; Ruppert, D. Power transformations when fitting theoretical models to data. J. Am. Stat. Assoc.
**1984**, 79, 321–328. [Google Scholar] [CrossRef] - Zhao, W.; Ma, C.M.; Cheverud, J.M.; Wu, R.L. A unifying statistical model for QTL mapping of genotype-sex interaction for developmental trajectories. Physiol. Genomics
**2004**, 19, 218–227. [Google Scholar] [CrossRef] [PubMed] - Evans, I.G. Bayesian estimation of parameters of multivariate normal distribution. J. Roy. Statist. Soc. B
**1965**, 27, 279–283. [Google Scholar] - Geman, S.; Geman, D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell.
**1984**, 6, 721–741. [Google Scholar] [CrossRef] - Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika
**1970**, 57, 97–109. [Google Scholar] [CrossRef] - Tierney, L. Markov chains for exploring posterior distributions. Ann. Stat.
**1994**, 22, 1701–1762. [Google Scholar] [CrossRef] - Fan, J.; Gijbels, I. Local Polynomial Modelling and Its Applications; Chapman & Hall: New York, NY, USA, 1996. [Google Scholar]
- Box, G.E.P.; Tiao, G.C. Bayesian Inference in Statistical Analysis; Wiley Interscience: New York, NY, USA, 1973. [Google Scholar]
- Ritter, C.; Tanner, M.A. Facilitating the Gibbs sampler: The Gibbs stopper and Griddy-Gibbs sampler. J. Am. Stat. Assoc.
**1992**, 87, 861–868. [Google Scholar] [CrossRef] - Geyer, C. Practical Markov chain Monte Carlo. Stat. Sci.
**1992**, 7, 473–483. [Google Scholar] [CrossRef] - Gabriel, K.R. Ante-dependence analysis of an ordered set of variables. Ann. Math. Stat.
**1962**, 33, 201–212. [Google Scholar] [CrossRef] - Jaffrézic, F.; Thompson, R.; Hill, W.R. Structured antedependence models for genetic analysis of repeated measures on multiple quantitative traits. Genet. Res.
**2003**, 82, 55–65. [Google Scholar] [CrossRef] [PubMed] - Stephens, M. Bayesian analysis of mixture models with an unknown number of components–An alternative to reversible jump methods. Ann. Stat.
**2000**, 28, 40–74. [Google Scholar] [CrossRef] - Cheverud, J.M.; Routman, E.J.; Duarte, FA.M.; van Swinderen, B.; Cothran, K.; Perel, C. Quantitative trait loci for murine growth. Genetics
**1996**, 142, 1305–1319. [Google Scholar] [PubMed] - Vaughn, T.T.; Pletscher, L.S.; Peripato, A.; King-Ellison, K.; Adams, E.; Erikson, C.; Cheverud, J.M. Mapping quantitative trait loci for murine growth - A closer look at genetic architecture. Genet. Res.
**1999**, 74, 313–322. [Google Scholar] [CrossRef] [PubMed] - Zhao, W.; Chen, Y.Q.; Casella, G.; Cheverud, J.M.; Wu, R.L. A non-stationary model for functional mapping of longitudinal quantitative traits. Bioinformatics
**2005**, 21, 2469–2477. [Google Scholar] [CrossRef] [PubMed] - Satagopan, J.M.; Yandell, B.S.; Newton, M.A.; Osborn, T.C. A Bayesian approach to detect quantitative trait loci using Markov chain Monte Carlo. Genetics
**1996**, 144, 805–816. [Google Scholar] [PubMed] - Sillanpää, M.J.; Arjas, E. Bayesian mapping of multiple quantitative trait loci from incomplete outbred offspring data. Genetics
**1999**, 151, 1605–1619. [Google Scholar] [PubMed] - Raftery, A.E.; Lewis, S.M. How many iterations in the Gibbs sampler? In Bayesian Statistics 4; Oxford University Press: Oxford, UK, 1992. [Google Scholar]
- Malosetti Zunin, M.; Visser, R.G.F.; Celis Gamboa, B.C.; van Eeuwijk, F.A. QTL methodology for response curves on the basis of non-linear mixed models, with an illustration to senescence in potato. Theor. Appl. Genet.
**2006**, 113, 288–300. [Google Scholar] [CrossRef] [PubMed] - Kao, C.H.; Zeng, Z.B. Modeling epistasis of quantitative trait loci using Cockerham’s model. Genetics
**2002**, 160, 1243–1261. [Google Scholar] [PubMed] - Cui, Y.H.; Wu, R.L. Mapping genome-genome epistasis: A multi-dimensional model. Bioinformatics
**2005**, 21, 2447–2455. [Google Scholar] [CrossRef] [PubMed] - Green, P.J. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika
**1995**, 82, 711–732. [Google Scholar] [CrossRef] - Green, P.J.; Hjort, L.; Richardson, S. Trans-Dimensional Markov Chain Monte Carlo; Oxford University Press: London, UK, 2003. [Google Scholar]
- Brooks, S.P.; Giudici, P. Markov chain Monte Carlo convergence assessment via two-way analysis of variance. J. Comput. Graph. Stat.
**2000**, 9, 266–276. [Google Scholar] - Godsill, S.J. On the relationship between MCMC model uncertainty methods. J. Comput. Graph. Stat.
**2002**, 10, 230–248. [Google Scholar] [CrossRef] - Dempster, A.P. Elements of Continuous Multivariate Analysis; Addison-Wesley: Reading, MA, USA, 1969. [Google Scholar]
- Stein, C. Estimation of a Covariance Matrix; Rietz Lecture: Atlanta, Georgia, 1975. [Google Scholar]
- Wakefield, J. The Bayesian analysis of population pharmacokinetics models. J. Am. Stat. Assoc.
**1996**, 91, 62–75. [Google Scholar] [CrossRef] - Leonard, T.; Hsu, J.S.J. Bayesian inference for a covariance matrix. Ann. Stat.
**1993**, 21, 1–25. [Google Scholar] [CrossRef] - Yang, R.; Berger, J.O. Estimation of a covariance using the reference prior. Ann. Stat.
**1994**, 22, 1195–1211. [Google Scholar] [CrossRef] - Everson, P.J.; Morris, C.N. Inference for multivariate normal hierarchical models. J. Roy. Statist. Soc. B
**2000**, 62, 399–412. [Google Scholar] [CrossRef] - Pourahmadi, M. Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterisation. Biometrika
**1999**, 86, 677–690. [Google Scholar] [CrossRef] - Bayesian Statistics; Bernardo, J.M.; Berger, J.O.; Dawid, A.P.; Smith, A. F.M. (Eds.) Oxford University Press: Oxford, UK, 1992; p. 3560.

© 2009 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).