Model Comparison Sample Clauses

Model Comparison. We compute the marginal data densities (MDDs), a standard measure of fit, to discriminate between various versions of the model. We employ three different methods to compute MDDs. The first is the standard modified harmonic mean (MHM) method of Geweke (1999), who proposes a multivariate normal distribution as the weighting function. Such a function may produce unreasonable inference when approximating a non-Gaussian posterior density, such as the posterior distribution of Markov-switching models. The other two methods employed in this paper, the Sims, Waggoner, and Zha (2008) method and the bridge sampling method developed by Meng and ▇▇▇▇ (1996), overcome this difficulty by proposing new weighting functions. Appendix
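As a rough sketch of how the MHM estimator works in practice, the following assumes posterior draws and the corresponding log posterior kernel values (log-likelihood plus log-prior) are already available from the sampler; the function name `mhm_log_mdd` and the truncation level `tau` are our own illustrative choices, not from the text.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

def mhm_log_mdd(draws, log_post_kernel, tau=0.9):
    """Modified harmonic mean estimate of the log marginal data density.

    draws           : (N, d) array of posterior draws
    log_post_kernel : (N,) array of log p(y|theta_i) + log p(theta_i)
    tau             : probability mass kept by the truncated-normal weight
    """
    n, d = draws.shape
    mu = draws.mean(axis=0)
    sigma = np.atleast_2d(np.cov(draws, rowvar=False))
    sigma_inv = np.linalg.inv(sigma)
    # Mahalanobis distance; keep only draws inside a chi-square ellipsoid
    dev = draws - mu
    maha = np.einsum("ij,jk,ik->i", dev, sigma_inv, dev)
    inside = maha <= stats.chi2.ppf(tau, df=d)
    # Truncated multivariate-normal weighting function f(theta)
    log_f = stats.multivariate_normal(mu, sigma).logpdf(draws) - np.log(tau)
    log_f = np.where(inside, log_f, -np.inf)
    # 1/p(y) ~= (1/N) sum_i f(theta_i)/kernel(theta_i); work in logs
    log_ratios = log_f - log_post_kernel
    return np.log(n) - logsumexp(log_ratios)
```

With a near-Gaussian posterior the weighting function tracks the kernel closely and the estimator is stable; with the non-Gaussian posteriors of Markov-switching models, this is exactly where the ellipsoidal truncation can fail, motivating the alternative weighting functions mentioned above.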
Model Comparison. To assess the importance of accounting for cross-effects and brand-program synergy in our model of television advertising’s impact on online WOM, we compare our proposed model to four alternative models. The deviance information criterion (DIC), a likelihood-based measure that penalizes complex model specifications, and the mean absolute error (MAE) are used to compare our proposed model to these alternatives. Lower DIC and MAE indicate better model fit. We first consider a baseline model (Model 1) that includes intercepts, characteristic variables (Xi.), brand-specific effects to control for brand unobservables that can impact brand WOM (αb,1), and program-specific effects to control for program unobservables that can impact program WOM (γp,2). We then evaluate how accounting for cross-effects impacts model fit. We build upon Model 1 to assess model fit when only brand cross-effects are accounted for (Model 2) or only program cross-effects are accounted for (Model 3). Finally, we consider a model in which BPSynergybp is withheld from equation (3) (Model 4). Adding BPSynergybp to Model 4 gives us our proposed model (Model 5). The DIC and MAE estimates in Table 3 establish that including cross-effects in our model of television advertising’s impact on online WOM improves model fit. We also find that incorporating brand-program synergy into Model 5 improves overall model fit. As Model 5 is our best-fitting model, we focus our discussion on the results from this model estimation.
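The DIC-and-MAE comparison described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes per-model posterior deviance draws, the deviance at the posterior mean, and posterior predictive means are available from the MCMC output, and all numbers below are synthetic stand-ins.

```python
import numpy as np

def dic(deviance_draws, deviance_at_post_mean):
    """DIC = D_bar + pD, where D_bar is the posterior mean deviance and
    pD = D_bar - D(theta_bar) penalizes model complexity."""
    d_bar = float(np.mean(deviance_draws))
    p_d = d_bar - deviance_at_post_mean
    return d_bar + p_d

def mae(y, y_pred):
    """Mean absolute error of the posterior predictive mean."""
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(y_pred))))

rng = np.random.default_rng(1)
y = rng.normal(size=200)  # observed WOM outcomes (synthetic)

# Synthetic stand-ins for MCMC output from two of the candidate models:
# (deviance draws, deviance at posterior mean, posterior predictive mean)
fits = {
    "Model 1 (baseline)":     (rng.normal(1320.0, 3.0, 2000), 1312.0,
                               y + rng.normal(0, 0.9, y.size)),
    "Model 5 (with synergy)": (rng.normal(1290.0, 3.0, 2000), 1278.0,
                               y + rng.normal(0, 0.5, y.size)),
}
for name, (dev_draws, dev_hat, y_pred) in fits.items():
    print(f"{name}: DIC={dic(dev_draws, dev_hat):8.1f}  MAE={mae(y, y_pred):.3f}")
```

The model with the lowest DIC and MAE would be retained, mirroring the selection of Model 5 in the text.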
Model Comparison. To compare different models in fitting cancer incidence and mortality data, I use the deviance information criterion (DIC) [▇▇▇▇▇▇▇▇▇▇▇▇▇ et al., 2002], which is the posterior average of the deviance plus a measure of complexity. The DIC is the sum of two statistics, D̄ and pD, where D̄ is the posterior mean deviance, computed from the distribution of the posterior deviance D(λiap), and pD is the effective number of parameters, used to penalize increasing model complexity [▇▇▇▇▇▇ and ▇▇▇▇, 2004]. The posterior deviance is computed as

D(λij) = −2 Σ (l(λij) − l(λ̂ij)).

[Figure 4.1: Convergence plots for the posterior estimate τ]
[Figure 4.2: Convergence plots for the posterior estimates of the APC effects]

The effective number of parameters is approximately 5. The DIC value is almost the same when the TSCE carcinogenesis model is incorporated into the Bayesian extended APC model. For both Bayesian extended APC models, autoregressive priors are chosen for the period and cohort effects, and the model constraints illustrated before are added. Compared with the DIC value (DIC = 1286.43) derived from the conventional Bayesian APC model, where all age, period and cohort effects take autoregressive priors, we do not see much difference in the DIC values between these two models. However, the age effect in the extended BAPC model has a sounder biological meaning, since we replace it with the hazard function from the carcinogenesis model. To consider alternative prior settings for the TSCE model parameters, three different uniform priors for the model parameter p were chosen.

Table 4.3: Model comparisons using DIC
Extended BAPC Model | DIC     | Parameter | Posterior Estimate | 95% HPD
Armitage-Doll       | 1287.0  | s         | 4.867              | [4.237, 5.5]
TSCE                | 1286.98 | p         | -0.1495            | [-0.1984, -0.05376]
                    |         | q         | 4.9 × 10^-4        | [2.7 × 10^-4, 9.8 × 10^-4]
                    |         | r         | 7.5 × 10^-6        | [2.6 × 10^-6, 9.9 × 10^-6]

The results show that the posterior inference varied when we changed the uniform intervals in the prior setting. From the biological perspective, we chose the first prior setting, where p is uniformly distributed between -0.2 and 0.
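As an illustration of the posterior deviance that enters the DIC above, here is a minimal sketch assuming a Poisson likelihood for the incidence counts, as is standard in APC modeling; the helper name `poisson_deviance` is ours, and the saturated fit plugs the observed counts in for the rates.

```python
import numpy as np
from scipy import stats

def poisson_deviance(counts, rates, exposure=1.0):
    """Saturated-model deviance D = -2 * (l(lambda) - l(lambda_hat)),
    where lambda_hat is the saturated fit (observed counts / exposure).
    Evaluated at each MCMC draw of the rates, this gives the posterior
    deviance draws from which D_bar, pD, and the DIC are computed."""
    counts = np.asarray(counts, dtype=float)
    mu = np.asarray(rates, dtype=float) * exposure
    ll_model = stats.poisson.logpmf(counts, mu).sum()
    # Saturated log-likelihood: expected counts equal observed counts
    ll_sat = stats.poisson.logpmf(counts, np.maximum(counts, 1e-12)).sum()
    return -2.0 * (ll_model - ll_sat)
```

A perfect fit gives zero deviance, and any departure of the fitted rates from the observed counts increases it; averaging this quantity over posterior draws yields the D̄ used in Table 4.3.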