MRD Model Learning and Results

The MRD only gets the three simulated datasets Yi as inputs; everything else is deduced by the model, making use of the model structure. We learn the model allowing for q = 4 latent dimensions, X ∈ R^(N×4), to show that one latent dimension is learnt as non-informative. Figure 2.8 summarizes the results of MRD learning. On the left side of the plot we show, as lines and shaded areas, the mean M⋅i and 95% confidence interval 2√S⋅i of the learnt latent space q(X) = N(M, S) for each dimension i. The confidence intervals for the first three learnt dimensions are too small for the shaded areas to be visible. The samples are plotted from left to right for visibility. The three input signals are learnt with high confidence. Additionally, the last learnt dimension has mean M⋅4 = 0 and variance S⋅4 = 1, which indicates a non-informative dimension.

Figure 2.8: MRD simulation with three different subsets of observed data. We plot the mean M⋅i as a thick line and the 95% confidence interval 2√S⋅i as a shaded area of the learnt latent space q(X) = N(M, S) for each dimension i on the left. On the right, we plot the ARD parameters (heights of the grey vertical bars) corresponding to the latent dimensions X⋅q (rows), respectively for each dataset Yi.

We can see that the MRD model conforms to the simulation shown in Figure 2.7. Note also the non-informative last dimension (last row). This part could be learnt using a simple GPLVM on the concatenation of all datasets across dimensions (or even PCA, as we simulated a linear relationship in the data). MRD, however, gives us additional insight on top of the recovered input signals: it tells us which dimensions come from which dataset. We recover this information through the ARD parameters of each covariance function ki for each dataset Yi.
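The "non-informative" reading of M⋅4 = 0, S⋅4 = 1 can be made concrete: a variational posterior q(X⋅i) that collapses onto the N(0, 1) prior has zero KL divergence from it and therefore carries no information about the data. The following is a minimal sketch, with hypothetical values for M and S standing in for the learnt q(X) (the mean trajectories and the thresholds are illustrative assumptions, not values from the experiment):

```python
import numpy as np

def kl_to_prior(mean, var):
    """Sum over samples of KL( N(mean, var) || N(0, 1) ), per latent dimension.

    A dimension whose posterior collapses to the N(0, 1) prior
    (mean 0, variance 1) contributes zero KL: it is 'switched off'.
    """
    return 0.5 * np.sum(var + mean**2 - 1.0 - np.log(var), axis=0)

# Hypothetical learnt q(X) = N(M, S) for N = 100 samples, q = 4 dimensions:
# three confident signal dimensions (tiny variance) and one prior-like one.
t = np.linspace(0, 4 * np.pi, 100)
M = np.column_stack([np.sin(t), np.cos(t), np.sin(2 * t), np.zeros_like(t)])
S = np.column_stack([np.full_like(t, 1e-4)] * 3 + [np.ones_like(t)])

kl = kl_to_prior(M, S)
informative = kl > 1.0  # crude threshold, for illustration only
```

Here `informative` flags the first three dimensions and rejects the fourth, whose KL is exactly zero.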
Each covariance function supplies one ARD parameter per dimension of the latent space. Here q = 4, so we have 3 × 4 ARD parameters. To show the dependence structure of the datasets on the learnt latent spaces, we plot these ARD parameters to the right of the dimensions (Fig. 2.8). Thus, each dimension gets 3 ARD parameter bars, indicating which datasets include the signal. An ARD parameter indicates a "switched on" signal by a visible bar and a "switched off" signal by a flat line. For example, we can deduce that the input signal corresp...
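Reading the bar plot off programmatically amounts to thresholding the 3 × 4 ARD weight matrix. This is a sketch with made-up weights (the particular values and the sharing pattern are hypothetical, chosen only to illustrate the "switched on/off" logic, not taken from Figure 2.8):

```python
import numpy as np

# Hypothetical ARD weights, one per covariance function k_i (rows: Y1..Y3)
# and per latent dimension (columns: 1..4). Values are illustrative only.
ard = np.array([
    [5.2, 0.0, 4.8, 0.0],   # Y1: dims 1 and 3 switched on
    [5.0, 4.9, 0.0, 0.0],   # Y2: dims 1 and 2 switched on
    [0.0, 5.1, 4.7, 0.0],   # Y3: dims 2 and 3 switched on
])

# A visible bar vs. a flat line: weight above a small fraction of the maximum.
switched_on = ard > 1e-2 * ard.max()

for j in range(ard.shape[1]):
    users = [f"Y{i + 1}" for i in range(ard.shape[0]) if switched_on[i, j]]
    print(f"dimension {j + 1}: {users or 'non-informative'}")
```

A column with no bar in any row (dimension 4 here) is the non-informative dimension; columns with bars in several rows mark latent signals shared between those datasets.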